Integrative Taxonomy of Novel Diaporthe Species Associated with Medicinal Plants in Thailand
During our investigations of the microfungi on medicinal plants in Thailand, five isolates of Diaporthe were obtained. These isolates were identified and described using a multiproxy approach, viz. morphology, cultural characteristics, host association, multilocus phylogeny of ITS, tef1-α, tub2, cal, and his3, and DNA comparisons. Five new species, Diaporthe afzeliae, D. bombacis, D. careyae, D. globoostiolata, and D. samaneae, are introduced as saprobes from the plant hosts, viz. Afzelia xylocarpa, Bombax ceiba, Careya sphaerica, a member of Fagaceae, and Samanea saman. Interestingly, this is the first report of Diaporthe species on these plants, except on the Fagaceae member. Morphological comparison, an updated molecular phylogeny, and pairwise homoplasy index (PHI) analysis strongly support the establishment of the novel species. Our phylogeny also revealed a close relationship between D. zhaoqingensis and D. chiangmaiensis; however, evidence from the PHI test and DNA comparison indicated that they are distinct species. These findings improve the existing knowledge of the taxonomy and host diversity of Diaporthe species, and highlight the untapped potential of these medicinal plants for the discovery of new fungi.
Introduction
Medicinal plants are essential for sustaining human health and livelihoods, owing to their ethnobotanical uses and therapeutic purposes [1,2]. They also contribute to maintaining biodiversity in forest ecosystems and supporting natural recreation in urban ecosystems [1,2]. Fungi are commonly encountered on medicinal plants, where they can affect their hosts in both beneficial and harmful ways [2][3][4]. As pathogens, they impair plant health and productivity [4], whereas, as endophytes, they promote plant growth and produce a diverse array of secondary metabolites, which have been exploited for the development of new drugs and pharmaceutical products [2,3]. Thus, fungi associated with medicinal plants represent a significant repository for estimating fungal diversity, discovering novel fungi and fungal-plant interactions, and bioprospecting new bioactive compounds for biotechnological applications [5][6][7][8][9][10][11][12].
Taxonomic studies of Diaporthe have revealed a variety of medicinal plants as hosts [38]. However, most of these studies have been conducted in temperate zones (e.g., [15-17,21,24,26,28]). Knowledge of Diaporthe associated with medicinal plants in the tropics is still limited [31,32]. Therefore, this study aims to identify and describe isolates of Diaporthe associated with several medicinal plants in Thailand using both morphological and molecular analyses. To better illustrate the placements of the five new species, their morphological descriptions, micrographs, and updated phylogenetic trees are presented and discussed.
Sample Collection and Morphological Examination
Fresh fungal specimens were collected from the dead leaves and woody twigs of various medicinal plants in urban parks and forest areas in the Chiang Mai and Tak provinces of Thailand in 2019 and 2022. Collected samples were examined for macro- and micro-morphological structures using a Nikon SMZ800N stereo microscope (Nikon Instruments Inc., Melville, NY, USA) and photomicrographed with a Nikon Eclipse Ni compound microscope attached to a Nikon DS-Ri2 camera system (Nikon Instruments Inc., Melville, NY, USA). Measurements of each structure (i.e., conidiomata, conidiomatal walls, conidiophores, conidiogenous cells, and conidia) were taken using the Tarosoft® Image Frame Work program. All figures were modified using Adobe Photoshop CS6 Extended version 10.0 software (Adobe Systems, San Jose, CA, USA).
Fungal Isolation and Preservation
Pure cultures were obtained by single spore isolation on 2% water agar (WA), and germinated conidia were aseptically transferred to potato dextrose agar (PDA) [39]. Fungal cultures were incubated at 25 °C for four to six weeks and then examined for colony morphology and spore production. Herbarium material and a pure culture of Diaporthe globoostiolata were deposited in the herbarium of Mae Fah Luang University (MFLU).
Phylogenetic Analyses
The sequences obtained in this study were subjected to a BLASTn search in GenBank (www.ncbi.nlm.nih.gov/blast/, accessed on 1 March 2023) to determine the most similar taxa. An initial phylogenetic analysis was conducted based on the ITS sequence dataset from Norphanphoun et al. [32] to identify the placement of our isolates within species complexes. The newly generated sequences and their related sequences were then selected for the concatenated ITS, tef1-α, tub2, cal, and his3 sequence dataset based on the BLASTn search results and updated literature [18,22,32,46-48] (Table 1). Each single-locus dataset was aligned using MAFFT v.7 (http://mafft.cbrc.jp/alignment/server/index.html, accessed on 1 March 2023) [49], and ambiguous sites were manually adjusted using BioEdit 7.1.3.0 [50]. Phylogenetic trees of the single-locus and combined datasets were analyzed using maximum likelihood (ML) and Bayesian inference (BI) criteria. Tree topologies from the single-locus analyses were compared, and no conflicts were found.
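As an illustration of the concatenation (supermatrix) step, below is a minimal Python sketch that merges per-locus FASTA alignments into a five-locus dataset; the file names, and the assumption that taxon labels match across files, are hypothetical.

```python
# Minimal sketch: build a concatenated supermatrix from per-locus FASTA
# alignments. File names are hypothetical placeholders.

def read_fasta(path):
    """Parse a FASTA alignment into {taxon: aligned_sequence}."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

loci = ["ITS.fas", "tef1.fas", "tub2.fas", "cal.fas", "his3.fas"]
alignments = [read_fasta(f) for f in loci]

# Keep only taxa present in every locus alignment.
taxa = set.intersection(*(set(a) for a in alignments))

concatenated = {t: "".join(a[t] for a in alignments) for t in sorted(taxa)}
with open("combined_5loci.fas", "w") as out:
    for taxon, seq in concatenated.items():
        out.write(f">{taxon}\n{seq}\n")
```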
[Table 1 lists taxa names with their culture collection and GenBank accession numbers.]

Maximum likelihood analysis was performed using RAxML [52][53][54] in the CIPRES Science Platform V3.3 (https://www.phylo.org/portal2/home.action, accessed on 1 March 2023) [55]. The GTRGAMMA model with 1000 bootstrap iterations was set as the parameter for the ML analysis [51]. The best nucleotide substitution model was determined using MrModeltest v.2.3 [56], and GTR + I + G was selected as the best-fitting model for the ITS, tef1-α, tub2, cal, and his3 datasets. For the BI analysis, six simultaneous Markov chains were run for 10,000,000 generations with a sampling frequency of 100 generations. The burn-in fraction was set to 0.25, and posterior probabilities (PP) were evaluated from the remaining trees. The phylogenetic trees resulting from the ML and BI analyses were visualized in FigTree v1.4.0 [57] and adjusted using Adobe Photoshop CS6 software (Adobe Systems, San Jose, CA, USA). Newly generated sequences were deposited in GenBank.
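For readers wanting to reproduce the BI settings, a minimal sketch that writes a MrBayes command block mirroring the parameters stated above (GTR + I + G, six chains, 10,000,000 generations, sampling every 100 generations, 25% burn-in); the file names are hypothetical.

```python
# Sketch: emit a MrBayes block matching the BI settings described above.
# "mrbayes_block.nex" is a hypothetical output file name; the block would
# be appended to the NEXUS alignment before execution.
mrbayes_block = """begin mrbayes;
    lset nst=6 rates=invgamma;                     [GTR + I + G]
    mcmc ngen=10000000 nchains=6 samplefreq=100;   [6 chains, 10M generations]
    sump relburnin=yes burninfrac=0.25;
    sumt relburnin=yes burninfrac=0.25;
end;
"""

with open("mrbayes_block.nex", "w") as fh:
    fh.write(mrbayes_block)
```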
Genealogical Concordance Phylogenetic Species Recognition Analysis
The recombination level between each new species and its most closely related taxa was examined using the Genealogical Concordance Phylogenetic Species Recognition (GCPSR) model [58,59]. A pairwise homoplasy index (PHI) test was implemented in SplitsTree4 using the LogDet transformation and split decomposition options [60,61]. A PHI test result (Φw) above 0.05 indicates no significant recombination in the dataset. In addition, split graphs were generated to visualize the relationships between closely related species.
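The decision rule applied here can be sketched in a few lines; the p-values below are hypothetical placeholders, not the study's results.

```python
# Sketch of the GCPSR decision rule applied to PHI test results.
# The Phi_w p-values are hypothetical placeholders.
phi_results = {
    ("D. afzeliae", "closest relative"): 0.42,
    ("D. zhaoqingensis", "D. chiangmaiensis"): 0.31,
}

for (taxon_a, taxon_b), p_value in phi_results.items():
    verdict = ("no significant recombination -> supports distinct species"
               if p_value > 0.05 else
               "significant recombination -> possibly conspecific")
    print(f"{taxon_a} vs {taxon_b}: Phi_w p = {p_value:.2f} ({verdict})")
```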
Genealogical Concordance Phylogenetic Species Recognition Analysis
In the PHI analysis, there was no evidence of significant recombination (Φw > 0.05) between each new species (Diaporthe afzeliae, D. bombacis, D. globoostiolata, and D. samaneae) and their closely related taxa in the combined ITS, tef1-α, tub2, cal, and his3 sequence dataset (Figure 2a-d). The results of PHI analysis also revealed no significant recombination (Φw > 0.05) between D. zhaoqingensis and D. chiangmaiensis (Figure 2e). This evidence confirms that they are distinct species.
Culture characteristics: Colonies on PDA reached 5 cm diam. after 10 days at 25 °C, effuse, fluffy, lobate at the margin, initially white, with the mycelium becoming yellowish to pale brown with age, yellowish to pale brown in reverse, with numerous black dots developing as the fruiting bodies (conidial production not seen).
Culture characteristics: Colonies on PDA reached 9 cm diam. after 10 days at 25 °C, effuse, with sparse hyphae and a filiform margin, initially white, with the mycelium becoming pale yellowish with age, yellowish to pale brown in reverse, with numerous black dots developing as the fruiting bodies (conidial production not seen).
Discussion
This study describes five novel species of Diaporthe from Thailand. Alongside the phenotypic traits, phylogenetic and PHI analyses based on the combined sequence datasets of ITS, tef1-α, tub2, cal, and his3 were applied to delimit the novel species. In particular, tub2, cal, and his3 have high discriminatory power for distinguishing species in Diaporthe, consistent with the results of other studies [15,18,22,35-37].
Our study also provides better insight into the phylogenetic relationships within Diaporthe, especially in the D. arecae species complex. Diaporthe zhaoqingensis and D. chiangmaiensis clustered together in the same clade (98% ML, 1.00 PP) but were not well separated (Figure 1). Therefore, we compared the base pair differences between the type strains of D. zhaoqingensis ZHKUCC 22-0056 and D. chiangmaiensis MFLUCC 18-0544. There are 1.38% base pair differences in the ITS (7/508 bp) between these ex-type strains. In the tef1-α gene region, there are 0.33% base pair differences (1/300 bp) between the type strains of D. chiangmaiensis MFLUCC 18-0544 and D. zhaoqingensis ZHKUCC 22-0057. There are 4.94% base pair differences (19/385 bp) in the tub2 gene region between D. chiangmaiensis MFLUCC 21-0212 and the type strain of D. zhaoqingensis ZHKUCC 22-0056. However, some genes from the type strains were not available for comparison. The PHI test result also showed that D. zhaoqingensis and D. chiangmaiensis are not conspecific, indicating that they are different species (Figure 2e). Diaporthe zhaoqingensis was isolated as an endophyte of Morinda officinalis [18], and D. chiangmaiensis was isolated from Magnolia lilifera as an endophyte and saprobe [47]. However, the morphological characteristics of these two species could not be compared, as only gamma conidia were observed in D. zhaoqingensis while alpha conidia were observed in D. chiangmaiensis [18,47]. Therefore, more sequence data, such as the tub2, cal, and his3 of the type strain of D. chiangmaiensis, are needed to resolve their taxonomic placements and confirm whether they are distinct species.
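These percentages follow from a simple count over aligned positions; a minimal sketch of the calculation, with toy sequences constructed to reproduce the 7/508 case (gap and ambiguous positions excluded):

```python
# Sketch: percent base-pair difference between two aligned sequences,
# counting only positions where both taxa have an unambiguous base.
def pairwise_difference(seq_a, seq_b):
    compared = diffs = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a in "ACGT" and b in "ACGT":   # skip gaps/ambiguities
            compared += 1
            diffs += a != b
    return diffs, compared, 100.0 * diffs / compared

# Toy alignment with exactly 7 mismatches over 508 compared sites.
seq_a = "A" * 508
seq_b = "A" * 501 + "C" * 7
d, n, pct = pairwise_difference(seq_a, seq_b)
print(f"{d}/{n} bp = {pct:.2f}%")   # 7/508 bp = 1.38%
```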
Furthermore, the new species D. careyae was shown to be distinct from other Diaporthe species based on its morphology and phylogeny. The conidia of D. careyae were 0-1(-2)-septate, whereas aseptate conidia are typical of Diaporthe. Conidial septation has been reported in some Diaporthe species (e.g., D. foeniculina and D. saccarata) [17,68]; however, their phylogenetic placements are not close to D. careyae. It is noteworthy that some singleton species have not been grouped into any species complex, and their taxonomic positions remain unclear [32]. In addition, most species of Diaporthe lack sequence data and have incomplete morphological descriptions [31,32]; therefore, further extensive sampling is needed to unravel the taxonomic circumscription of this genus.
The newly introduced species of Diaporthe were associated with different medicinal plants, comprising D. afzeliae on Afzelia xylocarpa, D. bombacis on Bombax ceiba, D. careyae on Careya sphaerica, and D. samaneae on Samanea saman. These plant species have been used as traditional medicines in tropical countries, including Thailand, and have been reported to contain various phytochemicals with pharmacological activities [69][70][71][72][73][74][75]. To the best of our knowledge, no Diaporthe species has previously been isolated from these host genera, making this the first report of such host associations [38]. Moreover, a new species, D. globoostiolata, was found on a member of Fagaceae. Some plant genera in Fagaceae, such as Castanopsis, Quercus, and Lithocarpus, have also been reported for their medicinal usage and pharmacological properties [76][77][78][79]. Furthermore, more than 30 Diaporthe species have been recorded from the host family Fagaceae [38]. This study reflects the high genetic diversity and phenotypic variation within Diaporthe and expands our understanding of the diversity and host relationships of Diaporthe species associated with medicinal plants in tropical regions. However, future studies are necessary to investigate disease symptoms and evaluate the pathogenicity of these Diaporthe isolates, as these are important for tree health assessment and management.

Data Availability Statement: All sequences generated in this study were submitted to GenBank (https://www.ncbi.nlm.nih.gov, accessed on 1 April 2023).
Swinepox Virus Strains Isolated from Domestic Pigs and Wild Boar in Germany Display Altered Coding Capacity in the Terminal Genome Region Encoding for Species-Specific Genes
Swinepox virus (SWPV) is a globally distributed swine pathogen that causes sporadic cases of an acute poxvirus infection in domesticated pigs, characterized by the development of a pathognomonic proliferative dermatitis and secondary ulcerations. More severe disease, with higher levels of morbidity and mortality, is observed in congenitally SWPV-infected neonatal piglets. In this study, we investigated the evolutionary origins of SWPV strains isolated from domestic pigs and wild boar. Analysis of whole-genome sequences of SWPV showed that at least two different virus strains are currently circulating in Germany. These were more closely related to a previously characterized North American SWPV strain than to a more recent Indian SWPV strain, and showed variation in the SWPV-specific genome region. A single nucleotide deletion in the wild boar (wb) SWPV strain leads to the fusion of the SPV019 and SPV020 open reading frames (ORFs) and encodes a new hypothetical 113 aa protein (SPVwb020-019). In addition, the domestic pig (dp) SWPV genome contained a novel ORF downstream of SPVdp020, which encodes a new hypothetical 71 aa protein (SPVdp020a). In summary, we show that SWPV strains with altered coding capacity in the SWPV-specific genome region are circulating in domestic pig and wild boar populations in Germany.
Introduction
Swinepox virus (SWPV) is the only member of the genus Suipoxvirus, which belongs to the subfamily Chordopoxvirinae, within the family Poxviridae. This virus contains a linear double-stranded DNA genome of 146 kbp and is the etiological agent of an eruptive dermatitis in pigs, known as swinepox. Swinepox was first described as a disease of domestic pigs in Europe in 1842 [1] and in the USA in 1929 [2] but is now known to have a worldwide distribution and is endemic in many areas of Africa [3], Australia [4], North America [5], South America [6,7] and Asia [8,9]. For a number of decades, Vaccinia virus (VACV) was the etiological agent of a similar disease in domestic pigs with distinctive pustular lesions [10]. However, the incubation period of VACV infection was shorter than that of SWPV infection with lesions that were in general smaller in size [11,12]. Although VACV-related viruses continue to circulate in some countries such as Brazil and cause occasional disease in wildlife and domestic animals [13][14][15], VACV is not endemic in Western Europe and can be readily excluded as a causative agent by SWPV-specific molecular diagnostic assays [14,16].
Clinical studies, experimental infection of laboratory animals and in vitro infection of cell lines originating from different species have shown that SWPV displays a high degree of host specificity, with infections restricted to domestic pigs [17] and a single case reported in a wild boar [18]. Experimental infection of other species with SWPV, including rats, mice and rabbits, has been unsuccessful with respect to induction of viremia or skin lesions pathognomonic of swinepox [19]. Similarly, infection of nonporcine mammalian or avian cell lines failed to produce viral particles [20]. Piglets under three months of age are most commonly infected and display more severe clinical signs of infection [17,21]. The macroscopic manifestations of swinepox have been described as a multifocal, eruptive dermatitis with cutaneous lesions commonly observed on the abdomen, inner surface of the legs, pinnae and sporadically on the face [3,4,22]. Clinical lesions are restricted to the skin and, less frequently, mild changes in the superficial lymph nodes have been reported [10,23]. Histological examination of tissues from swinepox-infected pigs has shown that virus replication occurs in keratinocytes of the epidermal stratum spinosum [5,12,24]. The combination of a strongly restricted host range and genetic stability has aroused interest in SWPV as a vaccine expression vector for the immunization of swine [25][26][27].
In addition to direct contact between infected and susceptible animals, a mechanical route of transmission by insect vectors such as the swine louse (Haematopinus suis) has been shown to facilitate the spread of the virus between populations [28]. Early experiments investigated the role of Haematopinus suis in swinepox infections and demonstrated its function as a mechanical vector but not as an intermediate host [29]. Additionally, the clinical appearance of morphological and histological signs of congenital SWPV infection, and thereby the possibility of vertical transmission of SWPV in naturally occurring infections, has been described [30,31]. Morbidity of SWPV infection can be high in piglets of an infected litter, but mortality is often low. This has resulted in a paucity of research into this virus, since it is primarily connected to poor sanitation and plays no major economic role in modern agriculture in developed countries [32].
SWPV shares genomic similarities with virus species from other poxviral genera such as hairpin-shaped inverted terminal repetitions [33] and genes that modulate antiviral immune responses [34]. However, little is known about the strain diversity of SWPV due to the limited numbers of complete genome sequences of SWPV. Three open reading frames (ORFs) are unique to the SWPV genome (SPV018, SPV019, SPV020) and encode proteins containing 63-73 amino acids which have been postulated to play a key role in the pathogenesis of swinepox [35]. In this study, we have characterized SWPV strains isolated from German domestic pigs and wild boar to investigate genome sequence diversity and draw conclusions on horizontal and vertical transmission routes and possible reservoirs.
Pathomorphological, Histological and Ultrastructural Examination
Tissue samples from domestic pigs (Sus scrofa domesticus) were collected from recurrent sporadic cases of swinepox that occurred in piglets on two conventional pig farms in Westphalia, Germany between July 2019 and January 2020. A complete post-mortem examination was performed on three piglets with suspected poxvirus skin lesions, and tissue samples (skin, tongue, esophagus, lung, heart, liver and intestine) were fixed in 4% buffered formaldehyde and routinely processed for histopathological examination by embedding in paraffin wax according to standard procedures, prior to haematoxylin-eosin (HE) staining of microtome-cut tissue sections. A formalin-fixed block containing skin tissue from a previously described swinepox case in a domestic pig in 2008 was also used for retrospective SWPV genome analysis [36]. Native tissue samples were also harvested and preserved frozen for virus culture and molecular biological investigation. Additional frozen skin samples were obtained from two SWPV-infected juvenile wild boar (Sus scrofa) found in Baden-Württemberg with an eruptive dermatitis in October 2019. Ultrastructural examination was performed using a standardized diagnostic electron microscopy protocol with negative staining technique according to the recommendations of the Robert Koch Institute, Germany. Briefly, the unfixed skin lesions were scarified and placed against formvar-filmed electron microscopy copper grids to adsorb virus particles. These were negatively contrasted with 3% (w/v) phosphotungstic acid for 30 s before ultrastructural investigation was performed using an LEO0906 electron microscope (Carl Zeiss, Oberkochen, Germany). Samples were initially examined at 50,000× magnification.
Cells and Viruses
Porcine embryonic kidney cell line SPEV (cell line 0008, Friedrich Loeffler Institute, Germany), porcine lymphoma cell line 38A1D [37], porcine kidney cell lines SK6 [38], PK-15 and the PK-15 derivative Riebe 5-1 (Friedrich Loeffler Institute, Germany), and the porcine testis epithelial cell line Riebe 255 (Friedrich Loeffler Institute, Germany) were maintained in Eagle's minimal essential medium (EMEM) (Thermo Fisher Scientific, Waltham, MA, USA) containing 10% fetal bovine serum (FBS) (Thermo Fisher Scientific), except for PK-15 cells, for which 7.5% FBS was used. PK-15 cell lines were confirmed to be free of porcine circovirus type 1 and 2 contamination using specific primer pairs in a PCR performed on DNA extracted from the cells [39,40]. Virus isolation was performed using skin samples containing lesions from a domestic piglet (Host ID 201-20046) or wild boar piglet (Host ID 201-20070) with suspected SWPV infection. Tissue samples were homogenized with PBS and centrifuged at 12,000× g for 5 min at 4 °C. The clarified supernatant was used to inoculate PK-15 cells for 90 min at 37 °C. Following infection, the medium was changed to EMEM containing 10% FBS and 1% penicillin, streptomycin and amphotericin, and infected cell monolayers were passaged every three days until cytopathic effects (CPE) were observed.
Detection of SWPV-Specific Sequences Using PCR and qPCR
A SWPV-specific real-time quantitative PCR (qPCR) was performed to confirm SWPV infection in clinical samples using probe and primer pair sequences targeting the C20L-C1L region [41]. The oligonucleotide sequences used were 5′-TAATCCGGGCATCAATCCTC-3′ (forward primer), 5′-GCTGATTGGGCCAGAAAATG-3′ (reverse primer) and 5′-FAM-TTCCCTCCACAGCTGCAAATGCTACT-TAMRA-3′ (probe). All available samples from SWPV-infected animals or cell cultures were homogenized, centrifuged, and the clarified supernatants were collected. DNA was extracted from frozen tissue and a single formalin-fixed tissue block using a QIAmp DNA Mini kit and a QIAamp DNA FFPE Tissue Kit (Qiagen, Hilden, Germany), respectively, according to the manufacturer's instructions. Amplification of SWPV-specific sequences via qPCR was performed using 45 cycles and an annealing temperature of 53 °C, following the recommended protocol of the Luna Probe One-Step qPCR kit (NEB, Ipswich, MA, USA). A specific 238 bp region (12,989-13,227 bp) of the SPV020 ORF was amplified by PCR using the primer pair 5′-GAAGATATTGACACTGTATCCATAC-3′ and 5′-GAGCACTACATTTCATTTC-3′ with the Q5® High-Fidelity PCR kit (NEB). For Sanger sequencing of the ORF SPV006 and SPV136 coding sequences, two primer pairs were used: 5′-TGAACGGAATCTGAAATACGA-3′ and 5′-AAATATCTCATACAATCATTATACTTAC-3′ for SPV006, and 5′-ATTACAGGAAAGATTGGCGTA-3′ and 5′-TAATTTCCAAGACCTTCGCTT-3′ for SPV136 [9].
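As a hedged illustration of checking the 238 bp SPV020 amplicon in silico, the sketch below locates the primer pair in a genome string and reports the product length; the "genome" is a synthetic placeholder, not the real SWPV sequence.

```python
# Sketch: locate a primer pair in a genome sequence and report amplicon length.
# The genome below is a synthetic placeholder built around the real primers.
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

fwd = "GAAGATATTGACACTGTATCCATAC"   # 25 nt forward primer
rev = "GAGCACTACATTTCATTTC"         # 19 nt reverse primer

# 25 + 194 + 19 = 238 bp product, matching the region described above.
genome = "N" * 100 + fwd + "N" * 194 + revcomp(rev) + "N" * 100

start = genome.find(fwd)
end = genome.find(revcomp(rev)) + len(rev)
print(f"amplicon spans {start}-{end} (length {end - start} bp)")
```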
Permissivity of Different Porcine Cell Lines to SWPV Infection
The isolated virus strains from domestic pigs and wild boar were used to test the permissivity of porcine-derived cell lines. Monolayers of the kidney-derived cell lines (SPEV, SK6, and two PK-15 subclones), the 38A1D lymphoma cell line and the porcine testis epithelial cell line Riebe-255 were infected with the two SWPV strains for 2 h at 37 °C. The cell cultures were maintained in EMEM supplemented with 10% FBS and 1% penicillin/streptomycin and monitored for the development of cytopathic effects. Following infection, supernatant samples were taken at days 0, 3, 6 and 10 post-infection, and the amount of viral DNA was quantified by SWPV-specific qPCR with three replicates.
Whole Genome Sequencing and Analysis
Next generation sequencing (NGS) was used to obtain the complete genomes of the SWPV strains isolated from the domestic pig (Host ID 201-20046; Table 1) and wild boar (Host ID 201-20070; Table 1). DNA library preparation was performed with an Illumina Nextera TruSeq Library Preparation Kit (Illumina, Inc., San Diego, CA, USA), followed by sequencing on a NextSeq 550 sequencer to obtain 2 × 10 million reads (150 bp, paired-end) per sample. Full-genome sequences were assembled and analyzed using QIAGEN CLC Genomics Workbench (v12). Annotation of ORFs was performed using Geneious Prime (Biomatters, Ltd., Auckland, New Zealand).
Phylogenetic Analysis
The phylogenetic relationships of different SWPV strains were investigated by performing a multiple sequence alignment, using MAFFT version 7 [42], of the two newly generated full-genome sequences (SWPV/domestic/GER/2019, GenBank accession no. MZ773481, and SWPV/wildboar/GER/2019, GenBank accession no. MZ773480) and the two existing SWPV genomes deposited in GenBank (accession nos. NC_003389.1 and MW036632). The maximum likelihood method was applied using MEGA X [43] with 1000 bootstraps, and the Tamura-Nei model [44] was used to perform the phylogenetic analysis based on the whole-genome sequences. Branch lengths drawn to scale on the phylogenetic tree represent the number of substitutions per site. The phylogenetic trees for the evolutionary investigation of the nucleotide similarity of the open reading frames SPV006 (Hasegawa-Kishino-Yano model [45]) and SPV136 (Tamura-Nei model) were generated in an analogous manner.
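A comparable analysis can be sketched with Biopython; note that this stand-in uses neighbor-joining on a simple identity distance rather than the maximum likelihood/Tamura-Nei approach run in MEGA X, and the alignment file name is hypothetical.

```python
# Sketch: a distance-based stand-in for the whole-genome phylogeny.
# Neighbor-joining on an identity distance replaces the ML/Tamura-Nei
# analysis; "swpv_genomes.fasta" is a hypothetical aligned input file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

alignment = AlignIO.read("swpv_genomes.fasta", "fasta")
constructor = DistanceTreeConstructor(DistanceCalculator("identity"), "nj")

# 1000 bootstrap pseudo-replicates, as in the analysis described above.
tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)
Phylo.draw_ascii(tree)
```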
Clinical and Pathological Findings
Three piglets with skin lesions pathognomonic for poxvirus infection, originating from two farms in North Rhine-Westphalia, Germany, in July 2019 and January 2020, were submitted to the local veterinary state laboratory (CVUA WFL, Arnsberg) for further investigation into the etiology of the disease. Two piglets from farm 1 died within 24 h of birth, while one piglet from farm 2 (201-200045) was euthanized on day 1 post-birth (Table 1). Approximately ten litters of domestic pigs had been affected with this disease syndrome on farm 1 in the previous 18 months, involving one or two symptomatic piglets per litter, which died within 24 h of birth. On farm 2, only two litters with a total of three symptomatic piglets had been noted in the previous 18 months. On both farms, the sows had intermittent contact with one another, and animals were kept in year-round stable housing and routinely treated against ecto- and endoparasites using defined regimes. Post-mortem examination of all domestic piglets concordantly showed multifocal, severe, erythematous maculae covering the whole body (Figure 1a). These lesions were characterized by round, occasionally coalescing papules with raised, wall-like borders encircling a depressed center, partially covered by encrusted exudate. Circumscribed epithelial defects were additionally observed on the tongue (Figure 1b). Additional tissue samples were obtained from two juvenile wild boar with suspected poxvirus lesions submitted to the local veterinary state laboratory (CVUA Karlsruhe) in Baden-Württemberg in October 2019. Post-mortem examination revealed extensive skin lesions consisting of papules around the eyes and lips of the wild boar (Figure 1c,d). Microscopic examination of tissue sections from the domestic piglets showed a multifocal, moderate to severe, proliferative and necrotizing dermatitis and folliculitis with ballooning degeneration, and eosinophilic cytoplasmic viral inclusions were found (Figure 1e). A moderate perivascular and periadnexal lymphohistiocytic and plasmacytic infiltration was present in the dermis. The tongue showed a multifocal ulcerative glossitis with adjacent epithelial hyperplasia, ballooning degeneration and cytoplasmic viral inclusions (Figure 1f). Esophageal lesions consisted of focal epithelial proliferations with ballooning degeneration and cytoplasmic viral inclusions (Figure 1g). Other organs and tissues were without morphological changes. Electron microscopic investigation showed evidence of 220 × 450 nm, brick-shaped poxvirus particles with an electron-dense, biconcave DNA-containing core in tissue from skin lesions of all necropsied piglets (Figure 1h). The tissue tropism was further investigated by performing qPCR using SWPV-specific primers on DNA extracted from different organs of a congenitally infected piglet (Host-ID 201-20047). The lowest Ct values were found in skin, umbilical cord tissue and tongue, with higher Ct values noted in the lung and intestine (Table 1). SWPV was not detected by qPCR in tissue samples from liver or kidney. SWPV was also detected in frozen skin samples from two additional recent cases.
Assessment of The Susceptibility of Porcine Cell Lines to SWPV Infection
Virus isolation from confirmed swinepox cases was performed by infecting PK-15 cells with clarified supernatant from pox skin lesions, homogenized in PBS, from a SWPV-infected domestic pig (Host-ID 201-20046) and wild boar (Host-ID 201-20070). Initially, no cytopathic effects were observed. Therefore, infected cells were blind-passaged until the characteristic CPE of SWPV, including cell rounding, ballooning and vacuolization, developed at passage four and three for the domestic pig (SWPV/domestic/GER/2019) and wild boar (SWPV/wildboar/GER/2019) strains, respectively (Figure 2a-c). The susceptibility of additional porcine kidney cell lines (SPEV, SK6 and two PK-15 subclones), a porcine testis epithelial cell line (Riebe-255) and a porcine lymphoma cell line (38A1D) to infection with the German domestic pig and wild boar SWPV strains was investigated together with virus growth kinetics over time. In the absence of a SWPV-specific antibody, and with cytopathic effects often absent or subtle in infected cell cultures, a SWPV-specific qPCR was used to assess virus replication. All tested cell lines showed evidence of SWPV replication, as evidenced by decreasing Ct values (Figure 2d).
Phylogenetic and Whole Genome Analysis of SWPV Sequences
SWPV is the sole member of the genus Suipoxvirus within the subfamily Chordopoxvirinae, and its clade falls between those of members of the genus Capripoxvirus, such as lumpy skin disease virus, and the novel Brazilian porcupinepox virus. However, knowledge about the phylogenetic relationships and sequence diversity within the SWPV species is limited. We therefore compared the two available whole-genome SWPV sequences from domestic pigs in North America (SWPV/USA/2002, GenBank accession no. NC_003389.1) and India (SWPV/India-Assam/16, GenBank accession no. MW036632.1) to the new German SWPV genome sequences obtained in this study. Phylogenetic analysis showed that the German domestic pig and wild boar SWPV strains from 2019 form a clade with SWPV/USA/2002 and branch separately from SWPV/India-Assam/16 (Figure 3a). The SWPV strains derived from the German domestic pig and wild boar were found to have 99.924% nucleotide sequence identity to each other (Table S1). These strains also showed 99.92% (wild boar)/99.94% (domestic pig) nucleotide sequence identity to SWPV/USA/2002 and 98.14% (wild boar)/98.15% (domestic pig) to SWPV/India-Assam/16 (Figure S1). Further phylogenetic analysis based on the nucleotide sequences of the SPV006 and SPV136 ORFs confirmed the existence of two separate clades of SWPV (Figure 3b,c).
Comparison of the whole-genome sequences of the two new German SWPV isolates showed 75 synonymous and 37 nonsynonymous changes, which were spread throughout the genome (Table S1). All previously annotated ORFs were present in these strains, apart from some alterations in the SWPV-specific genome region. The wild boar SWPV strain, but not the domestic pig SWPV strain, was observed to contain a single base pair deletion at position 13,061 bp. This resulted in a frameshift in the SPVwb020 ORF, which led to the fusion of the SPVwb020 and SPVwb019 ORFs and the corresponding amino acid sequences, resulting in a hypothetical 113 aa SPVwb020-019 protein (Figure 4a). This comprises the first 28 aa of SPVwb020, 15 unique aa, and then 70 aa encoded by the SPVwb019 ORF. We confirmed that the single nucleotide deletion in the German wild boar SWPV strain did not arise through in vitro passage by performing Sanger sequencing on the original tissue samples. A 238 bp region of SPVwb020-019 was amplified from DNA extracted from frozen tissue originating from the two SWPV cases in wild boar from 2019, the domestic pig cases from 2019 and a formalin-fixed tissue block from a SWPV case in a domestic pig from 2008. Subsequent analysis of the Sanger sequencing data showed that the single base pair deletion in SPV020 was only present in tissue samples from the SWPV-infected wild boar piglets (Figure 4c). The single base pair deletion in the German wild boar SWPV strain also generates a truncated form of SPV020 (Figure 4b). This ORF encodes 51 aa, comprising 10 unique aa at the N-terminus with the remaining 41 aa identical to the corresponding region of SPV020 in other SWPV strains. The genome region encoding the three SWPV-specific genes (SPV018, SPV019 and SPV020) is followed by a 366 nt noncoding sequence. Within this sequence, a new ORF (SPV020a) encoding a hypothetical 71 aa protein was detected in the domestic pig SWPV sequence (Figure 4a). Further analysis showed that this ORF has homologs in other poxviruses, including a 58.9% similarity with the hypothetical protein LSDVgp023 from lumpy skin disease virus (GenBank accession no. NP_150457.1) and with the hypothetical protein SPPV_20 from sheeppox virus (GenBank accession no. NP_659596.1) (Figure 5).
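The mechanics of such a fusion can be illustrated on a toy sequence: deleting one nucleotide upstream of a stop codon shifts the reading frame, the stop is no longer read in frame, and translation continues into the downstream ORF, analogous to SPVwb020-019. The sequences and minimal codon table below are synthetic, not the SWPV genome.

```python
# Sketch: a single-nucleotide deletion causes a frameshift that reads through
# the original stop codon and fuses two ORFs. Toy data only, not SWPV.
CODON_TABLE = {
    "ATG": "M", "GCA": "A", "AAA": "K", "TGA": "*", "GAA": "E",
    "GGG": "G", "CCC": "P", "TTT": "F", "TAA": "*", "AAT": "N",
    "TGG": "W", "AAG": "K", "GGC": "G", "CCT": "P", "TTA": "L",
}

def translate(seq):
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3], "X")
        if aa == "*":          # stop codon terminates translation
            break
        protein.append(aa)
    return "".join(protein)

upstream   = "ATGGCAAAA" + "TGA"           # short ORF ending in a stop codon
downstream = "ATGGAAGGGCCCTTT" + "AAATAA"  # separate downstream ORF
wild_type  = upstream + downstream

print(translate(wild_type))                # 'MAK' -- stops at the TGA
mutant = wild_type[:8] + wild_type[9:]     # delete one nt (cf. 13,061 bp)
print(translate(mutant))                   # 'MANEWKGPLN' -- fused read-through
```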
Discussion
Outbreaks of swinepox are commonly observed in domestic pigs worldwide, but knowledge about the prevalence, strain diversity, wildlife reservoirs and evolutionary origins of SWPV is limited. In this study, SWPV strains isolated from skin samples containing pox lesions, obtained from a wild boar piglet and a congenitally infected domestic piglet, were sequenced and analyzed to determine the evolutionary origins of these strains. We observed a sequence divergence of only 0.076% (112 nt) between the SWPV strains present in domestic pigs and wild boar, meaning that at least two closely related strains are circulating in German wildlife and livestock pigs. In countries with a comparable, industrialized structure of pig farming, the implementation of strict sanitary measures and high hygienic standards serves to shield the livestock population against contact with pathogens present in the environment and, especially, to prevent potential direct or indirect encounters with wild boar. However, given the high nucleotide identity between the two SWPV strains, wild boar may have a role as a potential SWPV wildlife reservoir mediating the spread of SWPV to domestic pig populations. Additional sequencing of SWPV strains in domestic pig and wild boar populations is required to determine the routes of virus transmission.
The higher nucleotide sequence identity of the two German SWPV strains (2019) to the reference North American SWPV strain (2002) than to a recent Indian SWPV strain (2016) may reflect the history of domestication and distribution of the domestic pig. Zooarchaeological evidence shows that pigs were domesticated independently in at least two locations, namely Eastern Anatolia [46], from where the European domestic pig population originated [47] and China, from which it is assumed that domestic pig populations were then dispersed throughout Asia [48,49]. In North America, domestic pigs were introduced from Europe starting in the 15th century [50]. The two currently recognized SWPV lineages in Asia and Europe/North America might thus have evolved from SWPV strains present in wild pig populations following the two geographically separated domestication events. However, additional SWPV sequences from domestic pigs in more dispersed geographical regions worldwide would be required to confirm this hypothesis. It would also be of interest to obtain more full-length SWPV genome sequences from other wild pig species of the family Suidae present in Africa and Asia.
Genome analysis of the domestic pig and wild boar SWPV strains showed interesting differences with respect to potential alterations in gene expression in the region of the genome encoding SPV018, SPV019 and SPV020. These genes are unique to SWPV, with no homologs within the wider Poxviridae family, and are predicted to encode proteins that may be associated with immune evasion, restricted host range and/or virulence. The SWPV strain from wild boar carried a single nucleotide deletion at position 13,061 bp, resulting in fusion of the SPV019 and SPV020 ORFs. A similar fusion of the SPV019 and SPV020 ORFs has recently been observed in an Indian SWPV sequence from a domestic pig, in which a single base pair deletion at position 12,993 bp caused a similar frameshift in SPV020 [51]. This sequence variation is thus not correlated with infection of domestic pigs or wild boar, and additional genome sequences are required to determine the prevalence of fused SPV019 and SPV020 ORFs in SWPV strains originating from these species. A hypothetical protein (SPVdp020a) encoded by a novel ORF downstream of SPVdp020 in the German domestic pig SWPV strain has not been observed in other SWPV strains. However, further analysis showed that SPVdp020a displays approximately 58% homology to hypothetical proteins present in other related poxviruses, including lumpy skin disease virus. Additional research is required to determine the function(s) of this and other unique SWPV proteins and the significance of mutations in this region of the genome for virus host range and pathogenicity. Interestingly, the SWPV-specific terminal genome region appears to be flexible with respect to gene expression in these hitherto unknown SWPV strains. These findings are compatible with sequencing studies on other poxviruses, demonstrating that terminal genome regions show significant variability and disrupted collinearity in comparison to the conserved core region of approximately 80 genes [35,52].
The piggeries affected by this outbreak were assessed to be in good hygienic condition, with an absence of pig lice (Haematopinus suis) from the housing units, which indicates that vector-borne mechanical transmission was probably not responsible for the introduction of SWPV into these farms or subsequent virus transmission between animals. The pig louse has previously been shown to be a vector for SWPV transmission, as the main lesions observed on infected pigs are on the abdomen and inner surface of the legs, corresponding to the predilection sites of these parasites. However, piglets which have been intravenously infected with SWPV also show lesions in the same positions [5]. The absence of this insect species in the housing units of the current swinepox cases suggests that the route of transmission was via subclinically or inapparently infected sows. This theory is supported by our findings on the permissivity of porcine lymphoma and epithelial cell lines and the detection of viral genomic sequences in different peripheral organs of the congenitally infected piglets. The establishment of a persistent poxviral infection has previously been reported for different orthopoxviruses [53][54][55], especially during ectromelia virus infections in mice, where bone marrow and blood cells appear to be key sites for persistence in immunocompetent mice [56]. Poxviruses have also been shown to persist in vitro in myeloid and lymphoma cell lines [57][58][59]. However, further research is required to delineate the potential of SWPV to persist in immunocompetent animals.
The in utero infection of piglets with SWPV from clinically unremarkable sows suggests that viremia occurs in immunocompetent animals, together with virus spread to reproductive tissue and vertical virus transmission through the placenta. Although the placentas from these cases were not available for further study, we did observe that the lowest Ct values for SWPV, apart from skin tissue, were present in the umbilical cord. We were unable to consistently identify SWPV infection by qPCR in the blood of sows within the housing units in which swinepox cases occurred. Similar observations have been made previously for other poxvirus infections. Rare cases of generalized poxvirus lesions in the human fetus following in utero virus infection have been reported following both primary vaccinia virus vaccination [60][61][62] and monkeypox infection during pregnancy [63]. This is consistent with cases of intrauterine poxvirus infection in animals [64,65]. Prior to eradication, Variola virus infections resulted in the termination of 60-75% of pregnancies due to the severe consequences of systemic virus infection. Furthermore, half of the babies born alive were reported to die within two weeks, even though only 9% of newborns exhibited visible cutaneous pox lesions [66]. Further research is required to determine whether SWPV is responsible for a higher number of gestational losses and asymptomatic stillbirths in sows than previously presumed, especially given that visible poxvirus lesions are normally present on only a few of the piglets in a litter.
Estimating the real prevalence of SWPV in domestic pig populations has been challenging, as swinepox is often neglected by farmers due to the mild clinical signs in noncongenitally infected piglets. Serological testing for antibodies against SWPV is also unreliable and technically challenging, with only limited neutralization activity detected in sera from convalescent animals, which are nevertheless protected against reinfection [21,29,67]. The distribution of SWPV may be wider than assumed given the fact that SWPV infection and disruption of skin epithelium could be the underlying cause of more commonly observed secondary bacterial infections, which mask the distinctive cutaneous lesions of swinepox. Such infections can induce an exacerbation in the disease course and lead to a more severe drop in pig herd performance.
Conclusions
We have characterized two SWPV strains circulating in German wildlife and livestock animals and shown differences with respect to coding capacity within the SWPV-specific terminal region of the genome. The congenital infection of piglets and the detection of SWPV DNA in the umbilical cord suggests that the virus transmission within the domestic pig population is mediated by persistently infected asymptomatic animals. Future studies are required to determine the true prevalence and strain diversity of SWPV in domestic and wild pig species and to understand the role of comorbidities and coinfections in exacerbation of the disease course in infected animals.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/v13102038/s1, Figure S1: Nucleotide sequence identity observed between full genomes of different SWPV strains, Table S1: Nucleotide and amino acid changes between domestic pig and wild boar SWPV strains.
CREB1 activation promotes human papillomavirus oncogene expression and cervical cancer cell transformation
Abstract Human papillomaviruses (HPVs) infect the oral and anogenital mucosa and can cause cancer. The high‐risk (HR)‐HPV oncoproteins, E6 and E7, hijack cellular factors to promote cell proliferation, delay differentiation and induce genomic instability, thus predisposing infected cells to malignant transformation. cAMP response element (CRE)‐binding protein 1 (CREB1) is a master transcription factor that can function as a proto‐oncogene, the abnormal activity of which is associated with multiple cancers. However, little is known about the interplay between HPV and CREB1 activity in cervical cancer or the productive HPV lifecycle. We show that CREB is activated in productively infected primary keratinocytes and that CREB1 expression and phosphorylation is associated with the progression of HPV+ cervical disease. The depletion of CREB1 or inhibition of CREB1 activity results in decreased cell proliferation and reduced expression of markers of epithelial to mesenchymal transition, coupled with reduced migration in HPV+ cervical cancer cell lines. CREB1 expression is negatively regulated by the tumor suppressor microRNA, miR‐203a, and CREB1 phosphorylation is controlled through the MAPK/MSK pathway. Crucially, CREB1 directly binds the viral promoter to upregulate transcription of the E6/E7 oncogenes, establishing a positive feedback loop between the HPV oncoproteins and CREB1. Our findings demonstrate the oncogenic function of CREB1 in HPV+ cervical cancer and its relationship with the HPV oncogenes.
The transcription factor cAMP response element (CRE)-binding protein 1 (CREB1) belongs to a subcategory of the basic leucine zipper (bZIP) superfamily and has the potential to regulate approximately 4000 genes. 22 It can form homo- and/or hetero-dimers with other CREB family members (e.g., ATF1), or with Activator protein 1 (AP-1) components, 23 to mediate gene transcription by binding to cis-regulatory elements containing a conserved CRE. 24 Besides binding to full CRE sequences (TGACGTCA), CREB1 can also bind to half CRE sequences (TGACG or CGTCA) to mediate transcription. Its transcriptional activity is induced upon phosphorylation of CREB1 (S133) by multiple protein kinases including MAPKs, 25 PKA, 26 MSKs, 27,28 and CaMKs. 29 CREB1 has been shown to function as a proto-oncogene, and its overexpression contributes to human malignancies. [30][31][32][33] Furthermore, CREB signaling is required for transformation caused by oncogenic viruses, such as human T-cell leukaemia virus type 1 (HTLV-1). 34 However, few studies have focused on CREB1 in cervical cancer, especially in the context of HPV infection.

MicroRNAs (miRNAs) are non-coding RNAs (ncRNAs) that negatively regulate gene expression by binding to the 3′-untranslated region (UTR) of target messenger RNAs (mRNAs), leading to mRNA degradation or translational repression. 35 Studies have demonstrated that the aberrant expression of miRNAs contributes to cancer. 36 miR-203 functions as a tumor suppressor, the ectopic expression of which inhibits carcinogenesis and tumor progression, whereas downregulation of miR-203 has been observed in a variety of cancer types due to epigenetic silencing. [37][38][39] miR-203 has been shown to be downregulated in cervical cancers, correlating with HPV infection and tumor aggressiveness. [40][41][42] In this study, we showed that CREB1 was overexpressed in HPV+ cervical cancers and promoted cell proliferation and migration. We found that E6-induced CREB1 phosphorylation and transcriptional activity depended on the MAPK/MSK signaling axis. Additionally, the increased CREB1 expression observed in HPV+ cervical cells was negatively regulated by miR-203a. Finally, we identified a positive feedback loop between the HPV oncogenes and CREB1, in which CREB1 directly binds the viral promoter and upregulates the transcription of the HPV oncogenes.
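Because CREB1 recognizes the full palindromic CRE (TGACGTCA) and the half-sites TGACG/CGTCA, candidate binding sites in a promoter such as the HPV URR can be enumerated with a simple motif scan. A minimal sketch follows; the promoter string is a synthetic placeholder, not the real HPV18 URR.

```python
# Sketch: scan a promoter sequence for full and half CRE motifs.
# The promoter string is a synthetic placeholder.
import re

FULL_CRE = "TGACGTCA"            # palindromic full site
HALF_CRES = ("TGACG", "CGTCA")   # CGTCA is the reverse complement of TGACG

promoter = "AATTGACGTCAGGGCGTCATTTGACGAA"

def find_motifs(seq, motif):
    # lookahead allows overlapping matches to be reported
    return [m.start() for m in re.finditer(f"(?={motif})", seq)]

print("full CRE at:", find_motifs(promoter, FULL_CRE))
for half in HALF_CRES:
    print(f"half CRE {half} at:", find_motifs(promoter, half))
```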
| Cervical cytology samples
Cervical cytology samples were previously described. 19 RNA and protein were extracted from the samples using Trizol, following the manufacturer's instructions, and analysed as described. 12
| HPV positive biopsy samples
Archival paraffin-embedded cervical biopsy samples were obtained with informed consent. Subsequent analysis of these samples was performed in accordance with guidelines approved by Glasgow Royal Infirmary (RN04PC003). HPV presence was confirmed by PCR using GP5+/GP6+ primers.
Normal human keratinocytes (NHK) and HPV18-containing NHK were previously described. 43 Cells were routinely tested for mycoplasma.
| Organotypic raft cultures
Control and HPV18-containing foreskin keratinocytes were grown in organotypic raft cultures by seeding the keratinocytes onto collagen beds containing J2-3T3 fibroblasts. Once confluent, the collagen beds were transferred onto metal grids and fed from below with FCS-containing E media lacking EGF. The cells were allowed to stratify for 14 days before fixing with 4% formaldehyde in E media. The rafts were paraffin-embedded and 4 μm tissue sections prepared (Propath UK, Ltd.).
| High calcium differentiation assay
NHK and HPV18-containing keratinocytes were grown in complete E media until 90% confluent. The media was then changed to serum-free keratinocyte media without supplements (SFM medium, Invitrogen) containing 1.8 mM calcium chloride. Cells were maintained in this media for 72 h before lysis and analysis.
| Plasmids, small interfering RNA (siRNA), and reagents
Plasmids for the HPV oncoproteins have been previously described. 12 An HPV18 upstream regulatory region (URR)-driven luciferase reporter construct has been previously described 44 and was kindly provided by Prof. Felix Hoppe-Seyler (German Cancer Research Center, Heidelberg, Germany). The HPV16 URR-driven luciferase reporter construct has been previously described, 45 and HPV18 E6 constructs were previously described. 46 The hsa-miR-203a miRNA mimic (MIMAT0000264) was obtained from ABM. Codon-optimized HPV18 E6 and E7 sequences were cloned into pcDNA3.1 using KpnI and EcoRI. The 3′-UTR of CREB1 was cloned into psiCHECK2 using XhoI and NotI. CREB1 was cloned into CMV500 using BamHI and NotI, and into pcDNA3.1 using HindIII and EcoRI, respectively. FLAG-tagged MSK AA has been previously described. 47 Mutagenesis was performed by PCR using a site-directed mutagenesis kit (NEB).
| Cell proliferation assay
Cell growth curves were performed to evaluate cell proliferation.
Cells were counted manually using a haemocytometer every 24 h for a period of 5 days.
| Colony formation assay
Transfected and corresponding control cells were re-seeded in a six-well plate and incubated for 2-3 weeks. Colonies were then stained (1% crystal violet, 25% methanol) and counted manually.
| Wound-healing assay
Wound-healing assays were performed to evaluate cell migration.
Briefly, a scratch was created through the confluent cell monolayer using a plastic micropipette tip. The cells were cultured in low serum (1%) medium and incubated at 37°C for 24 h. Images of wounds were captured using an EVOS microscope. The closure rate was quantified using ImageJ.
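The closure-rate calculation itself reduces to a simple area comparison; a minimal sketch, assuming wound areas measured at 0 h and 24 h (e.g., ImageJ pixel areas; the numbers are illustrative placeholders):

```python
# Sketch: percent wound closure from wound areas at 0 h and 24 h.
# Area values are illustrative placeholders, not the study's measurements.
def percent_closure(area_0h, area_24h):
    return 100.0 * (area_0h - area_24h) / area_0h

print(f"control: {percent_closure(52000, 18000):.1f}% closed")
print(f"siCREB1: {percent_closure(51000, 39000):.1f}% closed")
```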
| RNA extraction and qRT-PCR
Total RNA was extracted using the E.Z.N.A. Total RNA Kit I (Omega Bio-Tek) or TRIzol reagent (Sigma) according to the manufacturers' instructions, followed by DNase I treatment (AMPD1, Sigma-Aldrich).
qRT-PCR was performed using the GoTaq 1-step qRT-PCR system (Promega). Reactions were run on a CFX96 Connect Real-Time PCR Detection System (Bio-Rad) using the default protocol with a melt curve. B2M and GAPDH served as normalizer genes. miR-203a expression was detected using the miScript PCR system (Qiagen), and Snord68 was used for normalization. The data obtained were analysed using the ΔΔCt method. 48 Specific primers were used for each gene analysed and are shown in Table S1.
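A minimal sketch of the ΔΔCt calculation referenced above, normalizing a target gene (e.g., CREB1) to a reference gene and a control sample; all Ct values are illustrative placeholders.

```python
# Sketch of relative quantification by the delta-delta-Ct method.
# Ct values are illustrative placeholders, not the study's data.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    delta_control = ct_target_control - ct_ref_control
    ddct = delta_sample - delta_control
    return 2 ** (-ddct)

# e.g., a target gene in a test sample vs. a normal control:
print(f"fold change = {fold_change(24.1, 18.0, 26.3, 18.2):.2f}")  # 4.00
```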
| Dual-luciferase reporter assays
Unless otherwise indicated, the dual-luciferase reporter assays were conducted in HEK293T cells. Luciferase activity was examined using the dual-luciferase reporter assay kit (Promega). Briefly, 48 h after transfection, the relative luciferase activity was measured and calculated according to the reporter system used, as described. 49

| Chromatin immunoprecipitation (ChIP) assays

Cells were fixed with 1% formaldehyde solution for 15 min at RT, and fixation was quenched by incubation with 125 mM glycine for 5 min. DNA fragments ranging from 200 to 300 bp were generated by sonication.
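ChIP-qPCR enrichment is commonly reported as percent of input; that normalization is assumed here rather than stated in the text. A minimal sketch, with illustrative Ct values and an assumed 1% input sample:

```python
# Sketch: ChIP enrichment expressed as percent of input, assuming a 1% input.
# Ct values are illustrative placeholders, not the study's data.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    # Adjust the input Ct for its dilution relative to the IP sample.
    adjusted_input = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (adjusted_input - ct_ip)

print(f"CREB1 ChIP at viral promoter: {percent_input(28.5, 30.0):.2f}% of input")
print(f"IgG control:                  {percent_input(33.0, 30.0):.3f}% of input")
```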
| Microarray analysis
For microarray analysis of CREB1 expression, the following datasets were used: GSE6791, GSE63514, and GSE39001. For microarray analysis of miR-203a expression, the following datasets were used: GSE30656 and GSE19611.
| Statistical analysis
Statistical analyses were performed using GraphPad Prism 7.00. The Student t-test (unpaired, two-tailed) was performed to determine significance.
| CREB1 is activated in HPV containing keratinocytes and cervical cancer progression
To understand whether CREB1 plays a role in the biology of cervical cancer, we explored the TCGA database using Gene Expression Profiling Interactive Analysis (GEPIA) and found that CREB1 expression in cervical cancers was higher than in normal tissue (Figure 1A). Consistently, according to the OncoMine database and a public dataset (GSE6791), we found significantly upregulated CREB1 expression in cervical cancers compared with normal cervical tissue (Figure 1B,C).
We also explored GSE datasets for CREB1 expression in different CIN grades, representing disease severity, and found that CREB1 expression significantly correlated with increasing CIN grade and was further increased in cervical squamous cell carcinoma (Figure 1D). In addition, we investigated CREB1 expression with regard to HPV infection status. By analysis of a public dataset (GSE39001), we found significantly higher CREB1 expression in HPV16+ cervical cancer specimens compared with healthy exocervix (Figure 1E).
To experimentally confirm these findings, we harvested cervical cytology samples collected from healthy patients as negative controls and from patients with increasing CIN grade. Our results showed that CREB1 expression was significantly upregulated with CIN progression (Figure 1F). These findings were corroborated using immunostaining for the active, phosphorylated form of CREB in sections of tissue from low-grade CIN1 and high-grade CIN3 samples, revealing a marked increase in phosphorylated CREB in CIN3 (Figure 1G). To be certain, we performed Annexin V assays, which also revealed no difference in apoptosis levels between control and CREB1-depleted cells (Figure S1E). These results suggested that inhibition of CREB1 suppressed cell proliferation but did not increase apoptosis in HPV+ cervical cancer cells.
| Inhibition of CREB1 suppresses cell migration and epithelial mesenchymal transition (EMT)
EMT plays an important role in cervical cancer progression and metastasis, 52 and CREB1 has been reported to promote migration and EMT in cancers. 53,54 To address this, we first performed wound healing assays to investigate the effect of CREB1 on the migration of cervical cancer cells. CREB1 silencing or inhibition significantly slowed the wound closure rate compared with the control (Figure 2G and Figure S1F). These data indicated that inhibition of CREB1 attenuated migration in cervical cancer cells.
To determine the role of CREB1 in regulating EMT, we first investigated the changes in expression of multiple EMT markers by RT-qPCR after CREB1 silencing. This showed that depletion of CREB1 resulted in reduced mRNA levels of mesenchymal markers including MMP2, CDH2 (N-cadherin), SNAI1 (Snail), SNAI2 (Slug), and TWIST1 in both HeLa and CaSKi cells (Figure 2H). To validate these findings, we analysed the protein expression of Slug and Snail, key transcription factors which regulate EMT. 55,56 Consistently, we showed that their protein expression was decreased upon CREB1 silencing (Figure 2I). These results indicated that CREB1 contributed to migration and EMT in cervical cancer cells.

[Figure 1 caption: CREB1 is overexpressed in HPV-containing primary keratinocytes and HPV+ cervical cancers and is associated with disease progression. (A) TCGA analysis of CREB1 expression in cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC); (B) Pyeon multi-cancer mRNA dataset, cervical cancer (CC) vs. cervical normal (CN); (C) GSE6791 dataset, CC vs. CN; (D) GSE63514 dataset, CREB1 expression across cervical intraepithelial neoplasia (CIN) grades and squamous cell carcinoma (SCC); (E) GSE39001 dataset, HPV16+ CC vs. CN; (F) qPCR analysis of CREB1 in cervical cytology samples across CIN grades; (G) representative immunostaining of phosphorylated CREB1 (green; DAPI, blue) in tissue sections from low-grade to high-grade cervical lesions, acquired with identical exposure times; (H) western blot analysis of CREB1 in HPV+ cervical cancer cell lines compared to NHK; (I) representative western blot of CREB phosphorylation in normal human keratinocytes (NHK) and HPV18-containing keratinocytes subjected to high calcium differentiation (GAPDH as loading control), and representative organotypic raft sections stained for phosphorylated CREB (green; DAPI, blue; white dotted lines indicate the basal cell layer). Data shown are mean ± SD, n > 3. *p < 0.05; **p < 0.01; ***p < 0.001.]
CREB1 contributes to HPV18 E6-driven proliferation
Expression of the HPV oncoproteins is necessary for cervical cancer cell proliferation in cell culture. To determine whether CREB1 was required for proliferation driven by the HPV18 oncoproteins, C33A cells were transfected with GFP-tagged HPV18 E6 or E7. Of note, although C33A cells showed increased CREB1 expression and phosphorylation compared with NHK controls (Figure 1H), silencing CREB1 did not inhibit their proliferation (Figure 3A,B). Expression of 18E6 led to an increase in both CREB1 protein levels and phosphorylation over the baseline levels seen in C33A cells, whereas 18E7 expression only resulted in an increase in CREB1 protein levels (Figure 3C). Western blotting demonstrated that CREB1 siRNA-mediated knockdown was successful in both of these cell lines (Figure 3C). C33A cells expressing 18E6 and 18E7 showed increased proliferative capacity over empty vector controls, and their growth was reproducibly impaired when CREB1 was silenced, with the loss of CREB1 having a more pronounced impact on 18E6-driven proliferation compared to 18E7-mediated growth (Figure 3D,E). The results suggested that, whilst CREB1 expression was dispensable for proliferation in HPV-negative C33A cells, it was required for growth driven by the HPV18 oncoproteins, particularly E6.
HPV E6 induces CREB1 phosphorylation and activity via a MAPK/MSK signaling pathway
To investigate which HPV oncoprotein was responsible for increasing CREB1 transactivation function, we again utilized a CRE-driven luciferase reporter. HPV E6 has been demonstrated to deregulate MAPK signaling.12-14 To investigate whether E6-induced CREB1 activity was ERK/p38 kinase dependent, we performed the CRE-driven luciferase assay in 18E6-expressing cells treated with small-molecule inhibitors targeting the MAPK signaling components (Figure 4C). The results showed that inhibition of either ERK, p38, or the downstream effector MSK57 reduced E6-induced CREB phosphorylation and CRE-driven luciferase activity (Figure 4D,E). To orthogonally confirm the importance of MSK in E6-driven CREB1 phosphorylation and activation, we used a characterized MSK mutant (MSK T581A/T700A (AA)), which abolishes MSK activity,47 and found that, in agreement with our pharmacological data, the increase in CREB1 phosphorylation and CRE-driven luciferase levels mediated by 18E6 was impaired by the MSK mutant (Figure 4F,G). Taken together, these results suggested that HPV E6 induced CREB1 activity via MAPK/MSK signaling.
miR-203a directly targets and inhibits CREB1 expression
During our studies, we noted that CREB1 protein levels were also increased in HPV+ cells (Figures 1F and 1H), prompting us to ask whether post-transcriptional mechanisms such as miRNAs contribute to CREB1 regulation. In agreement with previous work showing that miR-203a expression is modulated by HPV,42 our analysis showed that the expression of miR-203a in HPV+ cervical cancer cell lines was significantly lower compared with NHK (Figure 5B), and the presence of the whole HPV18 genome in NHK was sufficient to reduce miR-203a expression (Figure 5C).
To experimentally validate whether miR-203a inhibits CREB1 expression, a miR-203a mimic was employed (Figure S3C). Endogenous CREB1 protein and mRNA expression were both significantly reduced by miR-203a overexpression (Figure 5D,E). We then used a luciferase reporter controlled by a partial CREB1 3′-UTR containing the putative miR-203a binding site (Figure 5F). Overexpression of the miR-203a mimic significantly decreased the luciferase activity controlled by the wild-type (WT) CREB1 3′-UTR, whereas it failed to repress the activity of a luciferase reporter containing a mutated miR-203a binding sequence in the 3′-UTR (Mut) (Figure 5G). Taken together, the above results confirmed that miR-203a directly targets the CREB1 3′-UTR and represses CREB1 expression.
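As a side note on the reporter analysis: 3′-UTR luciferase data of this kind are commonly normalised to a co-transfected control luciferase and expressed relative to the control-mimic condition. The sketch below assumes a dual-luciferase design with a Renilla normaliser and uses invented numbers; neither the normaliser nor the values are specified in the text.

```python
import numpy as np

def normalised_activity(firefly, renilla):
    """Firefly signal normalised to the co-transfected Renilla control."""
    return np.asarray(firefly, dtype=float) / np.asarray(renilla, dtype=float)

# Hypothetical triplicate readings for the WT CREB1 3'-UTR reporter:
wt_control = normalised_activity([9800, 10100, 9900], [500, 520, 495])
wt_mimic = normalised_activity([4100, 3900, 4300], [510, 505, 515])

fold = wt_mimic.mean() / wt_control.mean()
print(f"WT 3'-UTR activity with miR-203a mimic: {fold:.2f} of control")
# Repression of the WT reporter, but not of the Mut reporter, indicates
# direct targeting of the CREB1 3'-UTR.
```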
miR-203a overexpression attenuates cell proliferation by targeting CREB1
To determine whether the tumor suppressive effects of miR-203a are due to its impact on CREB1 expression, we co-expressed a miR-203a mimic and CREB1 (Figure 5H,I). Our results showed that cervical cancer cells overexpressing the miR-203a mimic alone showed significantly suppressed cell growth and clonogenicity, whilst overexpression of CREB1 partially rescued this suppression in HeLa and CaSKi cells (Figure 5J-L). These results revealed that miR-203a-attenuated cell proliferation was at least partially due to targeting of CREB1.
CREB1 transcriptionally upregulates HPV oncoprotein expression by binding to the HPV URR
During our studies we noticed that the depletion of CREB1 in the HPV+ cancer lines decreased E6 and E7 expression ( Figure 6A,B). This effect was also seen in the context of a productive infection.
When keratinocytes containing HPV18 were transfected with A-CREB and induced to differentiate with high calcium, E6 and E7 levels were both reduced compared to control. We also observed a concomitant reduction in proliferation markers such as ΔNp63 and a corresponding increase in expression of terminal differentiation markers like involucrin (Figure 6C). Given this broad effect on oncoprotein expression, we wondered whether CREB1 could regulate the transcription of E6 and E7 via the URR. To investigate this, cells were co-transfected with luciferase reporters driven by the URR of either HPV16 or HPV18, and with A-CREB or CREB1 (Figure S3D), and stimulated with Forskolin (FSK), which promotes CREB1 activity. The results showed that both HPV16 and HPV18 URR activity was significantly enhanced by CREB1 overexpression and FSK, and diminished by A-CREB even in cells stimulated by FSK (Figure 6D). These results implied that CREB1 contributed to HPV URR activity. Although there is no established consensus CRE sequence reported within the HPV16/18 URRs, we and others have previously shown that AP-1 regulates HPV URR activity.12,44 Therefore, we wondered whether the AP-1 sites contribute to CREB1-mediated URR activity, as CREB1 can regulate gene transcription via both CRE and AP-1 binding sites.60,61 To investigate this, we co-transfected cells with CREB1 and a luciferase reporter containing the WT or mutants of the reported AP-1 sites within the HPV18 URR. Our results showed that mutation of either or both AP-1 sites within the enhancer region (AP-1E) or promoter region (AP-1P) suppressed basal URR activity, while CREB1 overexpression was still able to enhance URR activity, exceeding the basal level detected with the WT reporter but not reaching the level of the WT reporter in cells overexpressing CREB1 (Figure 6E). This implied that the upregulation of HPV18 URR activity by CREB1 might be partially dependent on the AP-1 sites, but that there must be additional regions within the URR that are regulated by CREB1.

FIGURE 6 (H) Schematic of proposed model. Data shown are mean ± SD, n ≥ 3. ns, non-significant. *p < 0.05; **p < 0.01; ***p < 0.001. HPV, human papillomaviruses; URR, upstream regulatory region.

Taken together, our results indicated that CREB1 upregulated transcription of the HPV oncogenes by direct binding to the HPV18 URR.
DISCUSSION
CREB1 is a multifunctional transcription factor with the potential to regulate approximately 4000 target genes.22 Overexpression of CREB1 is often found in multiple human cancers and linked to several hallmarks of cancer.63 Although CREB1 was previously reported to regulate mitophagy in cervical cancer,64 no investigation had been undertaken of CREB1 function in the context of HPV infection, and so its functions remained to be fully elucidated in these cells. Here, we showed that CREB1 expression positively correlated with cervical disease progression. Using RNAi knockdown and the dominant negative CREB1 inhibitor A-CREB, we demonstrated that CREB1 functions as a proto-oncogene by promoting proliferation, migration and EMT in HPV+ cervical cancer cells. We did not observe a role for CREB1 in regulating apoptosis in the cervical cancer cells, which would have provided an alternative explanation for the growth curve and clonogenicity assays undertaken. We did not test for senescence and so cannot rule out that CREB signaling might also feed into this biological process. HPV E6 and E7 have been shown to regulate MAPK signaling pathways,12-14 which are direct drivers of CREB1 activation through their ability to phosphorylate and activate the MAPK-activated kinase MSK,27,28 the kinase that phosphorylates CREB1 on S133.25 We found that CREB1 phosphorylation and activity were upregulated by HPV E6, and that this required the MAPK/MSK signaling axis. Furthermore, E6-enhanced cell proliferation in the HPV-negative cervical cancer C33A cell line was at least partially CREB1-dependent. Taken together, our results suggest that E6 utilizes the MAPK/MSK pathway to activate CREB1, thereby driving cervical cancer. We also demonstrated that active CREB1 was necessary to maintain proliferative signaling within the differentiating environment of the epithelium, as loss of CREB activity correlated with a reduction in proliferation markers such as ΔNp63 and increased expression of terminal differentiation markers. This is likely mediated through several of the myriad gene targets of CREB1, many of which, such as cFos, are known to induce proliferation and have been associated with HPV previously. Going forward, it would be informative to determine the impact of CREB inactivation, either through knockdown or A-CREB expression, more comprehensively on productive infection.
Non-coding RNAs (ncRNAs), including miRNAs, regulate chromatin remodeling, transcription, post-transcriptional modifications, and signal transduction, and thereby control many fundamental pathological processes.65 Studies have indicated that deregulated expression of ncRNAs is pivotal to HPV+ cervical cancer.16,46,66,67 For example, the oncogenic miR-18a targets the STK4 tumor suppressor to inhibit the Hippo pathway and activate the protumorigenic transcription factor YAP1.46 miR-203a, a well-studied tumor suppressor, is downregulated by the HPV oncoproteins41,42,68 and controls the pathogenesis of cervical cancer by regulating multiple target genes including VEGF,69 BANF1,70 and ZEB1.71 An inverse correlation between miR-203a and CREB1 expression has been observed in melanoma.72,73 However, to our knowledge, no evidence of their relationship in cervical cancer has been shown. In the present study, we confirmed that CREB1 is a direct miR-203a target in cervical cancer cells. Our results demonstrated that CREB1 overexpression could partially rescue miR-203a-suppressed proliferation, suggesting the importance of the miR-203a/CREB1 axis in regulating cervical cancer.
HPV early gene transcription is initiated from the early promoter located upstream of the E6 open reading frame (P97 for HPV16 and P105 for HPV18) within the viral URR. Multiple host cell transcription factors have been shown to bind the URR to control early gene transcription, such as AP-1, SP1, TBP, Oct-1, and YY1.74 CREB1 has been reported to regulate transcription of viral genes, including those of HTLV-1,75 Kaposi's sarcoma-associated herpesvirus,76 HBV,77 human immunodeficiency virus,78 and EBV.79 However, to our knowledge, there is no report demonstrating that CREB1 can directly regulate HPV gene transcription. Here, we showed that the transcription of HPV early genes was upregulated by CREB1 in cancer cell lines and in primary keratinocytes harboring the entire HPV18 genome. Mechanistically, we identified two putative CREB-binding sites (CBSs) responsible for CREB1 binding and CREB1-induced URR transcriptional activity. Recently, PKA, a direct upstream activator of CREB1, and Forskolin, a stimulator of PKA/CREB1 signaling, were shown to regulate the replication of HPV18.80 We discovered that overexpression of A-CREB could reduce the Forskolin-mediated enhancement of URR-driven luciferase activity (HPV16 and 18), indicating the importance of CREB1 in Forskolin/PKA-stimulated URR activity. Our results also demonstrated that CREB1 can bind AP-1 sites within the URR to upregulate the transcription of HPV early genes. Taken together, CREB1 appears to be a pivotal driver of HPV early gene transcription.
In conclusion, we propose an HPV/CREB1 positive feedback loop whereby HPV drives CREB1 expression by downregulating miR-203a and enhances CREB1 activity via induction of MAPK/MSK signaling, and CREB1, in turn, induces HPV early gene transcription. We therefore demonstrate a novel regulatory network controlled by HPV to regulate proliferation, migration and EMT in cervical cancer (Figure 6H). Going forward, as CREB1 has the potential to regulate the transcription of thousands of genes, the identification of which CREB1-dependent genes contribute to productive infection, and ultimately cervical cancer, will need to be further investigated.
Impact of compaction test on mineral texture breakage in Tanjung Bunga Beach, Makassar
The rate of mineral deterioration in sediments of Tanjung Bunga Beach, Makassar city, a coastal urban area undergoing rapid infrastructure development, was assessed through compaction testing, which revealed mineral damage in the form of cracking, splitting and shattering. The aim of this research is to relate laboratory compaction test values to the resulting mineral texture breakage. Based on grain size, Tanjung Bunga beach sediments were classified as fine sand, medium sand and coarse sand. Petrographic analysis of 15 samples from three drilling sites to 5 m depth found a mineral composition of quartz 20-25%, hornblende 5-20%, pyroxene 5-15%, plagioclase 5-15%, orthoclase 5-15%, biotite 10-20%, and opaque minerals 10-25%. Laboratory compaction test values were 4-14.4 Div for fine sand, 6.7-20 Div for medium sand, and 3.2-24 Div for coarse sand. Petrographic analysis after the compaction test showed that fine sand minerals were cracked at 2-20%, split at 2-12%, and shattered at 2-10%; medium sand minerals were cracked at 2-15%, split at 1-13%, and shattered at 1-5%; and coarse sand minerals were cracked at 1-10%, split at 2-15%, and shattered at 2-7%. High percentages of cracked minerals were found in fine sand, which had low compaction test values, while coarse sand had few cracked minerals and high compression strength values. Cracks were most common in quartz, and no opaque minerals shattered; pyroxene showed the most shattering and orthoclase the most splitting.
Introduction
The Tanjung Bunga beach area of Makassar, under the "Water Front City" development concept, is part of a "Waterfront Development" approach to the coastal area, with the construction of settlement infrastructure and coastal tourism facilities. Loading in the coastal sedimentary area may damage minerals, changing their size and shape, which will reduce the bearing capacity and formation strength of the land and impair coastal area functions.
Tanjung Bunga beach is characterized as a spit beach that changes gradually; its vertical sediment column is composed of coastal sediment and a thick fluvial-deltaic layer, with resistivities of 0.1-402 Ohm m and grain sizes of 1.3 Φ-2.4 Φ (fine sand to medium sand). The coastal topography is relatively gentle and affected by the tides. Coastal-area lands generally contain abundant pyrite (FeS2) [1]. Coastal land is also known to have relatively high levels of decayed minerals compared with weathered minerals, and is generally rich in kaolinite-type minerals and quartz [2]. The coastal plain of Tanjung Bunga is part of the Jeneberang River system and contains quartz, hornblende, pyroxene, orthoclase, plagioclase, biotite, and opaque minerals in larger quantities than other minerals, with sizes from fine to coarse sand and rounded to sub-rounded shapes [3].
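For reference, grain diameters quoted in phi units follow the Krumbein relation d = 2^(-φ) mm; the short sketch below (added here for illustration) checks that the quoted 1.3 Φ-2.4 Φ range indeed corresponds to fine-to-medium sand on the Wentworth scale.

```python
def phi_to_mm(phi):
    """Krumbein phi scale: grain diameter d = 2**(-phi) millimetres."""
    return 2.0 ** (-phi)

for phi in (1.3, 2.4):
    print(f"{phi} phi = {phi_to_mm(phi):.3f} mm")
# 1.3 phi ~ 0.406 mm (medium sand); 2.4 phi ~ 0.189 mm (fine sand),
# consistent with the fine-to-medium sand range stated above.
```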
Quartz (SiO2) is one of the main mineral constituents of sedimentary rocks and the most stable mineral found in clastic sedimentary rocks. Quartz is also found in coastal sediments, constituting up to 65% of sandstones and up to 30% of claystones [4]. Most of the composition of sandstones is stable minerals such as quartz [5], which is part of the stable mineral group.
The aim of this research is to relate laboratory compaction test values to the mineral texture breakage resulting from compression. The laboratory compaction test is a method of testing the bearing capacity and strength of soil under loading; it was followed by analysis of the degree of mineral damage before and after loading, using a polarizing microscope at up to 100x magnification on thin-section samples. This study used 15 sediment samples from 3 points, which generally show damage as well as changes in the size and surface shape of minerals, from cracking and splitting to shattering, in sediment samples of fine sand, medium sand and coarse sand.
Field research
Three drilling points were located based on a survey and analysis of coastal sedimentation profiles, and drilled to a depth of 5 meters using a hand auger. Sampling, observation and measurement of grain size were carried out at 50 cm depth intervals in each hole.
Analysis of field data
Samples taken from the 3 drilling points, representing the top to the bottom of each hole, comprised 6 samples of fine sand, 4 samples of medium sand and 5 samples of coarse sand. Samples were passed through sieve analysis using mesh size 60, then dried and grouped according to mineral grain size. They were then cleaned of dirt and clay minerals and prepared as smear slides and thin sections 0.003 mm thick for petrographic analysis before loading, followed by loading with the laboratory compaction test. Sediment samples from the compaction loading test were then prepared as thin sections again.
Analysis of laboratory data
Petrographic analysis of the samples before and after loading with the laboratory compaction test was carried out using a polarizing microscope at 50x-100x magnification to determine the composition and amounts of minerals, and the proportion damaged, from cracked through split to shattered. Smear slides were also observed using a polarizing microscope at 50x magnification to determine the grain size and grain shape of the minerals. Loading with the compaction test to the SNI laboratory standard was conducted to determine the value at which cracking of the sample occurred per unit time and loading level.
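The damage percentages reported in the results can be derived from petrographic point counts of this kind. The sketch below uses hypothetical counts to show the reduction from grain counts per mineral to percentages of cracked, split and shattered grains; the numbers are illustrative only.

```python
# Hypothetical point counts per mineral: (intact, cracked, split, shattered)
counts = {
    "quartz": (60, 30, 8, 2),
    "orthoclase": (40, 10, 35, 15),
    "pyroxene": (35, 10, 15, 40),
}

for mineral, (intact, cracked, split, shattered) in counts.items():
    total = intact + cracked + split + shattered
    print(f"{mineral:>10}: cracked {100 * cracked / total:.0f}%, "
          f"split {100 * split / total:.0f}%, "
          f"shattered {100 * shattered / total:.0f}%")
```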
Results and analysis
Sand is a granular material derived from a rock source; it is a naturally formed grain material composed of small particles of minerals and rock [6]. The characteristics of a soil, whether residual soil or deposited sediment, depend on two factors: the natural characteristics of the soil particles themselves (such as their size, shape and mineral composition) and the arrangement of the particles within the soil; in other words, these factors refer to composition and structure [7].
The results of the loading tests (laboratory CBR tests) showed that the percentages of mineral damage from cracking, splitting, and shattering are not the same in the fine, medium, and coarse sand sediments. The rate of cracked minerals is almost the same in fine, medium, and coarse sand. Split minerals are more common in medium sand, while shattered minerals are more common in fine sand than in coarse sand, where the test value for fine sand is lower than for coarse sand. The change in mineral area before and after loading is greater in coarse sand than in fine sand, and there was a significant relationship between the increase in test value and the degree of change in mineral area.
The mineral composition comprises quartz, pyroxene, hornblende, plagioclase, orthoclase, biotite and opaque minerals, in different percentages in the fine, medium and coarse sand sediments. Quartz has the largest percentage of cracks and opaque minerals the smallest. Orthoclase, biotite and plagioclase show the highest proportions of shattered grains, while the other minerals are generally split [3]. The graphs show that the laboratory compaction test value increases from fine sand through medium sand to coarse sand; thus the compaction value of coarse sand is higher than that of fine and medium sand, and the relationship between compaction test load value and grain size, from fine through medium to coarse sand, is significant.
Analysis of the stages of mineral breakage, from cracked through split to shattered, after loading with the laboratory compaction test for the fine, medium and coarse sand sample groups shows that the percentages of cracked, split and shattered minerals differ between fine, medium and coarse sand, and that these percentages are related to the laboratory compaction test value (Figures 3, 4 and 5).
Figure 3. Damage percentage of fine sand
Minerals with cracking reaching 75-100% are quartz and biotite; some biotite, hornblende and pyroxene grains are split at 60-80%, while in opaque minerals cracking is less than 45%. Orthoclase is shattered at 45-85%, with splitting and cracking reaching 20-50%. Shattering of 60-80% occurs partly in biotite and pyroxene, while in quartz and orthoclase it reaches less than 60%. Total damage to quartz reaches 32-52%, while total damage to plagioclase and hornblende reaches 50-100%. Cracked minerals reach 65-100%, consisting of quartz, plagioclase and hornblende, while opaque minerals crack up to 35% and orthoclase splits up to 100%. Biotite and orthoclase are shattered up to 45%. Biotite, pyroxene and plagioclase are split at 30-70%; total damage to biotite reaches 67-100%, whereas in pyroxene it reaches 53-100%.
Cracking reaches 65-100% in quartz, plagioclase, hornblende and biotite. Total damage to quartz is 40-80%, whereas total damage to plagioclase and hornblende is between 50 and 100%.
Split minerals reach 50-100% in quartz, plagioclase, pyroxene and biotite; biotite and pyroxene are split at 20-50%. Plagioclase is split at 50-80%, with cracking at 50-75% and shattering up to 50%. Total damage to quartz reaches 40-80%; total damage to plagioclase and hornblende reaches 50-100%. Total damage to biotite is 67-100%, and total damage to pyroxene and plagioclase reaches 100%. Opaque minerals crack in up to 100% of their total damage of 45%. These data show that shattered minerals are more frequent in fine sand and split minerals in coarse sand. A high proportion of cracked minerals occurs in fine sand, which has lower compaction test values, while coarse sand, with high compaction test values, has a low proportion of cracked minerals. Quartz cracks the most and no opaque minerals shatter; pyroxene shatters the most, while orthoclase splits the most.
The ternary analysis above shows that the laboratory compaction test value is high in coarse sand and inversely proportional to the amounts of cracked and shattered minerals, which are much more frequent in fine sand with low compression strength (CS) test values. This indicates that the compaction test value of coarse sand is higher than that of fine and medium sand, where the numbers of shattered and split minerals are lower, so that the relationship between the amount of mineral damage and the increasing compaction test value is negative and not significant.
This is supported by the fact that soils with different geotechnical index characteristics and different mineral proportions [2] give different results. The degree of change under loading depends on the gradation, soil components, mineral degradation characteristics, the magnitude and average of the applied load, and the toughness of individual particles under constant loading, while the average change in porosity, pore number and unit weight depend on soil permeability. The individual particles here are the minerals, which exhibit microstructural characteristics such as mineral composition, grain size, and mineral damage, considered by several authors to be microstructural parameters for determining the mechanical behaviour of sediments and rocks. Microstructure refers to characteristics observed on a surface in thin section using a microscope at the millimetre scale, and some authors use texture and microstructure as synonyms [8].
According to the petrographic analysis, the levels of mineral damage found in this research, from cracked through split to shattered, were observed at the micrometre (µm) scale at 50x to 100x magnification (Figure 6), together with physical changes in mineral area (Figure 7). The changes in mineral size in thin sections of fine, medium and coarse sand samples, before and after compaction test loading, show that the change in mineral area in fine sand is smaller than in medium and coarse sand (Figures 6 and 7).
The graph of the difference in mineral area before and after compaction test loading shows that the change in area of quartz and opaque minerals is smaller than that of the other minerals, indicating that quartz and opaque minerals have higher hardness, so the compaction test load value is positively related to these minerals. The results also show that compaction test loading produces a lower percentage of mineral damage in coarse sand, which has high compaction test values, and a higher percentage of mineral damage in fine sand, which has low compaction test values. The microstructural characteristics encountered are the result of increased loading. This is especially important given that stress history is a very important factor in determining soil or sediment characteristics, with reference to normally and over-consolidated soil material [9].
Conclusions
The results of the laboratory compaction loading tests showed that the percentages of mineral damage from cracking, splitting, and shattering are not the same in fine sand, medium sand, and coarse sand sediments. The levels of cracked and shattered minerals in fine sand are much higher than in coarse sand, where the compaction test value for fine sand is lower than for coarse sand. Changes in mineral area before and after loading occurred more in coarse sand than in fine sand, with the increase in compaction test value corresponding to the degree of change in mineral area.
The mineral composition comprises quartz, pyroxene, hornblende, plagioclase, orthoclase, biotite and opaque minerals, with different percentages in the fine, medium, and coarse sand sediments. Quartz has a high cracking percentage and opaque minerals a low one. Biotite, plagioclase, and orthoclase have the highest shattering percentages, while the other minerals are mostly split.
Figure 1. Map of the research area and drilling points.

Figure 2. Loading values of fine sand, medium sand, and coarse sand.

Figure 4. Damage percentage of medium sand.

Figure 5. Damage percentage of coarse sand.

Figure 6. Condition of orthoclase and hornblende minerals before the compaction test (A), at the minimum test value (B), and at the maximum test value (C).

Figure 7. Photomicrograph of quartz and opaque minerals before loading (A) and after loading (B) at 100x magnification.

Table 1. Difference in mineral area before and after loading in the laboratory CS test.

Figure 8. Difference graph of mineral size.
Evaluation of complex integrated care programmes: the approach in North West London
Background: Several local attempts to introduce integrated care in the English National Health Service have been tried, with limited success. The Northwest London Integrated Care Pilot attempts to improve the quality of care of the elderly and people with diabetes by providing a novel integration process across primary, secondary and social care organisations. It involves predictive risk modelling, care planning, multidisciplinary management of complex cases and an information technology tool to support information sharing. This paper sets out the evaluation approach adopted to measure its effect.

Study design: We present a mixed methods evaluation methodology. It includes a quantitative approach measuring changes in service utilization, costs, clinical outcomes and quality of care using routine primary and secondary data sources. It also contains a qualitative component, involving observations, interviews and focus groups with patients and professionals, to understand participant experiences and to understand the pilot within the national policy context.

Theory and discussion: This study considers the complexity of evaluating a large, multi-organisational intervention in a changing healthcare economy. We locate the evaluation within the theory of evaluation of complex interventions. We present the specific challenges faced by evaluating an intervention of this sort, and the responses made to mitigate against them.

Conclusions: We hope this broad, dynamic and responsive evaluation will allow us to clarify the contribution of the pilot, and provide a potential model for evaluation of other similar interventions. Because of the priority given to the integrated agenda by governments internationally, the need to develop and improve strong evaluation methodologies remains strikingly important.
Introduction
Integrated care refers to many different models of care [1] yet underlying these is a model where the patient's journey through the system of care is made as simple as possible. Integration of care is expected to improve quality of care, patient safety and cost effectiveness [2][3][4]. As a result, the English Department of Health has been actively encouraging integration of care within local health economies, and included a duty to encourage integration in national legislation [5].
Getting integrated care right, and then demonstrating its effectiveness, is a clinical, organisational and research challenge. Several local attempts to introduce integrated care in the English National Health Service (NHS) have been tried, with limited success [6]. Results of the national Integrated Care Pilot programme showed that despite some improvements in process, and staff perceptions that care was being integrated, the pilot programmes achieved only limited improvements in clinical effectiveness and reduction in cost, and had little effect on patient satisfaction [7].
For the integrated care agenda to proceed, robustly evaluated examples in real-world conditions are needed to examine effectiveness, justify investment and consider their potential for implementation on a large-scale. In this paper, we describe a comprehensive evaluation approach that assesses multiple aspects of a large and complex integrated care intervention in London known as the North West London Integrated Care Pilot (NWL ICP). The evaluation involves several work streams that assess the broad aims of the pilot including how it fits within the wider health economy, its impact on clinical outcomes and cost, and the patients' and professionals' experience of integrated care.
The intervention
The aim of the NWL ICP is to improve care for 15,000 people with diabetes and 22,000 people over the age of 75 in northwest London. It seeks to improve the quality of care yet at the same time reduce emergency admissions and the overall cost of care. It is a large, complex intervention covering over one hundred general practices, five local authorities, two mental health trusts, five primary care trusts, two acute hospital trusts and two voluntary organisations. The population covered is typical of inner London, with pockets of extreme affluence and deprivation side by side in an area of high population density.
As described in Harris et al. [8], the approach taken in NWL ICP contains several interventions including: risk stratification using the combined predictive model; care planning across care settings; multi-disciplinary group (MDG) meetings; new financial incentives for participating organisations; and a new information technology (IT) system to facilitate sharing of information and patient records between providers (see Box 1). The MDG meetings are designed to deliver joined-up care by bringing different health care professionals together (including GPs, hospital specialists, mental health care, community nursing, social care, and other allied health care professionals) to discuss the management of those with diabetes or older than 75 years that have been identified as having the most complex needs. Care plans are agreed in these meetings, which can then be monitored using the IT tool. The MDGs also have a secondary aim to improve interaction between primary, community, social and hospital care teams, hopefully leading to enhanced delivery models. The IT tool has been developed for the intervention and allows the various partner organizations to share, store, and analyse patient data. In particular, it allows referral support, performance management and risk management to take place, combining data from the various organizations in a central secure database. The NWL ICP is governed by an unincorporated association of its constituent organisations, with a regular management board chaired by an independent representative. It has a small dedicated management team to run the pilot on a day-to-day basis.
The logic of the evaluation
This paper describes the evaluation methodology for the NWL ICP, a significant integrated care program within a purchaser-provider split, taxpayer-funded national health system. Central in developing the evaluation plan was to recognise the complexity of the intervention in clinical, financial, strategic and political contexts. In particular, we see that the intervention targets "several integrating components" of care. These components include inter-professional communication, incentives for participation and performance and the adoption of new technologies and ways of working. These elements impact on "several groups and organisational levels" such as local and national commissioning bodies, primary and acute care in both the health and social care domains. As such, we locate the methodology within the broader theory of complex intervention evaluation and we draw from the UK Medical Research Council's guidance [6].
This evaluation therefore has a deliberately broad focus to enable the different facets of the pilot's consequences to be captured. It includes a quantitative analysis designed to measure activity, and consequently impact on cost, within the health system, together with analysis of changes to health outcomes. It also includes qualitative themes, looking at patient, clinician and manager experiences of the process of implementation, the barriers to adoption, and wider questions of how the pilot fits into the national integration agenda. The four streams of the evaluation are described in Table 1, where methods of investigation are matched to the core aims of the pilot. This evaluation will take place over the first year of the pilot, with further evaluation being planned for the future.
The ICP itself operates within a dynamic healthcare economy in the midst of financial challenges and national legislative changes [9,10]. The underlying shape of the project has been subject to change and refinement including expanding to new locations and adjusting expectations in terms of its perceived outcomes and impact. This has made its evaluation a moving target [11]. We have therefore adapted our evaluation to these changes, to fit the on-going context.
In addition, the evaluation has not sought to remain separate from the pilot process, only publishing findings at a later date. Instead, information from the evaluation process is being fed back to the organisations taking part in the pilot, and to the pilot's management board, regularly during the operation of the pilot. In particular, we will feed back findings to the ICP board in the form of an interim and a final report, and via several evaluation committee meetings en route, allowing the ICP management team to respond to findings so far (and potentially act to improve the intervention), and allowing the evaluation team to identify data sources and participants. The formative nature of this approach may lead to some confounding of our evaluation of the intervention; however the reality of service redesign is often that evaluators need to work closely with implementers.

Table 1. What this Integrated Care Pilot evaluation looks at.

Aim of the pilot: Reduce unwarranted service utilization and costs. Workstream 1: Measuring service usage patterns in relation to secondary and social care, using a propensity matched case control model, allowing cost changes to be understood.

Aim of the pilot: Improve clinical outcomes and quality of care. Workstream 2: Using a mixture of clinical process and outcome measures to observe service quality, both in primary and secondary care.

Aim of the pilot: Improve patient and professional experience. Workstream 3: A mixed methods approach to capture both professional and patient experience of the integrated care process. It consists of: 1. Non-participant observations of multidisciplinary meetings, patient case conferences and operational meetings; 2. Focus groups with patients and professionals; 3. Semi-structured interviews with patients and professionals; 4. A mixed method survey with main stakeholders.

Aim of the pilot: Understand the pilot's position within the broader integrated care agenda. Workstream 4: Qualitative analysis looking at the strategic nature of the pilot, including: the type of integration produced, analysis within the national policy context and understanding the higher-level decision-making processes involved.

Box 1: Key components of the intervention [8].

• Early identification of at-risk diabetic or elderly people, including risk stratification using the combined predictive model and others
• Proactive care planning and delivery by community teams
○ Care planning shared across care settings including clear guidance for out-of-hours services
• Proactive case management of patients with complex conditions
○ Multidisciplinary teams led by a general practitioner or consultant to ensure that care of patients with complex conditions reduces risk of hospitalization
• Appropriate emergency responses
○ Improved ambulance protocols and assessment integrated with care planning and community care
• Improved information flows and system redesign
○ Improved systems and processes to share patient notes and care plans across care settings via a novel technology platform
The evaluation framework

Workstream 1: Impacts on service use and costs
One of the intended consequences of the NWL ICP is to change the pattern of service use, and in particular, reduce the use of more expensive hospital care by substituting better preventive and anticipatory care [12]. This element of the evaluation will look at the extent to which the pattern of health and social care service use has changed for patients.
In undertaking this work, we will seek to maximise the sample of cases under study by exploiting existing administrative information. Though this means that the results are influenced by the quality and depth of information recorded, the approach has some advantages: data collection is relatively inexpensive, it becomes possible to look at large sample sizes, and, for individual-based analyses, administrative data overcome problems of recall bias when asking people about service use and medical diagnoses. Through linkage of administrative data sets, it is also possible to look at how patients use resources across organisations, including social care [13].
The analysis looks at changes and differences in a number of metrics of activity. These are: the number of hospital admissions, out-patient attendances and A&E attendances; the estimated costs of these events; the intensity of social care service use (notional cost per person per month); and the number and estimated cost of GP visits and community nursing inputs (where data allow). Activity will be costed, weighting different forms of activity using methods applied in previous work on national resource allocation models [14] and in studies of social care [15].
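As a sketch of the costing step, notional cost per patient-period is simply the sum of event counts weighted by unit costs; the unit costs below are placeholders, not the reference-cost weights used in [14,15].

```python
# Illustrative unit costs in GBP; not the weights used in the evaluation.
UNIT_COST = {
    "emergency_admission": 1800.0,
    "outpatient_attendance": 120.0,
    "ae_attendance": 110.0,
    "gp_visit": 35.0,
}

def notional_cost(activity):
    """activity: mapping of event type -> count for one patient-period."""
    return sum(UNIT_COST[event] * n for event, n in activity.items())

print(notional_cost({"emergency_admission": 1,
                     "outpatient_attendance": 3,
                     "gp_visit": 4}))  # 1800 + 360 + 140 = 2300.0
```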
One of the key challenges in undertaking analyses of changes in hospital use for complex interventions is that individuals may be selected for an intervention because they have a high use of health services. The problem is that any subsequent fall in utilisation in this group may simply be due to regression to the mean, that is, people reverting to a normal level of use irrespective of the intervention [16,17]. This means that simple changes over time are not sufficient to show an effect; there needs to be some form of control group to show what would have happened anyway. Whilst randomised prospective analyses would overcome these problems, these were not feasible in this instance due to resource constraints. As an alternative, we plan to identify controls through a quasi-experimental design, selecting from a wider population a subgroup of matched controls that are sufficiently similar to the intervention group with respect to a set of baseline variables (age, sex, comorbidities and hospital activity up to the point of intervention). The aim is to derive a control group that is well matched on all potential confounder variables so that a statistically valid inference can be drawn (see Box 2).
The proposed analysis will have two arms.

a. Comparison of use of hospital services relative to an external control group

In this part of the analysis, we will focus on a comparison with a control population drawn from other areas in England. Information will be available for the individuals enrolled in any intervention, and also for the whole populations of general practices which are participating in the Integrated Care Pilot. The data will be at person level but anonymised so that the research team cannot identify sensitive personal information or individual identities. The NHS Information Centre for Health and Social Care will act as a trusted third party to handle any confidential information and create the anonymised linked fields for use by the research team.

Box 2: Approaches to identifying matched controls.

There are a range of methods that can be used to select matched control groups, though all have the common aim of selecting a subgroup of patients who are similar to the patients receiving the intervention with respect to variables recorded for all individuals. The two most commonly used methods are the propensity score and the prognostic score. The propensity score is an estimate of the probability that a given individual will be recruited to the intervention [18] and summarises a wide range of variables such as age and prior hospital use into a single quantity. Balance can be further improved by simultaneously matching on key variables predictive of future health and social care utilisation along with the propensity score, using a multivariate distance measure such as the Mahalanobis distance [19].

An alternative strategy for finding controls is to match on the estimated probability of experiencing the outcome (for example, an emergency hospital admission), where this is calculated assuming that the intervention is not in place. This score is called the prognostic score, and the approach is called prognostic matching [20]. Prognostic matching can be combined with matching on other variables using the Mahalanobis distance. The prognostic approach weights variables by how predictive they are of future hospital admissions. As we were most concerned with balancing variables that are strongly predictive of future hospital admissions, this helped us prioritize variables in the matching, and was our selected matching method [21]; a minimal code sketch of this step follows.
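The following is a minimal sketch of the prognostic-matching step described in Box 2, not the evaluation team's actual pipeline; the column names and the synthetic data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def prognostic_match(df, covariates, treat_col="treated",
                     outcome_col="emergency_admission"):
    """Pair each intervention patient with the nearest unused control
    on a prognostic score fitted in the control pool only."""
    controls = df[df[treat_col] == 0]
    model = LogisticRegression(max_iter=1000).fit(
        controls[covariates], controls[outcome_col])
    scored = df.assign(prog_score=model.predict_proba(df[covariates])[:, 1])

    pool = scored[scored[treat_col] == 0].copy()
    pairs = []
    for idx, case in scored[scored[treat_col] == 1].iterrows():
        nearest = (pool["prog_score"] - case["prog_score"]).abs().idxmin()
        pairs.append((idx, nearest))
        pool = pool.drop(nearest)  # match without replacement
    return pairs

# Synthetic demonstration with baseline variables like those in the text:
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.integers(65, 95, n),
    "prior_admissions": rng.poisson(1.0, n),
    "treated": (np.arange(n) < 40).astype(int),
})
df["emergency_admission"] = (
    rng.random(n) < 0.1 + 0.05 * df["prior_admissions"]).astype(int)

pairs = prognostic_match(df, ["age", "prior_admissions"])
print(len(pairs), "matched case-control pairs")
```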
The aim will be to look at patterns of hospital use for this group compared to matched individuals taken from across the country representing changes associated with 'usual care'. We will select five local authorities that are the most similar to the NW London population using Office for National Statistics (ONS) corresponding health areas methodology [22].
Information on the prior patterns of diagnoses and hospital utilisation will be used to stratify cases according to the risk of admission. The actual level of utilisation before and after the agreed starting point in the pilot will be compared. In this way, we will be able to track levels of hospital use for cohorts of people for 2-3 years before they became part of the pilot. We will then test for subsequent change and compare results by risk strata. Our analysis strategy is built around a generalized difference-of-differences regression approach at the person level.
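To make the design concrete, here is a minimal person-level difference-in-differences sketch on synthetic data (column names invented; the actual models will be considerably richer): the coefficient on the treated x post interaction estimates the change attributable to the pilot over and above the trend in matched controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic long-format dataset: one row per patient per period.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "treated":    [1, 1, 1, 1, 0, 0, 0, 0],
    "post":       [0, 1, 0, 1, 0, 1, 0, 1],
    "admissions": [2, 1, 3, 1, 2, 2, 3, 3],
})

# Cluster-robust errors acknowledge repeated measures per patient.
did = smf.ols("admissions ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["patient_id"]})
print(did.params["treated:post"])  # the difference-in-differences estimate
```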
This external comparison has the advantage that it enables more precise matching, as it draws from a much larger pool of potential controls. It will also show how services for NWL cases have changed with respect to usual care in other areas. The main drawback is that this approach has to use information common to cases and controls, which effectively limits the analysis to routinely collected hospital-based information on diagnoses and activity.
b. Comparison of other health and social care services over time

In this arm, we are able to access much more detailed data sets on health and social care usage for participants in the ICP. In the first instance, this allows us to document changes in utilization across sectors for a cohort of patients. This is useful in ascertaining the relative weight of different services in overall costs, and also in indicating whether change in the use of one service is accompanied by change in another. So, for example, is a reduction in hospital bed days in the intervention group offset by an increase in social care use? There will also be scope to compare pre-post patterns of service use specifically linked with different start dates at practice level.
We will also explore the extent to which we can derive matched controls based on a wider range of local data sets; however, this requires data for the whole populations to be accessible. For some data sets, collections may be limited to specific cases only.
Workstream 2: Impacts on clinical service quality (process and outcomes)
An important and innovative feature of the ICP is the availability of linked, patient-level primary, secondary, community health services and social care data to all the clinical teams involved, via an Information Portal which integrates these data sources. Data integration is being recognised as an important intervention in its own right in integrated care programmes. The same data are being used for the evaluation in anonymised form via a Data Sharing Agreement (informed patient consent is not required for the secondary use of data for clinical audit or NHS service evaluations). Six years' retrospective primary care data and three years' NHS Secondary Uses Service (SUS) data are available for analysis. This allows patients' health and social care data to be tracked across sectors and over time, and also allows casemix adjustment for demographic and comorbidity covariates.
Because there is a long-standing trend towards improved management of chronic diseases in the NHS, simply showing that care improved after the introduction of an ICP would not be sufficient to demonstrate its effectiveness. We would need to know that the improvement was greater than that expected from underlying trends and that the improvement was also better than in non-ICP settings. This requires data from before the introduction of the ICP and also comparison of performance against a non-ICP site. Furthermore, as this evaluation is being carried out in London, the most socio-economically and ethnically diverse part of the UK, it is important that the ICP evaluation takes into account the characteristics of the populations the ICP serves. Also important is to see how well the ICP addresses the well-recognized socio-economic and ethnic disparities in access to health services and in health outcomes.
In addition to service utilization and costs, the evaluation will examine clinical effectiveness, both in terms of outcomes and process measures, for the two groups (the elderly and people with diabetes) covered by the ICP. The study will look at specific clinical process and outcome metrics, described in Table 2, using both time trends and a case comparison methodology at the practice level, comparing patient data from before and after receipt of an ICP care plan, and comparing with patients in ICP practices who are eligible for the ICP but have not yet been asked for consent or received a care plan. There will also be comparisons between local practices that have chosen to be part of the pilot and those that have not, at the practice level, as patient-level primary care data are not available for the latter. Specifically, we will examine whether the introduction of the ICP has resulted in improvements in health outcomes for patients with diabetes and for older patients.
As the key aim of any health intervention is to improve quality of care, patient safety and clinical outcomes, these should be key measures in the evaluation. This means quantifying the process and outcomes of care. For many areas of health care, standards already exist (e.g. the Quality and Outcomes Framework (QOF) and National Institute for Health and Clinical Excellence (NICE) guidelines). Examples would include HbA1c, blood pressure and cholesterol control in people with diabetes. Other key areas for quantitative evaluation are patient experience and impact on NHS efficiency and costs. Impact on NHS efficiency would include areas such as unplanned admissions for ambulatory-sensitive conditions, A&E attendances, and inappropriate prescribing, all focusing on specific diseases to improve sensitivity, although this must be traded off against reductions in numbers of events. We are also aiming to measure changes in care processes covered by the ICP Care Packages, for example, referrals to falls services.
Apart from regarding mortality and utilization of unscheduled care as adverse endpoints, there is a dearth of available outcome data meaningful to patients. The use of disease-specific patient-reported outcome measures (PROMs) in primary care is the subject of an Oxford pilot funded by the Department of Health. However the use of a measure of health-related quality of life, such as the EQ-5D, by ICPs would be of great utility both for clinical care and evaluation.
In the statistical analysis, we will compare percentage differences in annual measurement of the outcome measures using χ²-tests. Linear regressions for pre-ICP data for each patient will be generated with a time indicator, and the slope and intercept will be used to predict the future value. This value represents the expected value of the outcome if the ICP had not been established. An additional challenge in the statistical analyses will be to accommodate the hierarchical nature of the data, which are years of measurement nested within patients nested within practices. Ignoring this multilevel clustering would result in faulty estimation of standard errors. We will therefore use a random effects multilevel model in this analysis to adjust for casemix at the practice level.
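A minimal sketch of that multilevel structure, using statsmodels MixedLM on synthetic data: a random intercept per practice plus a variance component for patients nested within practices. The outcome variable, column names and effect sizes are invented for illustration and are not the evaluation's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: yearly measurements nested within patients nested
# within practices, with an invented -0.2 effect of the ICP.
rng = np.random.default_rng(0)
rows = []
for pr in range(10):
    pr_eff = rng.normal(0, 0.3)
    for pt in range(20):
        pt_eff = rng.normal(0, 0.5)
        for yr in range(4):
            post_icp = int(yr >= 2)
            rows.append({
                "practice_id": pr,
                "patient_id": f"{pr}-{pt}",
                "year": yr,
                "post_icp": post_icp,
                "hba1c": 7.5 + pr_eff + pt_eff - 0.2 * post_icp
                         + rng.normal(0, 0.4),
            })
df = pd.DataFrame(rows)

# Random intercept per practice; variance component for nested patients.
fit = smf.mixedlm("hba1c ~ post_icp + year", df, groups="practice_id",
                  vc_formula={"patient": "0 + C(patient_id)"}).fit()
print(fit.params["post_icp"])  # adjusted estimate of the ICP effect
```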
A spatial analysis will also be conducted using Geographic Information Systems (GIS). Patient data will be mapped at the Lower Super Output Area (LSOA) level to explore the spatial distribution of patients enrolled in the ICP compared to controls. Similarly, maps will be created which display the geographic distribution of outcomes, both at practice level and aggregated from individual or practice-level data as median values to LSOA level, across the ICP area pre- and post-intervention. The mapping will assist in identifying geographic areas with higher uptake of the ICP and allow monitoring of outcomes over time and space to detect where outcomes are affected by spatial factors.
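The LSOA aggregation step might look like the sketch below (geopandas; the file paths, column names and the LSOA11CD boundary code are placeholder assumptions): patient-level outcomes are reduced to LSOA medians and joined to boundary polygons for choropleth mapping.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: LSOA boundary polygons and a patient-level file
# carrying an LSOA code and an outcome column.
lsoa = gpd.read_file("lsoa_boundaries.shp")      # placeholder path
patients = pd.read_csv("patients.csv")           # cols: lsoa_code, hba1c

medians = (patients.groupby("lsoa_code")["hba1c"]
           .median().rename("median_hba1c"))
choro = lsoa.merge(medians, left_on="LSOA11CD",  # ONS code column, assumed
                   right_index=True, how="left")
choro.plot(column="median_hba1c", legend=True)   # one map per period
```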
Workstream 3: Qualitative assessment of the impact of the Pilot
This part of the study investigates the human perception, experience and involvement of participants in the pilot. We hope to develop a comprehensive understanding of the patients', carers' and professionals' experiences and perceptions of the pilot, as well as their suggestions for effective implementation. In addressing these broad objectives, we are employing a mixed methods design. This will include focus groups with patients and professionals to understand their perceptions and experiences of the pilot and semi-structured interviews with a purposive sample from both patients and professionals to investigate the perceptions of all users. Topic guides for these interviews focus on users' experience of the integrated pilot, and their attitudes towards the intervention. We will use a thematic approach to explore and integrate the findings of this approach [23].
To complement these findings, we will also develop and implement a survey to record patient and carer experience, and a separate survey of professional experiences. Survey questions will explore issues such as motivation to take part in the pilot and experience of participating, but will also incorporate questions specific to themes raised from participant interviews and observations, to ensure the surveys reflect the challenges faced by the pilot during implementation.
Finally, in this qualitative component, we will include a novel analysis of patterns of communication within MDG meetings, looking at the nature and direction of conversation between participants. MDG meetings involve the participation of GPs, hospital consultants, and community and social service professionals, each from different organizations within the local health economy, and are therefore different to MDGs within hospital settings. We will explore whether traditional power relationships and communication patterns persist or are broken down, leading to a more integrated way of working between the professional groups. Does the discussion of the complex clinical cases brought to the MDG meeting lead to or foster opportunities to consider the wider health economy and ways to improve and identify efficiencies in and between participants' respective organizations? This will involve recording and transcribing multidisciplinary group meetings, and then coding the utterances that occur. We draw on Bales' validated coding scheme [24] to describe and characterize the content of the utterances and the kinds of interactions that occur. We will explore whether some professional groups dominate the conversations and whether their discussions focus on individual patient-level detail or broader ways of working together as a heterogeneous but integrated group. A fuller description of the methodology of this novel approach is available elsewhere [25].
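As a simple illustration of the kind of analysis this coding enables (the speakers and category labels below are hypothetical; the actual scheme follows Bales [24,25]), coded utterances can be cross-tabulated by professional group and category:

```python
from collections import Counter

# Hypothetical coded utterances: (professional group, Bales-style code).
utterances = [
    ("GP", "gives_opinion"), ("consultant", "gives_information"),
    ("GP", "asks_for_information"), ("social_worker", "gives_information"),
    ("GP", "gives_opinion"), ("community_nurse", "agrees"),
]

floor_share = Counter(group for group, _ in utterances)
category_mix = Counter(code for _, code in utterances)
print(floor_share)    # does one professional group dominate the floor?
print(category_mix)   # task-focused vs. socio-emotional balance
```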
The findings from this component of the evaluation will add to our understanding of professional practice in integrated care programmes and contribute towards a framework of knowledge to inform policy and organisational change processes related to integration, enhanced communication and collaborative working.
Workstream 4: Strategic evaluation of the pilot within the national policy context
The final part of the evaluation has the broadest focus; examining how the pilot operates at an organisational level and how the wider policy environment has shaped the design and implementation of the initiative. This component also aims to ground the evaluation of the pilot in the context of the field of integrated care in the NHS and beyond. This workstream complements the other workstreams by taking a strategic overview. To this end, we aim to explore how national policy has impacted upon the design, implementation and operation of the pilot, identifying factors that have facilitated or hindered progress. By locating the pilot within the wider literature and evidence base, we hope to draw insight from other national and international models of integrated care, identifying and exploring areas where the pilot appears to be distinctive. We will also examine how the pilot is developing at a strategic level in order to understand the organisational level motivations for, and challenges of, developing integrated care.
This part of the evaluation will be addressed via a programme of on-going policy analysis, observations and semi-structured interviews. Interviews are being undertaken with senior representatives of the key organisations involved in the pilot. We seek to understand the organisational and strategic motivations for engaging with the pilot and any challenges and barriers to doing so. Interviews also aim to understand how national policy and local contextual factors, such as organisational relationships, financial positions and local priorities, have shaped the design of the pilot and helped or hindered its implementation and development. Ongoing consultation with key individuals in the wider policy arena will ensure that interviews with those involved in the pilot address appropriate issues. The policy literature will be regularly reviewed to ensure that new evidence and emerging issues are taken into account.
Interviewees from all key organisations have been identified to ensure the evaluation understands the pilot from the perspective of all the different players. A number of observations of board meetings, committee meetings and operational team meetings will complement interviews by offering an insight into how the pilot is being implemented and highlighting particular dynamics and challenges at a strategic level. Interview and observational data will be analysed by qualitative researchers who will identify key themes, drawing out the most important barriers and enablers. The framework of analysis will be based on the theoretical literature of integrated care and the wide body of policy literature. Where appropriate, comparisons will be made with other examples of integrated care. This analysis will add to the body of literature and evidence on the implementation of integrated care initiatives, extending our understanding of the challenges involved in executing large scale change within a dynamic policy environment.
Dealing with complexity
The complex nature of the intervention and the environment will make attribution of cause and effect difficult. This pilot is occurring in a period of almost unparalleled structural reforms in the English National Health Service, many of which may have an impact on the desired effect of the pilot. The ICP could be described as the introduction of a complex intervention into a complex environment, having the characteristics of adaptation and learning by both those delivering and receiving the intervention, feedback loops, a sensitivity to starting conditions, and a diversity of activities and emergent outcomes [11,26].
We are aware of the tension between providing early evaluation results to inform decision makers and the need to apply rigorous analytical methods. For example, in some cases changes can only be confirmed with large samples and longer follow-up periods. In addition, experience from other countries shows that successful integrated care organisations take many years or decades to develop.
Exploring the counterfactual in this case will consequently be difficult. The presence of control groups is reassuring, but comparison against other areas of a national health economy where innovation is being actively encouraged means that it is difficult to confirm if the control groups are genuinely intervention free. At best, they represent the average pattern of care seen outside of this pilot environment. There are also other projects in the same geographical area that might influence the findings.
Year on year comparison is made more difficult by the secular trends and on-going structural changes occurring in the NHS. Furthermore, any evaluation that includes clinical outcomes has to be timed appropriately, to allow the natural history of disease, and the effect of interventions on this, to take its course. Although this plan describes a time-limited evaluation, for reasons of cost, an ideal evaluation would follow patients and appropriate controls over a prolonged period. Given this environment and the complex and iterative nature of the intervention, the evaluation requires a degree of flexibility in method and process, to learn and adapt as the intervention does.
We accept that there are limitations to our approach, and that we were not able to move systematically through the various stages of evaluating a complex intervention as would ideally be done using the MRC's framework, including modelling and delivering a small-scale proof of concept. However, given the financial constraints placed on many innovative delivery models in the current period of financial austerity, we suggest that this work will provide a useful, real-world model for others attempting the evaluation of similar schemes.
Strategies we have adopted in the evaluation design to mitigate the various challenges faced are listed in Table 3. We appreciate that there are many attempts to evaluate complex interventions using mixed methods approaches. We believe that this approach adds extra depth, beyond simple quantitative aspects of performance and qualitative assessment of user experience, by measuring integration behaviour and strategic, organisational level experience. This may serve as a useful resource for others also embarking on the evaluation of complex interventions such as integrated care pilots.

Table 3 (rows 3-6). Challenges faced and the mitigation strategies adopted:
3. Inconsistent implementation and levels of involvement from different organizations. Strategy: non-participant observation in operations meetings, MDG performance reviews, IMB meetings and sub-committee meetings to stay abreast of significant decisions and changes to the ICP as it is being implemented, locating the findings within this meso-organizational context.
4. Adaptive responses to the pilot implementation over time. Strategy: non-participant observation of MDG, Independent Monitoring Board and subcommittee meetings, and informal interviews with ICP staff to ascertain early changes to systems, processes and structures within participating organizations, and to feed this into the broader summative evaluation findings.
5. Heterogeneous and incomplete data sources. Strategy: novel linkage of data sets allowing combination of primary care, secondary care and social care data.
6. Implementation in a region already crowded with multiple other interventions, each with the potential to confound the findings. Strategy: identify concurrent programmes within the same geographical area and ascertain the reasonable contribution of the ICP to broader health care changes and outcomes; triangulation of health outcomes from the ICP with interviews with participants, key informants and local stakeholders to ascertain views on the impact of the ICP.
Conclusion
The proposed methodology, with a focus on looking at service usage and quality, and a matched case comparison approach, will allow a robust assessment of the effectiveness of a large integrated care intervention within the NHS. We believe that the investigation of the qualitative aspects, including ways of working, barriers to adoption, and staff and patient experience, will allow us to gain an insight into those 'softer' cultural aspects of the development of the integrated care model, which have often proved so hard to capture.
Despite its limitations, we hope this evaluation allows us, in the words of Tom Ling, to clarify contribution (how reasonable it is to believe that the intervention contributes effectively to the intended goals) even if it is not able to definitively identify attribution (what proportion of the desired outcomes was produced by the intervention) [11]. The broad, dynamic and responsive nature of the approach should allow some of the inherent complexities to be accounted for. Given the priority accorded to the integration agenda by governments internationally, the need to develop and improve strong evaluation methodologies remains strikingly important.
Fluorescence Based Comparative Sensing Behavior of the Nano-Composites of SiO2 and TiO2 towards Toxic Hg2+ Ions
We have synthesized sulfonamide-based nano-composites of SiO2 and TiO2 for the selective and sensitive determination of the toxic metal ion Hg2+ in aqueous medium. Nano-composites (11) and (12) were morphologically characterized by FT-IR, solid-state NMR, UV-vis, FE SEM, TEM, EDX, BET, pXRD and elemental analysis. Their comparative sensing behavior, the effect of pH, and the sensor concentrations were investigated by fluorescence measurements on a spectrofluorometer, and both nano-composites (11) and (12) were evaluated as "turn-on" fluorescence detectors for toxic Hg2+ ions. The LODs were calculated to be 41.2 and 18.8 nM for nano-composites (11) and (12), respectively. The detection limit of the TiO2-based nano-composite was thus lower than that of the SiO2-based nano-composite.
Introduction
Among the several metal-ion-based pollutants, mercury is a foremost contaminant for human health and the environment. Both natural and anthropogenic activities generate mercury contamination in the surroundings [1][2][3][4]. Mercury in all of its oxidation states (Hg 2 2+ , Hg 2+ , Hg 0 , [CH 3 Hg] + ) is released into the environment by the combustion of coal and of medical and industrial waste [5][6][7]. Additionally, processes such as chlor-alkali production and gold mining add mercury to nature. Inorganic mercury pollutants can also be absorbed and transformed into organic forms by bacteria and microbes [8][9][10][11][12]. The most abundant and stable forms of mercury present in nature are in the +2 oxidation state. Mercury enters living systems through respiration, skin absorption and oral intake. Owing to its high bioaccumulation and bio-amplification, multi-step contamination of the food chain to hazardous levels of mercury has been reported [13][14][15]. Even a small amount in the body triggers long-term irreversible damage to human health, with unfavorable impacts on vital organs and tissues such as the brain, nervous/immune system, kidney and liver, and induces cognitive and motion disorders [16][17][18][19][20]. The maximum permissible concentration of Hg 2+ ions in drinking water is 1 µg/L, as defined by the United States Environmental Protection Agency [21]. Therefore, it is of great importance to develop a rapid and eco-friendly method to detect Hg 2+ ions with high sensitivity and selectivity.
Atomic-absorption spectrometry (AAS) and inductively-coupled plasma mass spectrometry (ICP-MS) are the most common instrumental techniques for metal detection, but colorimetric and fluorogenic sensing procedures have proved more effective as on-site tools for this purpose [22,23]. The colorimetric method is advantageous because of its easy naked-eye readout and its potential for high-throughput formats. Being organic in nature, however, such sensors are sometimes associated with limitations such as lower stability and the need for high sensor concentrations, and they do not reach limits of detection (LOD) as low as fluorescence- or luminescence-based approaches [24][25][26][27][28]. Moreover, owing to their diffusive nature they are incapable of removing the ions from the medium, a shortcoming that can be rectified with nano-material techniques.

Stock solutions of various anions, including HPO 4 2− , HCO 3 − , I − , NO 3 − , NO 2 − , OH − , SO 4 2− and SO 3 2− (as sodium salts), were prepared in double distilled water. Stock solutions of the nano-composites (11) and (12) were also prepared by dispersing 0.01 g of nano-composite in 1.00 L of double distilled water. These dispersed solutions of the synthesized nano-composites were sonicated for an hour to obtain stable colloidal solutions. A 3.00 mL aliquot was taken in a quartz cuvette, to which 50.00 nM solutions of the various metal ions and anions were added sequentially to check the selectivity of the nano-composites for any specific ion.
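For orientation, the working concentration implied by this dispersion is simple arithmetic: 0.01 g/1.00 L = 10 mg L−1, i.e., approximately 10 ppm for a dilute aqueous dispersion, which matches the 10 ppm sensor concentration used in the emission studies below.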
Finally, the practical utility of the nano-composites (11) and (12) for Hg 2+ ion detection was checked on tap water, distilled water and bottled water samples. The tap water was taken from a research laboratory of the Department of Chemistry (PAU, Ludhiana) and the bottled water was purchased from the local market. The collected samples were filtered and adjusted to pH 7.4 (10 mg in 5 mL HEPES buffer). These samples were spiked with various concentrations of Hg 2+ ions. The fluorescence intensities were recorded in triplicate, with their mean values taken as the final datum to calculate the percentage recovery.
Synthesis of 4-((4-Oxo-4H-chromen-3-yl)methyleneamino)benzenesulfonamide (3)
For the preparation of the Schiff's base ligand (3), sulfanilamide (1) (1.00 mmol, 0.174 g) and 3-formyl chromone (2) (1.00 mmol, 0.172 g) were taken in a round-bottomed flask containing 15.00 mL of absolute alcohol (Scheme 1). After 5 min, 2-3 drops of glacial acetic acid (AcOH) were added and the mixture was refluxed until the completion of the reaction (6 h, TLC). The reaction mixture was then allowed to stand at room temperature, and the solid so obtained was filtered and washed with diethyl ether (3 × 40.00 mL). Recrystallization from absolute alcohol furnished the pure product (3).

1 H and 13 C NMR spectra of the ligand (3) were recorded in DMSO at 500 MHz and 125 MHz, respectively, on a BRUKER spectrometer (Fällanden, Switzerland) at room temperature, using TMS as an internal standard; chemical shifts are given in δ. Solid-state 13 C and 29 Si cross-polarization magic angle spinning (CPMAS) NMR spectra were recorded on a Bruker 700 MHz spectrometer at the Tata Institute of Fundamental Research (TIFR) Centre for Interdisciplinary Sciences, Hyderabad, India. The surface morphologies of the samples were determined by field emission scanning electron microscopy (FE SEM, Hitachi, Ibaraki Prefecture, Japan); samples were recorded on a Hitachi SU 8010 with EDX (Thermo Noran System SIX, Ibaraki Prefecture, Japan). X-ray diffraction (XRD) patterns were recorded in the range of 5°-85° 2θ on a SAXSPACE, Anton Paar instrument (Gurugram, India) equipped with a Cu-Kα radiation source (λ = 0.154060 nm). FT-IR, NMR, FE SEM and powder XRD studies were conducted at the Sophisticated Analytical Instrumentation Facility (SAIF), Panjab University (PU), Chandigarh, India.
Brunauer-Emmett-Teller (BET) surface area was determined using a Quantachrome Autosorb iQ3 instrument (Cinnaminson, NJ, USA) at the Advanced Material Research Centre, Indian Institute of Technology (IIT) Mandi, North Campus, Himachal Pradesh. Surface area and pore volume were determined using the BET equation and the Barrett-Joyner-Halenda (BJH) method, respectively.
Synthesis of Silica Nanoparticles (5)

Silica nanoparticles (5) were prepared by a modified Stöber process [41,42]. In a 100 mL conical flask, 1.00 mL of TEOS (4) was added to 10.00 mL of absolute alcohol and the mixture was sonicated for 5 min. Then, 10.00 mL of 25% ammonium hydroxide solution (NH 4 OH) and 10.00 mL of absolute alcohol were added slowly to the reaction mixture during sonication. The reaction mixture was sonicated for 1 h to obtain a white turbid suspension, which was then centrifuged for 2 h. The separated silica nanoparticles were washed with water, re-dispersed in alcohol and centrifuged again for 1 h. Finally, the powdered silica (5) obtained after repeated filtrations was dried in a vacuum oven and calcined at 400 °C in a furnace (Scheme 2).
Likewise, using a similar approach, titania (7) was synthesized taking tetraisopropyl orthotitanate (TIPT) (6) as the precursor (Scheme 3), and the nanoparticles were obtained in good amount.

Further, to create organic-molecule holding sites, the synthesized nanoparticles (5) and (7) were functionalized with 3-aminopropyltriethoxysilane (APTES), which provided binding sites for the organic chemosensor.
Similarly, the synthesis of APTES@TiO 2 (10) was carried out using the above procedure with titania (7) as the core material (Scheme 4); APTES@TiO 2 was likewise furnished in good amount.

Scheme 4. Synthesis of the functionalized nanoparticles of silica (9) and titania (10).
Chemoreceptor Spectral Studies
The photo-physical behavior of the nano-composites (11) and (12) was tested independently, followed by a study of the variations obtained in the emission analysis of nano-composites (11) and (12) in the presence of various metal ions and anions. Initially, the emission spectra of the nano-composites (11) and (12) at 10 ppm concentration were analyzed before and after the addition of metal ions and anions, with excitation at 290 nm. The results indicated that a specific variation in emission intensity of (11) was obtained with Hg 2+ ions only, and no other metal ion was able to alter the emission peak of nano-composite (11). Additionally, a linear fitting equation between the fluorescence intensity of nano-composite (11) and the Hg 2+ ion concentration (varying the concentration from 4-50 nM) was applied to verify the emission response of (11) to Hg 2+ ions. Similarly, emission spectral analysis of nano-composite (12) at the same concentration was recorded by adding the various metal ions and anions, and this also showed selectivity towards Hg 2+ ions. A molar increment experiment with Hg 2+ ions was conducted between 2-35 nM concentrations as per the range of detection (Sections 3.2-3.4).
Chemistry of Nano-Composites (11) and (12) and Their Turn-On Emission Due to Hg 2+ Ions
The structures of nano-composites (11) and (12) were designed keeping in mind the need for heteroatomic sites for binding of the analyte, which can adhere either because of its specific size or due to atom-selective coordination linkage [44]. Further, the nanoparticles provided a solid phase for attachment, which can hold the ligand for a long period and bind strongly to the surface.
Prior to the investigation of Hg 2+ ion-selective emission studies of (11) (10 ppm), the emission profile of (11) was recorded as a free sensor with an excitation wavelength of 290 nm. The fluorescence emission data revealed that nano-composite (11) exhibited a distinct peak at 445 nm (blue emission) with very low intensity upon excitation at 290 nm, showing that nano-composite (11) was not fluorogenic in nature. Further, nano-composite (11) (10 ppm) was tested against various metal ions (Al 3+ , Ag + , Ba 2+ , Ca 2+ , Cd 2+ , Cu 2+ , Cr 3+ , Co 2+ , Fe 3+ , Hg 2+ , K + , Li + , Mn 2+ , Mg 2+ , Na + , Ni 2+ , Pb 2+ and Zn 2+ ); only Hg 2+ showed an unambiguous intensity growth (Figure 1). This increment in intensity is considered one of the relevant and valid ways to confirm the presence of a specific ion as turn-on fluorescence. The fluorescent enhancement factor (FEF) was found to be 10.9, i.e., a 10.9-fold rise in the intensity of the peak at 445 nm in the fluorescence plot of (11) with Hg 2+ ions in aqueous medium.
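Read this way, the FEF is simply the ratio of the complexed to the free-sensor emission intensity at the analytical wavelength, FEF = I[(11)+Hg 2+]/I(11) ≈ 10.9 at 445 nm; the explicit ratio form is inferred from the 10.9-fold figure above rather than defined separately.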
The emission profile of nano-composite (12) was likewise recorded as a free sensor with an excitation wavelength of 290 nm. Emission spectral analysis of nano-composite (12) exhibited a distinct peak at 520 nm (green emission) upon excitation at 290 nm, with a negligible intensity of 50.00 a.u. Additionally, nano-composite (12) (10 ppm) was tested against the various metal ions and, here again, only the Hg 2+ ion showed emission intensity enhancement (Figure 2). This also shifted the sensor emission to the fluorescence turn-on mode for toxic Hg 2+ ions. In the case of the titania-coated ligand (12), the FEF was found to be 11.1, i.e., an 11.1-fold rise in the intensity of the peak at 520 nm in the fluorescence plot of (12) with Hg 2+ ions in aqueous medium, which is higher than that of the silica-based nano-composite (11).

The fluorescence titration profile is shown in Figure 3. The intensity of the peak at 445 nm of (11) gradually increased with increasing concentration of Hg 2+ ions, without any shift in the emission wavelength. By applying a linear fit to the plot of Hg 2+ ion concentration versus intensity change, the values of the LOD and the limit of quantification (LOQ) were assessed with the equations LOD = 3Sd/slope and LOQ = 10Sd/slope, where Sd is the standard deviation. The values of LOD and LOQ were found to be 41.2 nM and 137.3 nM (R 2 = 0.972), respectively (Figure 3).
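A minimal sketch of this LOD/LOQ arithmetic from a calibration series is given below; the concentration-intensity pairs are invented for illustration, and Sd is taken as the standard deviation of the fit residuals, which is one common reading of the 3Sd/slope convention.

```python
import numpy as np

# Hypothetical calibration data: Hg2+ concentration (nM) vs. intensity (a.u.)
conc = np.array([4, 10, 20, 30, 40, 50], dtype=float)
intensity = np.array([120, 210, 370, 520, 680, 830], dtype=float)

# Linear fit: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

# Sd taken here as the standard deviation of residuals about the fit
residuals = intensity - (slope * conc + intercept)
sd = residuals.std(ddof=2)  # ddof=2: two fitted parameters

lod = 3 * sd / slope   # limit of detection (nM)
loq = 10 * sd / slope  # limit of quantification (nM)
print(f"slope = {slope:.2f} a.u./nM, LOD = {lod:.2f} nM, LOQ = {loq:.2f} nM")
```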
Further, a molar increment titration experiment was conducted to study the binding interactions between (12) and Hg 2+ ions. A gradual increase in the weak emissive band of (12) at 520 nm was obtained with increasing concentration of Hg 2+ ions (up to 20 nM) (Figure 4), and the complex [(12)+Hg 2+ ] became fluorogenic in nature. From this, we observed that the emission intensity of (12) is directly proportional to the added Hg 2+ ion concentration. Further, the LOD calculated by linear emission fitting for [(12)+Hg 2+ ] was found to be 18.8 nM, followed by an LOQ of 62.83 nM (R 2 = 0.975) (Figure 4 inset). These values of LOD for the detection of toxic Hg 2+ in aqueous medium with silica and titania were found to be very low compared to some recently reported solid-phase dispersive nano-composite chemosensors (Table 1). From Table 1, it can be seen that few sensors are available for Hg 2+ detection below 20 nM concentration by fluorescence spectroscopy, and that the sensors with detection limits below 10 nM depend on potentiometric and digital-information methods and were used for catalytic activities rather than sensing.
Competitive Binding Analysis (Interference Analysis)
Competitive binding analysis was carried out to assess the realistic value of (11) as an Hg 2+ ion-selective chemosensor in the rival environment of intrusive metal ions in aqueous medium. The experiment was carried out by taking 10 ppm of nano-composite (11) and 15 ppb of Hg 2+ ions in deionised water, spiked with all of the tested metal ions and anions. No visible alterations in the emission intensities were recorded in the spiked samples of the metal ions (Figure 6a inset). Therefore, it can be concluded that (11) exhibits high sensitivity, selectivity and a turn-on fluorescence response towards Hg 2+ ions. Additionally, to check the efficacy of (11), a normalized data plot of Hg 2+ ions along with the various intruding metal ions is presented in Figure 6a.
Additionally, the selectivity of nano-composite (12) towards Hg 2+ ions was tested with the addition of 15 ppb of the other interfering ions. It can be seen from the experimental data that the addition of the various ions had no or negligible effect on the emission intensity of the [(12)+Hg 2+ ] complex (Figure 6b inset). A normalized plot of [(12)+Hg 2+ ] was drawn to compare the emissive behavior of the [(12)+Hg 2+ ] complex with the other studied metal ions (Figure 6b). These results proved that nano-composites (11) and (12) are promising emissive sensors for toxic Hg 2+ ions in aqueous samples, even in the presence of the most intruding metal ions and anions.
Plausible Mechanism of Sensing of (11) and (12)
From the above data, various factors can be listed to rationalize the observed emission enhancement of nano-composites (11) and (12) by Hg 2+ (Scheme 6). The weak fluorescence of (11) and (12) in the absence of Hg 2+ can be attributed to instant cis-trans isomerization across the imine (C=N) bond. The possible binding mechanism of (11) and (12) with Hg 2+ that led to the fluorescence changes is shown in Scheme 6. The nano-composites most likely bind Hg 2+ ions through the corresponding oxygen and nitrogen atoms, which results in fluorescence enhancement due to the ligand-to-metal charge transfer (L-MCT) character of Hg 2+. Upon chelation of the probe with Hg 2+, chelation-enhanced fluorescence (CHEF) is produced within the organic moiety of the nano-composites (11) and (12) [57]. It was also found that the detection limit for Hg 2+ is lower in the case of titania than for the silica-coated organic ligand, attributed to the fact that the size and the amount of organic ligand coated on the surface of TiO 2 are greater than for SiO 2 , as also seen in the solid-state NMR and BET studies described below.
Scheme 6. Plausible mechanism of Hg 2+ sensing via nano-composites (11) and (12).
Effect of pH on Nano-Composites (11) and (12) with Hg 2+ Ions
The pH of the solution is an important variable in sensing studies of aqueous samples, as it affects the surface of the nano-composites and the coordination sites while sensing analytes. Therefore, the effect of pH on the sensing analysis of nano-composites (11) and (12) was examined over the range pH 2-11. Solutions of nano-composites (11) and (12) over a wide range of pH were prepared, along with solutions of the complexes [(11)+Hg 2+ ] and [(12)+Hg 2+ ], to compare their changes in sensing behavior; NaOH and 0.1 N HCl were utilized to adjust the pH of the different solutions of nano-composites (11) and (12). As shown in Figure 7a,b, at acidic pH values from 1-3 the emission of the Hg 2+ ion complexes with the respective nano-composites (11) and (12) was diminished. This is attributed to the blocking of the coordination sites of nano-composites (11) and (12) by excessive H + ions, which in turn reduced the ligand-to-metal charge transfer (L-MCT) and was responsible for the decrease in emissive intensity of the respective complexes with Hg 2+ ions. Furthermore, with an increase in pH up to near-neutral values, i.e., pH 4-10, the coordination sites became freely available for the binding of Hg 2+ ions, which again enhanced the intensity of nano-composites (11) and (12) to the initial level. Proceeding towards alkaline pH, owing to the excess of hydroxide ions in solution, the tendency of Hg 2+ ions towards OH − increased due to the opposite charges and hence resulted in the formation of Hg(OH) 2 . According to literature reports, Hg(OH) 2 is highly unstable and is readily converted to HgO in aqueous media [58]. This was found to be the fundamental reason behind the quenching of the emission intensity of the complexes of (11) and (12). Thus, the results indicated that the optimum pH for the detection of toxic Hg 2+ ions is 4 to 10 at room temperature.
FT-IR Studies
Structural analysis of the nano-composites (11) and (12) was carried out by FT-IR spectroscopy [59]. Similarly, FT-IR spectral studies were conducted to verify the insertion of the organic moieties into nano-composite (12). Figure 9 shows the FT-IR spectra of TiO 2 , APTES@TiO 2 and (3)@APTES@TiO 2 . In all samples, the characteristic bands of the titania framework were present at around 800 cm −1 (symmetric stretching vibrations of Ti-O), 960 cm −1 (symmetric stretching vibration of Ti-OH) and 1200 cm −1 (asymmetric stretching vibrations of Ti-O-Ti), together with bands at 3400 cm −1 (physisorbed water molecules) and 3437 cm −1 (stretching vibrations of OH groups). The new bands within the range 2409-2486 cm −1 are characteristic of aliphatic alkyl-chain C-H vibrations (Figure 9b). In Figure 9c, new bands at around 1179 and 1454 cm −1 were assigned to the S=O and C=N stretching vibrations of the imine linkage, respectively, and another band around 1538 cm −1 was assigned to -NH vibrations. The band at 1652 cm −1 was assigned to the imine bond formed, confirming the coating on the functionalized titania nanoparticles derived from (7). Thus, the FT-IR spectra of titania (7) and the functionalized titania composite (12) also confirmed the incorporation of the fluorophore groups into the TiO 2 framework.
Solid-State 13 C CPMAS and 29 Si CPMAS NMR Spectroscopy
The functionalization of the silica and titania nanospheres with APTES and the organic compound (3) was investigated by 13 C and 29 Si CPMAS NMR spectroscopy. In the 13 C spectrum of APTES@SiO 2 , three resonance peaks appeared around δ 22.18, 24.31 and 30.83 ppm, assigned to the three carbon atoms of the integrated 3-aminopropyl chain of APTES. Similarly, the 13 C CPMAS NMR spectrum of APTES@TiO 2 also showed three peaks, at δ = 10.57, 22.07 and 42.67 ppm, likewise assigned to the incorporated aminopropyl chain. In the 13 C spectrum of APTES@SiO 2 , however, two additional low-intensity peaks were also seen at δ 66.15 and 42.69 ppm, indicating the existence of tiny amounts of unreacted ethoxy groups of APTES, as shown in Figure 10. These data showed that, compared with silica, the titania surface was more effectively covered by and more strongly bound to APTES, which in turn also helped the organic compound (3) adhere to the nano-composite surface more efficiently.
The solid-state 29 Si CPMAS NMR spectra of (3)@APTES@SiO 2 and (3)@APTES@TiO 2 are shown in Figure 11. The peaks that appeared around −52.21 ppm and −57.32 ppm corresponded to the silanol group of C-Si(OSi) 2 (OH) (T 2 ) and the C-Si(OSi) 3 group (T 3 ), respectively, providing clear evidence that the nano-composite sensing material (3)@APTES@SiO 2 (11) was made up of a silica scaffold with an organic group covalently bonded to the SiO 2 nanoparticles. In addition, the spectrum showed further peaks associated with silica's inorganic polymeric structure: the Si(OSi) 4 (3D) group (Q 4 ) was allocated to −111.49 and −113.63 ppm, while the free silanol group of Si(OSi) 3 (OH) (Q 3 ) was also observed.
Elemental (C, H, N) and Surface Area Analysis (BET Studies)
Elemental studies for the determination of C, N and H percentages were conducted to verify the successful modification of nanoparticles (5) and (7) with the organic ligand (3). The presence of appropriate percentages of C and N in materials (9), (10), (11) and (12) confirmed the formation of the organic-ligand-coated nano-composites (11) and (12), as shown in Table 2. To verify the surface modifications with the organic ligand, surface studies of the prepared materials (9), (10), (11) and (12) were conducted. In the BET studies, the surface areas of the nano-composites (11) and (12) were compared with those of the bare silica nanoparticles (5) and the bare titania nanoparticles (7).
As listed in Table 2, the BET surface areas of the silica and the APTES-functionalized material were found to be 201.81 m 2 g −1 and 113.21 m 2 g −1 , respectively, decreasing further to 77.56 m 2 g −1 for nano-composite (11). As expected, the BET studies revealed that the surface area of the nanoparticles decreased in the order SiO 2 (5) > APTES@SiO 2 (9) > (3)@APTES@SiO 2 (11), which confirmed the modification of the silica surface. The immobilization of the SiO 2 nanoparticles (5) with the ligand (3) and APTES blocked nitrogen access onto the surface of (5). These results are in good agreement with previous studies.
The size, morphology and topographical studies of the synthesized nanoparticles were examined using field emission scanning electron microscopy (FE SEM). Figure 12 shows the FE SEM images of the nano-hybrid sensing material (11). The FE SEM micrographs revealed that the nanoparticles were spherical in shape, with a rough coating of organic ligand over their surface that was not covalently attached all over; this morphology was maintained throughout, and most particles did not agglomerate into clusters. It was also found that the average particle size of the functionalized SiO 2 nanoparticles was approximately 300 nm.
Energy-dispersive X-ray (EDX) studies revealed the presence of sulfur and carbon, which confirmed the coating and functionalization of nano-composite (11). The elemental percentages in (11) showed that oxygen was present in the highest amount, followed by silicon, which forms the core of nano-composite (11). The presence of carbon and sulfur confirmed the coating of the organic ligand (3) over the APTES-modified nanoparticles (9) (Figure 13, Table S1).
The FE SEM micrographs of (12) showed that nano-composite (12) likewise possessed a rough coating of organic ligand over its surface that was not covalently attached uniformly all over, and the average particle size of the functionalized TiO 2 nano-composite was approximately 210 nm. Additionally, the presence of sulfur and carbon in the EDX spectra of (12) confirmed the coating of the organic ligand (3) on the functionalized nanoparticles (10) to finally obtain nano-composite (12). The elemental percentages in (12) showed that oxygen was present in the highest amount, followed by titanium, which forms the core of nano-composite (12) (Figures 14 and 15; Table S2).
Transmission Electron Microscopy (TEM) Analysis
TEM studies were also conducted to check the size and morphology of the synthesized nano-composites (11) and (12), using a field emission gun at a voltage of 300 kV. The samples were prepared by suspending the nano-composites in absolute alcohol and then drying a drop of the suspension on a carbon-coated copper TEM grid. The TEM micrographs revealed that nano-composites (11) and (12) were mostly mono-dispersive, with average sizes of 295 ± 8.82 and 210 ± 7.73 nm for nano-composites (11) and (12), respectively, consistent with the FE SEM observations (Figure 16).
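Values of the form 210 ± 7.73 nm are, in the usual convention, the mean ± standard deviation of particle diameters measured from the micrographs; a minimal sketch with invented diameters is shown below.

```python
import numpy as np

# Hypothetical particle diameters (nm) measured from TEM micrographs
diameters = np.array([202.0, 215.5, 207.8, 219.1, 204.6, 211.3, 216.9, 208.4])

mean = diameters.mean()
sd = diameters.std(ddof=1)  # sample standard deviation
print(f"average particle size: {mean:.0f} ± {sd:.2f} nm")
```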
X-ray Diffraction (XRD) Studies

X-ray powder diffraction studies were conducted to confirm the structural characterization of SiO 2 (5), APTES@SiO 2 (9) and the (3)@APTES@SiO 2 nano-composite (11). The powder XRD patterns were recorded and interpreted via Bragg's equation, λ = 2d sinθ, using Cu-Kα radiation, as shown in Figure 17a. An amorphous peak appeared at a Bragg angle of 2θ = 23°, corresponding to the SiO 2 prepared by the modified Stöber method after thermal treatment at 400 °C. The single broad halo arises from the average molecular separation in the amorphous phase and confirmed the non-crystalline nature of the silica prepared by the modified Stöber method. The literature indicates that the 2θ value of amorphous silica depends upon the temperature treatment and the water-to-tetraethoxysilane (TEOS) ratio; our measured 2θ values of silica were found to be analogous to the reported value [61].
Further, the XRD patterns of the APTES-functionalized silica (9) and nano-composite (11) showed the peak at 2θ = 23° only, with increased peak intensities, confirming the immobilization of APTES and the organic ligand (3) on the silica surface while the amorphous character of the silica was retained. Similarly, the structural characterization of TiO 2 , APTES@TiO 2 and (3)@APTES@TiO 2 (12) was carried out with powder XRD analysis, and the resulting patterns are presented in Figure 17b. All samples showed a single broad peak indicating their amorphous nature, and they preserved this non-crystalline nature even after functionalization with APTES and the organic moieties, which implied that the TiO 2 nanoparticles were stable enough to undergo the chemical modification reactions, the same as the silica nano-composite (11). However, the XRD peak intensities decreased on moving from TiO 2 to APTES@TiO 2 to (3)@APTES@TiO 2 (12), which also indicated the successive immobilization of APTES and (3) onto the TiO 2 matrix.
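For orientation, Bragg's equation puts the amorphous halo on a length scale: with Cu-Kα radiation (λ = 0.15406 nm) and the peak at 2θ = 23° (θ = 11.5°), d = λ/(2 sinθ) = 0.15406 nm/(2 × 0.1994) ≈ 0.39 nm, a worked value illustrating the mean molecular spacing in the amorphous network rather than a figure reported above.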
Application on Real Samples
To authenticate the practical applicability of nano-composites (11) and (12), the composites were also applied to mercury determination in real samples. Nano-composites (11) and (12) were successfully applied to three different types of water (tap, distilled and bottled water) for the detection of Hg 2+ ions. Tap water was filtered through Whatman filter paper prior to use. After dispersing (11) and (12) in each sample, the fluorescence spectra of the prepared samples were recorded three times. The samples were then spiked with known amounts of Hg 2+ ion solution and their emission intensities were analyzed. From the respective calibration curves of the [(11)+Hg 2+ ] and [(12)+Hg 2+ ] complexes, the concentrations of Hg 2+ ions in the spiked samples were determined. The results given in Table 3 indicate good agreement between the spiked and measured amounts of ions. The recovery percentages for the known amounts of spiked Hg 2+ ions were found to lie between 98-100%, which makes the present approach authentic and reliable for real-sample assessment.
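Recovery here follows the conventional definition, recovery (%) = (measured concentration/spiked concentration) × 100; for instance, a hypothetical 20.0 nM spike read back from the calibration curve as 19.7 nM would give (19.7/20.0) × 100 = 98.5%, within the 98-100% range reported. The 19.7 nM figure is illustrative only.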
Conclusions
In conclusion, we have successfully synthesized and characterized the (non-fluorogenic) nano-composites (11) and (12) of silica and titania, which were evaluated as optical, fluorescence turn-on sensors for Hg 2+ ions. It was found that the particle size of nano-composite (12) was smaller than that of (11). Pleasingly, only the Hg 2+ ion induced metal-ligand chelation-enhanced fluorescence in both nano-composites, while all other ions showed a negligible response. None of the intruding ions altered the sensitivity and selectivity of the nano-composites towards Hg 2+ ions. The detection limits of (11) and (12) were found to be 41.2 nM and 18.8 nM, respectively. Data obtained from pXRD, BET and EDX studies showed that a greater amount of ligand (3) adhered to nano-composite (12) than to (11), which was found to be the plausible reason behind the lower detection limit of (12). In addition, the present emission-based analytical method provides an economic and simple route for the selective and sensitive quantification of one of the toxic metals, Hg 2+ ions, in environmental samples.

Supplementary Materials: Table S1: EDX% of the elements present in nano-composite (11); Table S2: EDX% of the elements present in nano-composite (12).
Pollination niche availability facilitates colonization of Guettarda speciosa with heteromorphic self-incompatibility on oceanic islands
Obligate out-breeding plants are considered relatively disadvantaged compared with self-breeding plants when colonizing oceanic islets following long-distance dispersal, owing to mate and pollinator limitation. The rarity of heterostyly, a typical out-breeding system, on oceanic islands seems good proof of this. However, a heterostylous plant, Guettarda speciosa, is widely distributed on most tropical oceanic islets. Our research demonstrates that its heteromorphic self-incompatibility, plus herkogamy and a long flower tube, make it rely on pollinators for sexual reproduction, which is generally considered "disadvantageous" for island colonization. We hypothesize that an available pollination niche is a key factor for its colonization of islands. Our studies on remote coral islands show that G. speciosa has built equilibrium populations with a 1:1 morph ratio. It can obtain its pollination niche from the hawkmoth Agrius convolvuli. A pioneer island plant, Ipomoea pes-caprae, sustains the pollination niche by providing a trophic resource for the larvae of the pollinator. The geographic pattern drawn by Ecological Niche Modelling further indicates that the interaction between G. speciosa, A. convolvuli and I. pes-caprae can be maintained on those remote oceanic islands, explaining the colonization of G. speciosa distylous populations. These findings demonstrate that an obligate out-breeding system can be maintained through long-distance dispersal, if the pollination niche is available.
sustain the abundance and occupancy rate during initial colonization 6. After first landing, sexual reproduction is the precondition for building a stable population, apart from species with strong vegetative reproduction ability. Except for a few anemophilous species, most out-breeding plants rely on pollinators for sexual reproduction. In fact, mate limitation is primarily mediated by pollinators 7. Even when a plant has established its initial group, the paucity of effective pollinators strongly limits pollen flow, resulting in pollen limitation, in other words, mate limitation. Usually, plants with strong dispersal ability are considered to be better adapted to generalized pollinators, in order to attain enough pollination service 6.
The pollination resource that a plant can obtain from its effective pollinators reflects its pollination niche in the community, consisting of fundamental niches and other physical environmental factors [11][12][13][14][15]. Like general ecological niches, it represents the fit of species to natural selection, determining where species occur and whether they coexist 16,17. The local pollinator community operates as a habitat filter on plant invasion and colonization, acting together with certain characters of the breeding system, and pollinator-mediated interactions then impact species establishment and character persistence 13,18,19. Obviously, for plants that are self-incompatible and hence out-breeding, pollination niche availability is a necessary condition for establishment in new habitats through sexual reproduction. It is especially important for obligate out-breeding plants establishing on oceanic islands after long-distance dispersal. It is therefore reasonable to hypothesize that their adaptation modes are related to the availability of the pollination niche 20,21.
In order to explore the hypothesis mentioned above, a heterostylous plant, Guettarda speciosa L., is selected as a model in this study. The first reason for choosing this plant species is that heterostyly, a genetically controlled floral polymorphism, is considered a typical mechanism for promoting out-breeding 22,23. Populations of heterostylous species are composed of two (distyly) or three (tristyly) distinct floral morphs that differ reciprocally in the heights of stigmas and anthers, with significant herkogamy in the flowers. Generally, most heterostylous species are obligate out-breeders due to heteromorphic (self and intra-morph) incompatibility [24][25][26], posing an intrinsic barrier to their persistence on small oceanic islands [27][28][29]. Even for self-compatible species, pollinator paucity will generally lead to mate limitation in sexual reproduction. As the flowers of heterostylous plants are often tubular and herkogamous, proper pollinators with matched proboscis length are required for both morphs of a heterostylous plant to be successfully pollinated 30,31. An isoplethic morph ratio is another prerequisite for the maintenance of heterostylous populations 32,33; in other words, few compatible individuals with an unbalanced morph ratio will also lead to population depression. All these characteristics add constraints for heterostylous plants establishing on small oceanic islands through sexual reproduction. Indeed, heterostylous plants are rarely recorded on islands 29,34. Besides, the heterostylous system on islands has broken down, with lost or weakened self-incompatibility and heteromorphism, in some cases, which is regarded as a kind of adaptive change to overcome unfavorable conditions on oceanic islands such as the paucity of proper pollinators 28,35,36. The second reason is that G. speciosa is a widely distributed island plant 29. Although its long tubular flower leads to a dependence on pollinators for sexual reproduction (see Results), it occurs on almost all tropical oceanic islands, and even becomes a dominant species, suggesting successful adaptation modes for long-distance dispersal and island habitats. The heterostylous system of G. speciosa provides an ideal model to test our pollination niche hypothesis.
Similar to other basic ecological niches, the pollination niche sustains the plants and in turn restricts their distribution, which means the geographic distribution of the plants will be influenced by the pollinators as well as by climate 12,13. A similar interspecies relationship often exists between hosts and parasites: the host is considered a key environmental factor for predicting the potential distribution of parasites in many studies 15. Analogously, if pollination niche availability is essential for the acclimatization of heterostylous plants on islands, their relationship and interaction are expected to be reflected in geographic distribution patterns. Here we apply Ecological Niche Modeling (ENM) to explore this expectation. ENM is a multidisciplinary tool mainly applied to predict species' geographic distributions or niche space, offering reliable global-scale information for biogeographical, evolutionary, and ecological analyses [37][38][39]. Patterns of biotic interactions, such as host-parasite, flower-pollinator and co-occurrence, at the geographic scale can be investigated by ENM [40][41][42][43], though precise quantification is considered unlikely with current methodology 44. We expect that, under the effect of the pollination niche, the obligate out-breeding plant will be concentrated in areas where the pollination niche is available.
Meanwhile, pollination niche availability on small oceanic islands is confined by many factors. Besides the pollinators' dispersal ability and the basic climatic conditions on islands, food resources are a key limiting factor, especially for insects with specific hosts, such as most Lepidoptera pollinators. The importance of food plants as range determinants has been illustrated by research on butterflies and the host plants of their larvae 45,46. The trophic dependence of their larvae encourages the insects and plants to develop close evolutionary relationships (e.g. co-evolution, co-existence) [47][48][49]. It is therefore necessary to take the pollinator's host plants into consideration when analyzing the geographic signature of such pollinators.
In this study, we explored why G. speciosa can colonize oceanic islets widely and successfully following long-distance dispersal despite its heterostylous system. Its floral traits, morph ratio, compatibility system and pollinators (hawkmoths) were investigated, and the host plants of the pollinators were surveyed. Ecological Niche Modelling was applied to predict the distributions of G. speciosa, its pollinators and the host plants of the pollinators, testing whether its success relates to pollination niche availability. We expect that, if the pollination niche affects its colonization of oceanic islets: (i) the distribution range of G. speciosa will be contained within the pollinators' distribution (nested models) and its occurrence will concentrate in pollinator-available regions, owing to its dependence on pollinators; (ii) if any host plant is necessary to sustain the pollination niche, a co-existence pattern is expected between G. speciosa and the pollinators' host plants.
Floral traits. Measurements of the floral traits of the two morphs are summarized in [Supplementary Data, Table S1]. The S-morph presents larger flowers and longer anthers and stigmas than the L-morph. However, the exine sculpture of the pollen and the papillae of the stigma show a similar appearance between the two morphs [Supplementary Data, Fig. S1].
In the study sites, the white flowers of G. speciosa opened around 19:00-20:00 at night with a strong aroma, and the anthers dehisced. Sticky pollen grains adhered together to form bars in the anthers. The stigma was also sticky when the flower opened. Secretion was observed on top of the anthers [Supplementary Data, Fig. S2]. Nectar was secreted at the bottom of the corolla tube. The corolla withered around 09:00 the next morning but the styles persisted. Legitimate pollen grains scarcely germinated on the stigmas of flowers that had been open for 12 hours, suggesting that pollen viability or stigma receptivity persisted only during the first night. Flower longevity was thus determined to be less than 12 hours.
Morph ratios.
In the three island populations we investigated, the ratios of L-morph and S-morph individuals did not deviate from the expected 1:1 equilibrium (Table 1). The results indicate that symmetrical disassortative mating occurs in the three populations.
Pollinators and their host plants. In seven days of observations at dawn, various kinds of visitors were witnessed, including bees and flies, but they acted merely as robbers of nectar or pollen, as none of them could touch the sexual organs due to their small body size. The hawkmoth Cephonodes hylas L. was a visitor that touched the sexual organs, but we only witnessed it visiting the flowers at around 08:00-10:00, when the flowers had lost viability. Thus, C. hylas cannot be regarded as an effective pollinator. We surveyed every recorded plant species (about 220) on the Xisha Islands, and found that the larvae of A. convolvuli fed only on the leaves of Ipomoea pes-caprae L. (Convolvulaceae) (Fig. 3D,E).
Distribution pattern of G. speciosa, the pollinator and its host. The modelled potential distributions of the three species are shown in Fig. 4. In total, 988 records for G. speciosa, 15208 records for A. convolvuli and 5339 records for I. pes-caprae were used in the modelling. Model evaluation showed high performance scores, and all the AUC values were above 0.9. In the predicted distribution, G. speciosa occurs on most Pacific islands, some tropical coastal areas of Australia, India and Africa, and some islands in the Caribbean Sea. The hawkmoth pollinator A. convolvuli has a world-wide range and spreads almost everywhere in Western Europe, with distribution in sub-tropical and tropical areas from Southeastern Asia to Eastern Australia, Southern Africa to Eastern Madagascar, and the Southeastern coast of North America. The host of A. convolvuli, I. pes-caprae, occupies areas similar to but larger than those of G. speciosa. ENM predicts that the three species have the ability to spread to many oceanic islands. Figure 5 shows the ENM-predicted distribution regions of the three species at the 10th percentile training presence. Overlapping areas with G. speciosa are highlighted in red and the occurrence records of G. speciosa are marked by black dots. The ratios of overlapping area to the predicted distribution area of G. speciosa were 91.84% (I. pes-caprae vs. G. speciosa), 22.45% (A. convolvuli vs. G. speciosa) and 22.40% (I. pes-caprae and A. convolvuli vs. G. speciosa), respectively.
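These overlap ratios can be reproduced, in principle, by thresholding each suitability raster and intersecting the resulting binary maps. The sketch below assumes co-registered rasters on a shared grid and uses illustrative threshold values; in the paper the cutoffs come from the 10th percentile training presence described in the Methods.

```python
import numpy as np

# Minimal sketch of the overlap-ratio computation: threshold each MaxEnt
# suitability raster, then compute the fraction of the focal species'
# suitable cells that are also suitable for the other species.

rng = np.random.default_rng(0)
suit_guettarda = rng.random((200, 400))  # suitability maps on a shared grid
suit_agrius = rng.random((200, 400))
suit_ipomoea = rng.random((200, 400))

def suitable(raster: np.ndarray, threshold: float) -> np.ndarray:
    """Binary presence map: cells at or above the suitability threshold."""
    return raster >= threshold

g = suitable(suit_guettarda, 0.45)  # thresholds stand in for the
a = suitable(suit_agrius, 0.50)     # 10th-percentile training presence
i = suitable(suit_ipomoea, 0.40)

def overlap_ratio(other: np.ndarray, focal: np.ndarray) -> float:
    """Share of the focal species' suitable area overlapped by `other`."""
    return (other & focal).sum() / focal.sum()

print(f"I. pes-caprae vs. G. speciosa: {overlap_ratio(i, g):.2%}")
print(f"A. convolvuli vs. G. speciosa: {overlap_ratio(a, g):.2%}")
print(f"Both vs. G. speciosa: {overlap_ratio(a & i, g):.2%}")
```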
Discussion
Our results indicate that G. speciosa is distylous with self- and intra-morph incompatibility, and that the heterostylous traits are stable on oceanic islets. Guettarda speciosa is a nocturnal flowering plant with the hawkmoth A. convolvuli as the principal legitimate pollinator in the study area. ENM results show that the recorded occurrences of G. speciosa on oceanic islands are mostly located within the predicted distribution area of A. convolvuli. Moreover, they reveal a geographic signal that the relationship between G. speciosa, A. convolvuli and I. pes-caprae observed on Yongxing Island can be maintained in other insular regions, suggesting that the persistence of the obligate out-breeding system of G. speciosa during the colonization of oceanic islands is closely related to the available pollination niche.
G. speciosa is an obligate out-breeding species with stable distylous populations on islands.
Floral traits indicate that G. speciosa is morphologically distylous, with imprecise reciprocal herkogamy. The stigma-anther reciprocity is much more precise at the higher level (R = 0.845) than at the lower level (R = 0.697). Compared to typical distyly, the distyly in G. speciosa is a kind of "anomalous" heterostyly, a term designated by Barrett and Richards 50 for species displaying imprecise reciprocity. Resembling many typical distylous species, G. speciosa also shows dimorphism in corolla size, anther length, as well as stigma and pollen sizes. Guettarda speciosa is strictly self- and intra-morph incompatible, and the equilibrium morph ratio in the three populations suggests that the species is capable of maintaining stability on the Xisha Islands. It is widely distributed in coastal habitats in tropical areas around the Pacific Ocean. Although reproductive biological data on G. speciosa from other islands are limited, the report of distyly on Lanyu Island 28 suggests that distyly in G. speciosa persists in different populations. Besides, Yongxing Island and the other small and young islets we studied here are all far from the continent. The dominance of self-incompatible G. speciosa on remote islets suggests that these reproductive characters are entrenched after long-distance dispersal. Two congeneric species, G. scabra and G. platypoda, are also coastal and insular woody plants with distyly 51,52. Similar "anomalous" distyly has been reported in these species as well, indicating the possibility that imprecise reciprocity is a general feature of this genus. Guettarda scabra and G. platypoda are self- and intra-morph compatible, and capable of autonomous selfing. Both species, however, are relatively stenochoric compared to the self-incompatible G. speciosa, which is more widely distributed on oceanic islands as a dominant plant 53-55, suggesting its advantage in surviving long-distance dispersal across marine environments. This contrasts with the findings of Grossenbacher et al. 56, who reported that plants reproducing autonomously via self-pollination consistently had larger geographic ranges than their close relatives that generally required two parents for reproduction. However, it should be noted that fluctuation of pollinator service drives out-crossers to increase fitness via dispersal, so out-crossers show a stronger dispersal ability than selfers 20. The floral traits, especially the long flower tube, of G. speciosa reduce its fit to the pool of generalized pollinators, increasing the risk of pollination fluctuation. This may drive G. speciosa to possess a stronger dispersal ability than its relatives and achieve a wider distribution range.
The nocturnal flowering G. speciosa obtains its pollination niche from the hawkmoth A. convolvuli.
The tubular flowers of G. speciosa opened around 19:00-20:00 at night, with white color and a strong aroma, showing a typical hawkmoth-pollination syndrome 57,58. The narrow floral tube and unexposed anthers, as well as the deeply hidden nectar, reinforce its dependence on hawkmoth pollinators. Though both A. convolvuli and C. hylas were observed visiting G. speciosa, our data demonstrated that only the nocturnal A. convolvuli is the legitimate pollinator, while C. hylas is not an effective pollinator, as the stigma has lost receptivity in the daytime when C. hylas visits the flowers. Agrius convolvuli visited G. speciosa at a very low frequency in this study, which may reflect the restriction of islands on large insects 59, or the discrepant pollination syndromes of G. speciosa and A. convolvuli. Agrius convolvuli is a large hawkmoth with a very long tongue (approx. 10 cm) and very strong dispersal ability from temperate to tropical zones. It acts as the principal pollinator for various angiosperm groups with long floral tubes, including Crinum delagoense, Gardenia thunbergii, Ipomoea alba 18, Bonatea steudneri, Datura stramonium 18 and Lilium formosanum 60. The floral tube of G. speciosa, however, is shorter, suggesting it is adjusted to pollinators with relatively shorter tongues (minimum 4 cm), and thus does not match well the tongue length of A. convolvuli.
However, a clear specialization tendency was observed on Yongxing Island, where A. convolvuli was the only effective pollinator of G. speciosa. This may be explained by the rarity of long tubular flowers (10 cm) (only two species 61) on the islands, so that G. speciosa serves as an important trophic resource for the hawkmoth. For G. speciosa, A. convolvuli is an effective pollinator: even though it was an irregular visitor, it visited nearly all the flowers during each visitation bout in the observation area. The sticky pollen and stigma of G. speciosa further increase pollination efficiency. The observed visitation frequency (0.14) matches well the low natural fruit set (0.14 for the two morphs on average), which indicates that G. speciosa successfully and primarily obtains its pollination niche from A. convolvuli. Besides, G. speciosa blooms nearly all year round with abundant flowers opening every day, further ensuring enough offspring for the maintenance of distyly.
The geographic signal of the relationship between G. speciosa, A. convolvuli and I. pes-caprae.
The ENM results show that A. convolvuli has a much broader potential distribution area than G. speciosa. From an overall perspective, the potential distribution of G. speciosa is not completely contained within the predicted range of the pollinator, A. convolvuli, showing a mosaic model. Species with simple and tight interactions showing similar distributive preferences usually exhibit nested patterns 41,62, while the mosaic model is the contrary. Our results suggest that the relationship between G. speciosa and A. convolvuli is not strictly specialized at the global scale, which is coherent with the possibility that in different areas G. speciosa may be pollinated by other insects and, vice versa, A. convolvuli may visit other plants in other habitats.
However, the predicted distribution of G. speciosa overlaps better with that of its pollinator on small and remote oceanic archipelagos than on the mainland and large islands (i.e. New Zealand, Papua New Guinea, etc.) (Fig. 5). This suggests that a more specialized interaction between G. speciosa and A. convolvuli can build up on islands, as observed in our study sites. The degree of specialization in pollinator-plant relationships varies across biogeographic ranges 9,63. Because of the limited species number and the lower animal/plant ratio on oceanic islands, pollinator-plant interactions are simpler, with lower diversity, though there are more generalized pollinator species than on the mainland and continental islands 63,64. Therefore, a specialization-like relationship between pollinator and plant will be observed, as the plant has no choice but to depend on fewer pollinators.
Interestingly, the overlapping pattern does not change after adding the distribution data of Ipomoea pes-caprae, the host of A. convolvuli, compared to the pattern between A. convolvuli and G. speciosa (Fig. 6). Moreover, the occurrence records of G. speciosa are concentrated in the overlapping areas of the three species. This indicates that G. speciosa is sympatric in areas where A. convolvuli overlaps with I. pes-caprae. Ipomoea pes-caprae has a larger potential area, which covers about 90% of the area of G. speciosa, fitting well with the typical nested model. In many field investigation records and floras of oceanic islands, I. pes-caprae and G. speciosa have been reported to co-exist 54,55,[65][66][67]. Our prediction of their distribution is in good accordance with the empirical data.
The interactions of the focal plant, its pollinator and the host plant of the pollinator on oceanic islands. On Yongxing Island, G. speciosa obtains pollination service from A. convolvuli. At the larger scale, the geographic pattern indicates that the ternary relationship between G. speciosa, A. convolvuli and I. pes-caprae observed on Yongxing Island can be maintained in other insular regions, which is a reasonable explanation for the maintenance of distyly in G. speciosa, as the species is able to obtain its pollination niche from A. convolvuli on those remote oceanic islands. For the pollinator, A. convolvuli, trophic resources are provided by the widespread I. pes-caprae on the islands.
A question to consider is how such a relationship between G. speciosa, A. convolvuli and I. pes-caprae developed. Did it develop as a result of interaction, or is it a coincidence? A result of interaction refers to a consequence of evolutionary history (e.g. co-evolution), in which the dependence between species has acted as a limitation on geographic distributions. A coincidence means that the plant, pollinator and host did not affect each other's distributions and spontaneously developed an association in their co-existence regions. In other words, the distribution of each species is mediated by its own autecology, so that pollination and parasitism occur in overlapping regions where climatic conditions are suitable for all of them 42, indicating that the association builds up from an ecological fitting process 68. Our present evidence may not be enough to draw a solid conclusion, as it is still difficult to quantify the interaction among the three partners 44, though species interaction is considered a factor influencing geographic ranges 45. However, it is reasonable to postulate that the pollinator A. convolvuli would "mediate" the co-existence of G. speciosa and I. pes-caprae, beyond their similar climatic preferences. It has been reported that I. pes-caprae is a pioneer species in community succession on oceanic islands 55. After the establishment of I. pes-caprae populations, they facilitate the colonization of A. convolvuli. Then, the pollination niche for G. speciosa becomes available so that it can colonize the islets while maintaining its heterostylous self-incompatibility system. This scenario provides a reasonable evolutionary explanation for the co-existence of G. speciosa and I. pes-caprae, with A. convolvuli as the key mediator. As plant-pollinator interactions affect plant species establishment and persistence 19, cross-regional population studies will shed more light on the inter-relationships among G. speciosa, A. convolvuli and I. pes-caprae.
Conclusion
Baker's law suggests that the capacity for self-fertilization would be favored, but self-incompatibility would be filtered out, in island floras where mates are scarce 6. A heterostylous system is accordingly considered disadvantageous, and its self-incompatibility is easily lost as an adaptation. Even dioecious species often display 'leaky' gender to keep the ability to self-fertilize 69,70. However, there is no break-down of heterostyly or "leakiness" in the SI system of G. speciosa, demonstrating an "obligate out-breeding system".
According to the co-occurrence pattern between G. speciosa, its pollinator and the host plant of the pollinator at the global scale, our present study shows that the obligate out-breeding distylous system of G. speciosa did not become a disadvantage and could persist through long-distance dispersal, as A. convolvuli can provide a pollination niche for G. speciosa over a wide range. Compared to its self-compatible and autogamous sister species, G. speciosa shows a much stronger dispersal ability and a broader distribution range, which is a striking contrast to previous knowledge 1,8,56,60. In this case, pollination niche availability seems to be a more important factor affecting the plant's distribution range than the mating system. It provides an alternative understanding of natural selection on plant mating systems during dispersal and range expansion. Our study does not deny that an out-breeding system will add disadvantages for new colonizers, but whether it will hinder plants' dispersal and spread remains open to debate.
Materials and Methods
Study sites and species. Field studies were carried out from November 2014 through December 2015, and again in January and August 2017, at Yongxing Island (Woody Island) in the Xisha Islands (Paracel Islands). The Xisha Islands are a series of coral islets located in the South China Sea (15°40′-17°10′N, 110°-113°E). They were formed about 7000 years ago through coral growth and crustal uplift 71. The plant species richness of the Xisha Islands is very limited (about 220 species) 61. The morph ratio investigation was conducted on Yongxing Island plus two nearby islets (Ganquan Island and Jinqing Island). Yongxing Island (16°50′N, 112°20′E), 320 km from Hainan Island, the nearest mainland, with a total area of 1.9 km 2, is the largest islet of this archipelago.
Guettarda speciosa is a rubiaceous tree 2-6 m in height, with axillary cyme inflorescences. The fragrant white flower has a 3-4 cm long corolla tube and 8-10 corolla lobes. Flowers open in the evening and last until the next morning, with a typical hawkmoth pollination syndrome. Its sweet-smelling globular fruit is dispersed by animals and can stay afloat 54. G. speciosa is widely distributed on tropical islands and in coastal zones around the Pacific Ocean, from the coastline of central and northern Queensland and the Northern Territory in Australia, to the Pacific Islands, including French Polynesia, Micronesia and Fiji, Malesia, and the east coast of Africa. In the Xisha Islands, it is one of the dominant species of the arborous layer 72. Its style dimorphism has been reported previously by Watanabe and Sugawara 29.
Floral traits and flower longevity. To examine the floral variation in the population we randomly selected 10 trees of each morph, and measured 3-5 flowers from different inflorescences for each plant. In total, 74 flowers of the long-styled morph (L-morph) and 42 flowers of the short-styled morph (S-morph) in anthesis were collected, and 10 morphological traits (Fig. 1) were measured. We used the Reciprocity Index, calculated by Recipro-V2 73, to represent the stigma-stamen reciprocal degree between the two morphs. Furthermore, some mature flower buds fixed in formalin/acetic acid/alcohol (FAA) solution were used to characterize additional differences between morphs by scanning electron microscopy (JSM-6360LV, Japan). For each morph, we mixed pollen grains of mature, intact anthers from five plants, and 30 pollen grains were measured. The stigma surface and pollen were observed and digital images were taken. The pollen equatorial axis and polar axis were measured in Image-Pro Plus (v. 6.0).
As the corolla tube persisted until the next morning, and insect visits were witnessed in the morning, flower longevity was checked by pollen-tube growth after inter-morph hand pollination, in order to determine the effectiveness of diurnal pollinators. Freshly opened flowers (at 8 pm) and caged flowers 12 h after opening (at 8 am the next morning) of the L- and S-morphs were hand-pollinated with legitimate pollen grains. Five flowers for each treatment were picked from five individuals. Twelve hours after hand pollination, styles were harvested and then preserved in FAA. Pollen-tube growth was observed following the methods specified in the next section 74.
All statistical analyses were performed using SPSS (version 13.0). Means (±SE) were calculated for all measurements. We compared the morphological differences between the two floral morphs using the Mann-Whitney U test, as most data were not normally distributed.
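A minimal sketch of this morph comparison, applied to one trait with invented measurements, is given below; it uses scipy's Mann-Whitney U test rather than SPSS.

```python
from scipy.stats import mannwhitneyu

# Minimal sketch of the between-morph comparison: a two-sided
# Mann-Whitney U test on one floral trait, chosen because most trait
# data were not normally distributed. Values are illustrative (mm).
corolla_tube_L = [31.2, 33.5, 30.8, 32.1, 34.0, 31.9, 33.1]  # L-morph
corolla_tube_S = [35.4, 36.1, 34.8, 37.0, 35.9, 36.5, 34.6]  # S-morph

u_stat, p_value = mannwhitneyu(corolla_tube_L, corolla_tube_S,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```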
Morph ratios. Guettarda speciosa populations on three islets of the Xisha Islands were investigated to determine the relative abundance of the two morphs. We randomly sampled the individuals in each population by walking through the whole habitat from east to west and from south to north. Yongxing Island (1.9 km 2) population: n = 104 individuals; Ganquan Island (0.29 km 2) population: n = 83 individuals; Jinqing Island (0.20 km 2) population: n = 55 individuals. Morph ratio data were analyzed using the G-test for inequality of frequencies.
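The G-test of a 1:1 morph ratio can be run as a log-likelihood-ratio goodness-of-fit test; the sketch below uses scipy's power_divergence with lambda_="log-likelihood". The per-morph counts are invented, though they are consistent with the population sizes reported above.

```python
from scipy.stats import power_divergence

# Minimal sketch of the G-test for a 1:1 morph ratio in each island
# population. (L-morph, S-morph) counts below are illustrative.
populations = {
    "Yongxing": (55, 49),
    "Ganquan": (40, 43),
    "Jinqing": (29, 26),
}

for name, (n_long, n_short) in populations.items():
    total = n_long + n_short
    # The G-test is the power-divergence test with lambda_="log-likelihood";
    # expected counts under the 1:1 isoplethy hypothesis are total/2 each.
    g_stat, p_value = power_divergence(
        f_obs=[n_long, n_short],
        f_exp=[total / 2, total / 2],
        lambda_="log-likelihood",
    )
    print(f"{name}: G = {g_stat:.3f}, p = {p_value:.3f}")
```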
Heteromorphic self-incompatibility system. From November 2014 to February 2015, hand pollination was performed on five labeled individuals of each morph in a natural population on Yongxing Island. Forty inflorescences with unopened flowers were enclosed separately in 40-mesh bags. We performed four treatments: (1) self-pollination to test self-compatibility; (2) intra-morph pollination (illegitimate cross); (3) inter-morph pollination (legitimate cross); (4) netting without hand-pollination; and also (5) marked some flowers without treatment as a natural control. At the end of the flowering period, the mesh bags were removed to allow fruits to mature naturally. Three months after pollination, fruit set was recorded and statistical analyses were performed by G-test.
Pollen-tube growth was examined in vivo. Newly opened virgin flowers in the evening were hand-pollinated with fresh pollen under treatments (1), (2) and (3) above. After pollination, the styles with ovary were collected at time intervals (1, 3, 6, 12 and 24 h) and fixed in FAA. In the lab, after softening in 10% Na 2 SO 3 (100 °C) for 6 h, pistils stained with aniline blue were observed under a fluorescence microscope 75.
Pollinators and their host plants. We observed pollinator activities in the Yongxing population during the peak flowering (September to December) of G. speciosa. Fourteen days of observations were carried out at 20:00 to 01:00 and 06:30 to 08:30 at three sites on the island. The presence of floral visitors was recorded, and special attention was paid to their visitation behaviors. Visitors that touched the pistil and stamens were recorded as pollinators.
As the host plant is necessary for the moth's life cycle, we surveyed the pollinators' host plants on Yongxing Island during the same season, after confirming the pollinators. We collected the larvae and reared them in the lab until eclosion to confirm the imago.
Distribution of G. speciosa, the pollinator and the host plant. Ecological Niche Modeling (ENM) was applied to determine the potential geographic distributions of the studied species. ENMs establish relations between the occurrences of species and environmental conditions 76. Occurrence data for each species were obtained from the Global Biodiversity Information Facility (GBIF). We used ENMTools to remove duplicate occurrences based on the resolution of the climatic variables, to ensure that only one point was kept per grid cell 77. Nineteen bioclimatic variables for the MaxEnt analysis were obtained from the WorldClim website at 2.5 arcmin spatial resolution 78.
MaxEnt software (v. 3.3.3 K) uses a modeling method called maximum entropy distribution, which estimates the probability distribution of a species' occurrence based on environmental constraints 79. Runs were conducted with the default variable response settings, and the logistic output format results in a map of habitat suitability for the species ranging from 0 to 1, with 0 being the lowest and 1 the highest probability. We selected 75% of the data for training and the remaining 25% for testing. In order to observe and compare the potential distribution of each species, we used the 10th percentile training presence as a suitability threshold 80, and we assumed that a cell is suitable if its suitability score is greater than the 10th percentile of the training presence points. Other values were kept as defaults. The percentage of overlapping area was calculated with ArcGIS 9.3. The models were evaluated with the area under the curve (AUC) of a receiver-operating characteristic plot 81. The current occurrences from GBIF are labeled on the maps to compare the real distribution with the predicted distribution.
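A minimal sketch of this thresholding step, with illustrative logistic suitability scores standing in for real MaxEnt output, is given below.

```python
import numpy as np

# Minimal sketch of the 10th-percentile training presence threshold:
# take the suitability scores at the training presence points, use their
# 10th percentile as the cutoff, and classify grid cells with suitability
# at or above the cutoff as suitable. All scores are illustrative.

rng = np.random.default_rng(1)
train_presence_scores = rng.uniform(0.2, 0.95, size=300)  # logistic output
threshold = np.percentile(train_presence_scores, 10)

suitability_map = rng.random((100, 200))   # logistic suitability grid
suitable_cells = suitability_map >= threshold
fraction = suitable_cells.mean()
print(f"threshold = {threshold:.3f}, suitable fraction = {fraction:.2%}")
```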
A Hybrid Control Algorithm for Gradient-Free Optimization using Conjugate Directions
The problem of steering a particular class of $n$-dimensional continuous-time dynamical systems towards the minima of a function without gradient information is considered. We propose a hybrid controller, implementing a discrete-time Direct Search algorithm based on conjugate directions, able to solve the optimization problem for the resulting closed-loop system in an almost global sense. Furthermore, we show that Direct Search algorithms based on asymptotic step size reduction are not robust to measurement noise and, to achieve robustness, we propose a modified version that imposes a lower bound on the step size and achieves robust practical convergence to the optimum. In this context we derive a bound relating the supremum norm of the noise signal to the step size, highlighting a trade-off between asymptotic convergence and robustness.
Introduction
In this paper we study the problem of steering a particular class of dynamical systems towards the minimum of an objective function, assumed to not be known but whose measurements are available at fixed intervals of time. We consider continuous-time dynamical systems that can be steered, by a known input, between any two points of the state space. Examples of such systems are completely controllable linear time-invariant systems, as well as nonlinear systems whose reachable set after time T > 0, for all T > 0, is the whole state space, e.g. the Dubin's vehicle (Shkel and Lumelsky (2001)).
The problem at hand has been tackled in the literature with a variety of approaches, mostly related to source-seeking applications. In Burian et al. (1996) a gradient descent method is implemented from a least-squares approximation of the gradient, and combined with an exploration phase based on a simplex algorithm, in order to steer an autonomous underwater vehicle to the deepest part of a pond, or locate hydrothermal vents. A similar problem is solved in a multi-agent framework in Bachmayer and Leonard (2002), where, instead, local gradient measurements are assumed. In Azuma et al. (2012) a modified version of the simultaneous-perturbation stochastic approximation is proposed in order to recursively compute directions of exploration, and in Cochran and Krstic (2009) an extremum seeking controller is adopted assuming continuous availability of the measurements of the objective function.
In Mayhew et al. (2007) (see also Mayhew et al. (2008b) and Mayhew et al. (2008a)) the source-seeking problem is solved by a hybrid controller based on the Recursive Smith-Powell (RSP) algorithm. The latter is an optimization algorithm that, through a series of line minimizations, sequentially computes a set of conjugate directions. For convex quadratic functions, it is guaranteed to reach a neighborhood of the minimizer in a finite number of line minimizations.
The classic RSP implementation, as in Mayhew et al. (2007), uses discrete line minimizations with fixed step size, able to achieve practical stability of a set of minimizers in the 2-dimensional convex quadratic case. In Coope and Price (1999) an extension of the RSP was proposed in the general context of continuously differentiable functions. By using a decreasing step size asymptotically converging to zero, their algorithm ensures asymptotic convergence to a stationary point. While some robustness results for the RSP algorithm were shown in Mayhew et al. (2007), no such results are available for the algorithm in Coope and Price (1999), and in particular for the more general class of Direct Search methods.
In this paper we study the class of Direct Search methods, to which the RSP algorithm belongs: optimization algorithms that minimize (or maximize) an objective function without using (or estimating) derivative information of any order of the objective function (see Lewis et al. (2000) for an overview). In particular, we propose a direct search algorithm combining the results of Coope and Price (1999) with those of Kolda et al. (2003) and Lucidi and Sciandrone (2002) in order to achieve, contrary to the RSP algorithm, asymptotic convergence to the set of minima. Due to the inherent discrete dynamics of the algorithm and the continuous dynamics of the underlying dynamical system, in the wake of Mayhew et al. (2007), the controller is implemented by relying on the hybrid systems framework of Goebel et al. (2012). The proposed hybrid controller addresses the optimization problem of an n-dimensional continuously differentiable function with a set of global minima, and possibly isolated local maxima, and guarantees almost global asymptotic stability of the set of minima.
As our main focus is developing robust controllers, we also show that Direct Search methods based on asymptotic step size reduction are in general not robust to measurement noise. We thus propose a robust algorithm addressing n-dimensional objective functions (including the results of Mayhew et al. (2007) as a special case), highlighting that a trade-off between asymptotic convergence and robustness is unavoidable.
Notation: $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}_{\ge 0} := [0, \infty)$ and $\mathbb{R}_{\ge 1} := [1, \infty)$. We let e denote Euler's number. We denote by $|\cdot|$ the absolute value of a scalar quantity and by $\|\cdot\|$ the vector 2-norm. For a scalar function $f : \mathbb{R}^n \to \mathbb{R}$, we denote by $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$ the gradient of f. Given a nonempty set $\mathcal{A} \subset \mathbb{R}^n$ and $\varepsilon > 0$, we denote by $\varepsilon\mathbb{B}(\mathcal{A})$ the set $\{x \in \mathbb{R}^n : \|x\|_{\mathcal{A}} < \varepsilon\}$, where $\|x\|_{\mathcal{A}} := \inf_{y \in \mathcal{A}} \|x - y\|$. A set-valued mapping f from $\mathbb{R}^n$ to $\mathbb{R}^m$ is denoted as $f : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$. Define a hybrid system in $\mathbb{R}^n$ as the 4-tuple $H = (C, F, D, G)$, with $C \subset \mathbb{R}^n$ the flow set, $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ the flow map, $D \subset \mathbb{R}^n$ the jump set, and $G : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ the jump map. Solutions to hybrid systems are defined on hybrid time domains (see Goebel et al. (2012) for more details) parameterized by a continuous time variable $t \in \mathbb{R}_{\ge 0}$ and a discrete time variable j, keeping track, respectively, of the continuous and the discrete evolution. We denote by $\operatorname{dom} \phi \subset \mathbb{R}_{\ge 0} \times \mathbb{N}$ the hybrid time domain corresponding to the solution φ. We say that, for a hybrid system H with state $x \in \mathbb{R}^n$, the set $\mathcal{A} \subset \mathbb{R}^n$ is: stable if for all $\varepsilon > 0$ there exists $\delta > 0$ such that $x(0, 0) \in \delta\mathbb{B}(\mathcal{A})$ implies $x(t, j) \in \varepsilon\mathbb{B}(\mathcal{A})$ for all $(t, j) \in \operatorname{dom} x$; globally attractive if $\|x(t, j)\|_{\mathcal{A}}$ is bounded and $\lim_{t+j \to \infty} \|x(t, j)\|_{\mathcal{A}} = 0$, with $(t, j) \in \operatorname{dom} x$; globally asymptotically stable if it is both stable and globally attractive; almost globally attractive when it is globally attractive from all initial conditions apart from a set of measure zero; almost globally asymptotically stable if it is both stable and almost globally attractive; semiglobally practically asymptotically stable on the parameter $\theta \in \Theta \subset \mathbb{R}^m$, with m > 0, if, assuming H complete and dependent on θ, for any $\varepsilon > 0$ and any compact set $K \subset \mathbb{R}^n$ there exists $\theta^\star \in \Theta$ such that all solutions starting in K converge to $\varepsilon\mathbb{B}(\mathcal{A})$.
Problem Formulation
In this paper we tackle the following optimization problem: $\min_{x \in \mathbb{R}^n} f(x)$ (1), subject to the dynamics $\dot{\xi} = \varphi(\xi, u)$, $\xi := \operatorname{col}(x, \zeta) \in \mathbb{R}^{n+l}$ (2). The state variables x represent the variables involved in the optimization problem, while ζ represents other possible states.
For simplicity we consider $\varphi : \mathbb{R}^{n+l} \times \mathbb{R}^m \to \mathbb{R}^{n+l}$ to be continuously differentiable in ξ and u. Moreover, given τ > 0, we assume that for each $x_0$ and $x_f$ in $\mathbb{R}^n$, with $x_0 \ne x_f$, there exists $t \mapsto u(t)$ such that the solution to $\dot{\xi} = \varphi(\xi, u(t))$ from $\xi_0 = (x_0, \cdot)$ reaches $\xi_f = (x_f, \cdot)$ after τ seconds. We assume that for each bounded input, with $\|u(t)\| \le \bar{u} > 0$ for all t ≥ 0, ζ(t) is bounded for all t ≥ 0. The class of systems represented by (2) includes, for example, point-mass vehicles (ξ = x, with x representing the position) and Dubin's vehicles ($\xi = \operatorname{col}(x^T, \zeta)$, with x and ζ representing position and orientation).
Moreover we make the following assumptions on f: (A0) f is continuously differentiable, lower bounded, and not assumed to be known, but sampled measurements of it are available every τ > 0 seconds, with τ a tunable parameter; (A1) the set $\{x \in \mathbb{R}^n : \nabla f(x) = 0\}$ of critical points of f is such that every local minimum is also a global minimum (i.e. all local minima share the same objective function value), every local maximum is an isolated point, f is analytic at every local maximum, and there are no saddle points; (A2) the sublevel sets of f, namely the sets $L_f(c) := \{x \in \mathbb{R}^n : f(x) \le c\}$, are compact for all $c \in \mathbb{R}$.
Assumptions (A0) and (A2) are standard for Direct Search methods, see Coope and Price (1999), Kolda et al. (2003) and Lucidi and Sciandrone (2002). Assumption (A0) can be relaxed by considering f to be locally Lipschitz, as shown in Kolda et al. (2003) and Popovic and Teel (2004), which requires the use of generalized gradients for the analysis.
The reason for the particular structure of the set of critical points assumed in (A1) stems from the fact that our goal is to prove and guarantee convergence to the set of minima. While the assumption on the values of the local minima is made to simplify the structure of the problem, without the other assumptions on local maxima and saddle points, Direct Search algorithms, and our proposed controller derived from them, only guarantee convergence to the set of critical points.
Notice that, contrary to Mayhew et al. (2007), no convexity assumptions have been made on the cost function.
The RSP and the Proposed Algorithm
In this section we will briefly introduce the classic RSP algorithm as proposed by Smith (1962) and Powell (1964), its hybrid implementation in Mayhew et al. (2007) and the algorithm that we propose as an extension of the RSP.
Background
Throughout the paper we call line minimization any procedure that, given a function, a direction and a point, explores the line defined by the direction applied to the point, and returns the position of the minimum of the function along that line, or a point in a neighborhood of it.
We qualify a line minimization as exact when the minimum along the explored direction is exactly reached, and as discrete when the line minimization is an iterative procedure that explores at each iteration a new point at distance ∆ > 0 (fixed or changing at each iteration), called the step size, from the previously explored one. A discrete line minimization terminates when the function value at the newly explored point does not decrease enough with respect to the function value at the last explored one.
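A minimal sketch of such a discrete line minimization is given below. It uses the classical sufficient-decrease function ρ(∆) = c∆², whereas Alg. 1 later adopts its own ρ from Eq. (3); the test function and all parameter values are invented for the example.

```python
import numpy as np

# Minimal sketch of a discrete line minimization with sufficient decrease:
# starting from x along direction d, keep stepping by delta while the
# function value drops by at least rho(delta) = c * delta**2.

def discrete_line_min(f, x, d, delta, c=1e-4, max_steps=1000):
    """Return the total distance alpha traveled from x along d."""
    alpha, fx = 0.0, f(x)
    for _ in range(max_steps):
        trial = x + (alpha + delta) * d
        ft = f(trial)
        if ft <= fx - c * delta**2:   # sufficient decrease satisfied
            alpha, fx = alpha + delta, ft
        else:
            break                      # not enough improvement: terminate
    return alpha

f = lambda x: (x[0] - 2.0)**2 + x[1]**2
alpha = discrete_line_min(f, np.array([0.0, 0.0]), np.array([1.0, 0.0]), 0.25)
print(alpha)   # travels toward the 1-D minimum at x1 = 2 in steps of 0.25
```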
Given a set $G \subset \mathbb{R}^n$ of linearly independent directions spanning $\mathbb{R}^n$, the classic RSP sequentially computes exact line minimizations along the directions in G in order to minimize the cost function. Moreover, every n line minimizations, a new search direction $d_{new} \in \mathbb{R}^n$ is computed by exploiting the Parallel Subspace Property (see Theorem 4.2.1 in Fletcher (2000)) and the set G is updated accordingly.
For a convex quadratic function with Hessian matrix H, the newly computed direction $d_{new}$ is conjugate, by the Parallel Subspace Property, to the last n − 1 directions in G, i.e. such that $d_{new}^T H d_i = 0$, with $d_i \in G$ and i = 1, ..., n − 1.
The property of conjugacy of directions for a convex quadratic function implies that the line minimization along one direction is independent of the line minimizations along the other directions. Thus, given a set of n conjugate directions for a convex quadratic function from $\mathbb{R}^n$ to $\mathbb{R}$, the minimum will be reached after n line minimizations, each along a different conjugate direction. By recursively computing a set of conjugate directions, the RSP algorithm reaches the minimum of a convex quadratic function, starting from a set of linearly independent directions, in at most $n^2$ line minimizations. This property is usually denoted as the quadratic termination property.
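The quadratic termination property can be checked numerically. The sketch below builds a pair of H-conjugate directions for an illustrative 2-dimensional convex quadratic (the matrix H, vector b and starting point are invented) and verifies that two exact line minimizations reach the minimizer.

```python
import numpy as np

# For f(x) = 0.5 x^T H x + b^T x, exact line minimizations along n
# mutually H-conjugate directions reach the minimizer in n steps.

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([-1.0, -2.0])

def grad(x):
    return H @ x + b

def exact_line_min(x, d):
    """Exact step along d for the quadratic: alpha = -g.d / (d^T H d)."""
    alpha = -(grad(x) @ d) / (d @ H @ d)
    return x + alpha * d

# Build H-conjugate directions from the coordinate basis by Gram-Schmidt
# with respect to the H-inner product <u, v> := u^T H v.
d0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
d1 = e1 - (e1 @ H @ d0) / (d0 @ H @ d0) * d0
assert abs(d0 @ H @ d1) < 1e-12          # conjugacy: d0^T H d1 = 0

x = np.array([5.0, -3.0])
for d in (d0, d1):
    x = exact_line_min(x, d)

x_star = np.linalg.solve(H, -b)          # analytic minimizer
print(x, x_star)                          # x equals x_star after n = 2 steps
```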
The version of the RSP considered in Mayhew et al. (2007) is constrained to a 2-dimensional search space and adopts discrete line minimizations with fixed step size and an additional exploration step based on a rational rotation, whose aim is to prevent the algorithm from remaining stuck for "bad" initializations. Asymptotic convergence of the algorithm to a neighborhood of the minimum, whose size is a function of the step size, is proved.
Proposed algorithm
The algorithm proposed in this paper, shown in Alg. 1, is inspired by Garcia-Palomares and Rodriguez (2002) and improves the results in Mayhew et al. (2007) by guaranteeing, under the less restrictive assumptions (A0) and (A1), asymptotic convergence to the set of minima. The main differences with respect to the RSP considered in Mayhew et al. (2007) are the following: 1) a different step size $\Delta_i$ is associated to each direction $d_i$ in order to guarantee more freedom of exploration; as such, when a new direction is computed (lines 28-32), a new step size is also associated to the new direction (line 27); 2) if no improvement is found along any direction, the global step size Φ is reduced to µΦ, with $\mu \in (0, 1/\lambda_t)$ (lines 14-21); 3) in case no improvement is made along a direction (lines 8-12), the corresponding step size is reduced; this is the key step guaranteeing asymptotic convergence to the minima of the cost function; 4) the newly computed direction is "accepted" only if it keeps the directions in G linearly independent (line 28), otherwise the previous set of directions is retained.
Remark 1. The step size associated to the newly computed direction is chosen as the maximum step size associated to the other directions, but any function bounded by the minimum and maximum of the step sizes would do. This is needed in order to guarantee that the step sizes are asymptotically reduced to zero. • Remark 2. The reason to reduce the step size when no improvement is found stems from Theorem 3.3 in Kolda et al. (2003), where it is shown that the norm of the gradient of the cost function, at points where no improvement was found along any direction, is bounded by a class K function of the step size. Thus, reducing the step size at those iterations implies reducing the norm of the gradient, hence approaching a stationary point (or a minimum in our case). • Remark 3. As pointed out in Byatt et al. (2004), 4) is not necessary for convex quadratic functions, since every pair of conjugate directions is separated by a minimum angle different from zero, but this is in general not true for functions satisfying assumptions (A0)-(A2). •
Alg. 1: New RSP algorithm with line minimization procedure.
The line minimization procedure explores a direction $d_j$ from a starting point $x_{k_j}$ and returns the distance $\alpha_j$ traveled from $x_{k_j}$ to the found minimum of f along $d_j$. The main differences in the line minimization procedure with respect to the RSP in Mayhew et al. (2007) are the following: 1) newly explored points are accepted only if a sufficient decrease condition is satisfied (lines 2 and 12), namely the function has decreased by at least ρ(∆) along direction d; 2) when a new iteration is accepted, the step size is possibly increased (lines 5 and 15), provided it does not violate the upper bound imposed by the global step size.
Remark 4. The sufficient decrease condition (lines 2 and 12) guarantees that the Armijo condition, needed for the convergence of the algorithm, is satisfied (see Section 3.7.1 in Kolda et al. (2003) for more details), and also guarantees a margin of robustness to measurement noise, as we will see in Section VI. In the sufficient decrease condition we adopt the function ρ defined in (3), and not, as classically, an o(∆) function, in order to be able to escape local maxima. The function (3) is strictly increasing in ∆ and, at ∆ = 0, is smooth (from the right) but nonanalytic. Notice that any other function with the same properties as (3) would also be an appropriate choice for ρ.
• Remark 5. The step size increase during the line minimization procedure helps in better exploiting the directions along which the cost function decreases. This step does not hinder convergence of the algorithm, thanks to assumption (A2). • Define the set of global minima of f as $\mathcal{A}^\star := \{x^\star \in \mathbb{R}^n : f(x^\star) \le f(x)\ \forall x \in \mathbb{R}^n\}$, and denote by $i_{k_j}$ the number of steps computed in the line minimization procedure in Alg. 1 at iteration k along direction $d_j$. We can conclude the following convergence result for the algorithm in Alg. 1.
Theorem 6. Consider the class of cost functions fulfilling (A0)-(A2). Then, for any initial condition $x_\circ \in \mathbb{R}^n$, the sequence of iterates $x_{k_{j_i}}$ generated by the RSP algorithm and the line minimization procedure in Alg. 1 is such that $\lim_{k \to \infty} \|x_{k_{j_i}}\|_{\mathcal{A}^\star} = 0$. The proof of Theorem 6 is based on standard arguments for the proof of convergence to stationary points of f in Direct Search algorithms. In particular, under assumptions (A0) and (A2), convergence of the sequence of global step sizes $\Phi_k \to 0$ is shown first, which, together with the sufficient decrease condition, guarantees convergence of $x_{k_{j_i}}$ to a stationary point. Under assumption (A2), and due to the particular choice of (3), convergence to the set of minima is shown. The detailed proof of Theorem 6 is reported in the Appendix.
Hybrid Controller Design
In this section we design a hybrid controller $H_c$ implementing the new RSP to solve a minimization problem in $\mathbb{R}^n$ under assumptions (A0)-(A2), and to steer (2) towards the set of minima of f.
The reason for resorting to the hybrid systems framework is to provide results regarding the stability and robustness of the proposed algorithm when applied to continuous-time dynamical systems, also in the presence of measurement noise. In particular, the resulting hybrid regulator is based on the framework for hybrid systems in Goebel et al. (2012), and its dynamics are given by a flow map F c when the state ranges in the flow set C, and a jump map G c when the state ranges in the jump set D.
The algorithm Alg. 1 is implemented as a discrete-time system, whose dynamics are set-valued in order to satisfy the hybrid basic conditions (Assumption 6.5 in Goebel et al. (2012)) and to render the closed-loop system $H_{cl}$, given by the interconnection of $H_c$ and (2), nominally well-posed (see Definition 6.2 in Goebel et al. (2012)), a property needed for the application of invariance principles in the proofs of the results in the next section.
State of $H_c$
The state of the controller is defined as $x_c := \operatorname{col}(d_0, \ldots, d_n, \Delta_0, \ldots, \Delta_n, \lambda, \alpha, \Phi, \bar{\alpha}, p, m, k, q, z, \Delta, v, \tau) \in X_c$, whose components are described below. The state variable τ is a timer that resets every τ > 0 seconds and regulates when new cost function evaluations are available.
Its hybrid dynamics are given by $\dot{\tau} = 1$ during flow, and $\tau^+ = 0$ at jumps. The states $d_j \in \mathbb{R}^n$ and $\Delta_j \in \mathbb{R}_{\ge 0}$, j = 0, 1, ..., n, represent, in Alg. 1, the search directions and the step sizes corresponding to each direction. The state variable $\lambda \in \mathbb{R}$, which keeps track of the distance traveled along the currently explored direction, and the state variable $\alpha \in \mathbb{R}^n$, which stores the total traveled vector from direction $d_0$, are related to the distance traveled along each direction, namely the variable $\alpha_j$ introduced in Alg. 1.
The state Φ ∈ R ≥0 represents the global step size andᾱ ∈ R ≥0 the total distance traveled during each cycle of directions exploration.
The positive or negative exploration along the current direction is determined by the state p ∈ {−1, 1}, and the variable m ∈ {0, 1} indicates whether a turn has already been computed along the current direction.
To specify in which operating point of the modified RSP algorithm the controller is, the state variables k ∈ {0, 1, ..., n} and q ∈ {0, 1, 2} have been introduced. The variable k represents the state of the RSP, namely which direction is currently being explored. Notice that it has n + 1 values since the direction $d_{n-1}$ is explored twice, to be able to exploit the Parallel Subspace Property. The variable q, defining the state of the line minimization, assumes the following values: q = 0, positive line minimization; q = 1, negative line minimization; q = 2, line minimization completed.
The state variable z ∈ R is a memory state that keeps track of the best minimum value of f found satisfying the sufficient decrease condition.
Two more states have been added for ease of notation, namely $v \in \mathbb{R}^n$ and $\Delta \in \mathbb{R}$, which store the currently explored search direction and its corresponding step size, respectively.
Hybrid Controller Structure
The structure of $H_c$ is given by $H_c := (C, F_c, D, G_c)$, with the sets C, D defined before. The flow map $F_c$ is a single-valued constant function with all components equal to zero except for the timer. The jump map $G_c : X_c \times \mathbb{R} \rightrightarrows X_c$ is a set-valued map, composed of the timer discrete dynamics and the map $G_{c/\tau}$; the set-valued map $G_{c/\tau}$ is reported in the Appendix. We stress that, as in the current implementation the step size is reduced and the timer limit is kept constant, the speed of system (2) is reduced proportionally by the reduction of the step size. In this way the distance traveled during each flow interval gets smaller and smaller, and the state x asymptotically converges to the set of minima.
Stability Analysis
Define the hybrid closed-loop system $H_{cl}$ as the interconnection of the dynamics (2) and the controller $H_c$ developed in the previous section; its flow and jump maps are obtained by stacking the plant dynamics $\varphi$ with $F_c$ and $G_c$, respectively. We consider the stabilization problem with respect to the sets $\mathcal{A} \subset \mathcal{A}_e \subset \mathbb{R}^{n+l} \times X_c$. The set $\mathcal{A}$ represents the desired equilibrium set, namely the subset of equilibria of $H_{cl}$ for which $x \in \mathcal{A}^\star$. Notice that invariance of $\mathcal{A}$ is guaranteed by all the step size variables being zero, so that no optimization step is computed; the same holds for an initialization with $\Phi(0, 0) = 0$ and/or $d_j(0, 0) = 0$, even if $x(0, 0) \notin \mathcal{A}^\star$. The set $\mathcal{A}_e$ takes this into account: it is the set of equilibrium points of $H_{cl}$ for which no optimization step is performed due to an initialization with $\Phi = 0$ and/or $d_j = 0$ for all j = 0, 1, ..., n − 1.
The proofs of Theorem 7 and of the results in the next section, based on Lyapunov arguments and invariance principles applied with the Lyapunov candidate function $V(\xi, x_c) := z - f(\mathcal{A}^\star)$, are included in the Appendix.
Remark 8. From Theorem 7 and the structure of $\mathcal{A}$ and $\mathcal{A}_e$, it follows in particular that, for any initialization such that $\det(\operatorname{col}(d_0, d_1, \ldots, d_{n-1})) \ne 0$ and $\Phi \ne 0$, boundedness of the closed-loop trajectories and asymptotic convergence to the set $\mathcal{A}$ are guaranteed.
• Remark 9. Notice that, depending on the value of the constant $\delta_{det}$, the quadratic termination property can be lost. Nonetheless, the asymptotic convergence property is preserved. •
About Robustness of the Algorithm
In this section we investigate the robustness of the proposed algorithm to noise acting on the cost function measurements. We start with a negative result showing that Direct Search algorithms based on line minimizations and asymptotic step size reduction are not robust to bounded measurement noise, however small its bound.
Theorem 10. Consider the class of Direct Search algorithms based on line minimizations and with asymptotic step size reduction, to which the algorithm Alg. 1 belongs, acting on a function $f : \mathbb{R}^n \to \mathbb{R}$ satisfying assumptions (A0) and (A2). Then, for any bound $\bar{n}_s > 0$, there exists a noise $n_s : \mathbb{R} \to \mathbb{R}$, with $|n_s(t)| \le \bar{n}_s$ for all $t \in \mathbb{R}$, such that, for noisy cost function measurements, namely $f(x(t)) + n_s(t)$, and all initial conditions apart from a set of measure zero, the sequence of iterates produced by such algorithms escapes any compact sublevel set of f.
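The mechanism behind this result can be illustrated by a toy simulation: once the step size has been driven small, bounded adversarial noise can mask the true increase of the cost along uphill steps, so the iterate drifts far from the minimizer before the masking fails. The sketch below is a deliberately simplified caricature (scalar quadratic cost, a single fixed small step size, ρ(∆) = ∆²), not the construction used in the proof.

```python
# Toy illustration of the mechanism in Theorem 10 on f(x) = x**2.
f = lambda x: x**2
rho = lambda d: d**2   # illustrative sufficient-decrease function
nbar = 0.5             # noise bound
delta = 1e-3           # step size after many (adversarially forced) reductions
x = 0.0                # start at the minimizer

for _ in range(1_000_000):
    trial = x + delta  # uphill step, away from the minimizer
    # The adversary reports f(x) + nbar at x and f(trial) - nbar at the
    # trial, so the noisy sufficient-decrease test accepts the uphill
    # step as long as f(trial) - f(x) < 2*nbar - rho(delta):
    if (f(trial) - nbar) < (f(x) + nbar) - rho(delta):
        x = trial
    else:
        break          # the noise can no longer mask the increase

# The iterate drifts to roughly x = (2*nbar - rho(delta)) / (2*delta),
# far outside any small sublevel set of f around the minimizer.
print(x, f(x))
```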
The above result shows that there is no robustness guarantee for the modified RSP algorithm, even though stability has been shown and convergence results are attainable for a proper choice of initial conditions. Robustness to measurement noise for the hybrid closed-loop system $H_{cl}$ is recovered by imposing a lower bound $\underline{\Phi} > 0$ on the global step size Φ, and modifying $G_{c/\tau}$ accordingly. In particular, in $g_5$, the discrete dynamics of Φ are modified by saturating the step size reduction from below at $\underline{\Phi}$.
Moreover, given $\delta_{det} > 0$, we restrict the domain of all the directions $d_j$ to be such that $\det(\operatorname{col}(d_0, d_1, \ldots, d_{n-1})) \ge \delta_{det}$. Without loss of generality, we will denote the desired equilibrium set within the restricted domain for the directions as $\mathcal{A}$.
The lower bound on Φ also guarantees an explicit bound on the allowable maximum noise that can be accepted without losing robustness.
Remark 13. In Mayhew et al. (2007) an explicit characterization of the practical neighborhood of convergence to $\mathcal{A}$, as a function of the step size, is provided. As the dense exploration procedure adopted in Mayhew et al. (2007) to guarantee such bounds cannot be extended to n-dimensional search spaces, a similar result cannot be achieved without further assumptions on f. Nonetheless, the norm of the gradient of f can be bounded at steady state by a function of Φ and the equilibrium set of exploring directions (see Theorem 3.3 in Kolda et al. (2003)). • Remark 14. The trade-off between practical global asymptotic stability and almost global asymptotic stability is also related to the lack of knowledge of $\mathcal{A}^\star$ or $f(\mathcal{A}^\star)$. By assuming, for example, knowledge of $f(\mathcal{A}^\star)$, the discrete dynamics of Φ can be extended with the addition of a term $\rho_f(|f(x) - f(\mathcal{A}^\star)|)$. This term would prevent the algorithm from remaining stuck at the initial position when Φ is initialized at zero and, thus, Theorem 7 could be extended to guarantee global asymptotic stability of the set of minimizers. •
Simulation Results
In this section we show the results of different simulations of the proposed hybrid controller applied to the minimization of different objective functions. Fig. 1 illustrates the level sets of the convex quadratic function (13) of $x = \operatorname{col}(x_1, x_2)$. The trajectory of a point-mass vehicle, steered by the proposed hybrid controller in order to minimize (13), is superimposed on the level sets of (13). It can be noticed that, in both Fig. 1(a) and Fig. 1(b), the distance to the minimizer tends asymptotically to zero as the step size converges to zero. The simulation reported in Fig. 2, instead, considered the nonconvex Rosenbrock function and the Dubin's vehicle dynamics $\dot{x}_1 = V\cos(\zeta)$, $\dot{x}_2 = V\sin(\zeta)$, $\dot{\zeta} = u$, with $(x_1, x_2) \in \mathbb{R}^2$ the position, V > 0 the velocity constant, $\zeta \in \mathbb{R}$ the orientation, and $u \in \mathbb{R}$ the control input. The initial conditions and parameter values were kept the same as in the previous simulation. In this case the minimizer is given by $x^\star = (1, 1)$ and, in spite of the nonconvex optimization problem, the trajectory of the state variable x converges towards it, remarkably.
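As a reproducible stand-in for these experiments, the sketch below runs a plain compass search (a basic Direct Search method with step-size reduction on failure) on the Rosenbrock function; it omits the vehicle dynamics and the hybrid controller, illustrating only the optimization layer, and all parameter values are invented.

```python
import numpy as np

# Minimal compass-search sketch on the Rosenbrock function, illustrating
# sufficient decrease and step-size reduction on failure. This is a
# simplified stand-in for Alg. 1, not the paper's controller.

def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def compass_search(f, x0, delta=0.5, mu=0.5, tol=1e-8, max_iter=100_000):
    x, fx = np.array(x0, float), f(x0)
    directions = [np.array(d) for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + delta * d
            ft = f(trial)
            if ft < fx - delta**2:    # classical sufficient decrease
                x, fx, improved = trial, ft, True
        if not improved:
            delta *= mu               # shrink the step size on failure
            if delta < tol:
                break
    return x

x_min = compass_search(rosenbrock, x0=(-1.2, 1.0))
print(x_min)   # approaches the minimizer (1, 1)
```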
Conclusion
This paper presents an extension of the results in Mayhew et al. (2007). In particular, a hybrid controller based on a modified RSP algorithm is proposed, which optimizes an objective function without gradient information and achieves almost global asymptotic stability of the closed loop composed of the controller and a particular class of continuous-time dynamical systems. As direct search methods are shown not to be robust to measurement noise, a modified practical scheme is proposed, a bound relating the minimum allowable step size to the supremum norm of the measurement noise is reported, and it is shown that a trade-off between asymptotic convergence and robustness is inevitable for this class of algorithms. Simulation results are provided to validate the proposed approach. Future developments include the extension of the proposed controller to the multi-agent scenario, in order to efficiently exploit the Parallel Subspace Property, and to more general objective functions, e.g. nonsmooth functions.
The map G_{c\τ}
The set-valued map G_{c\τ} is presented next. It is given by the composition of the maps g_i(x_c, f(x_c)), defined on the subsets D_i, i = 1, 2, ..., 5, of the jump set D. We omit the update law of the state variables that remain constant at jumps.
The sets D_i define the conditions under which the different operations of the algorithm, implemented in the functions g_i, take place.
1) Continue a positive line search.
4) Continue a negative line search.
5) Update direction and start a positive line search: this case applies if ((k = n−1 and |λ| ≤ ∆_{n−2}/2) or (k = n and |λ| ≤ ∆_{n−1}/2)) and θ∆_{n−2} ≤ λ_s Φ; the associated step-size update takes the value ∆_{n−2} if k = 0, ..., n−2 or (k = n−1 and |λ| ≥ ∆_{n−2}/2), and θ∆_{n−1} if k = n and |λ| ≤ ∆_{n−1}/2. The computation of the new conjugate direction in g_5 is addressed by the function φ : R^n × R^n × R^{n×(n−1)} × R^n, where M_{1,n−1} := col(d_1^T, ..., d_{n−1}^T)^T. The conditions in φ check whether the new direction α + β, which is computed exploiting the Parallel Subspace Property, is linearly independent from the last n−1 directions, namely whether the determinant of the concatenation of M_{1,n−1} and the new direction is larger than a tunable parameter δ_det > 0. In case this condition is not satisfied, the previous set of directions is retained.
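A hedged sketch of this guard (the interface is hypothetical; the paper defines φ on the full controller state) is:

```python
# Hedged sketch: accept the Parallel-Subspace-Property direction alpha + beta
# only if the updated direction set stays linearly independent in the sense
# |det| >= delta_det; otherwise retain the previous directions.
import numpy as np

def update_directions(directions: np.ndarray, alpha: np.ndarray,
                      beta: np.ndarray, delta_det: float = 1e-3) -> np.ndarray:
    """directions: (n, n) array whose rows are the current search directions."""
    candidate = alpha + beta
    proposed = np.vstack([directions[1:], candidate])  # drop oldest, append newest
    if abs(np.linalg.det(proposed)) >= delta_det:
        return proposed
    return directions
```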
The update rule of the states ∆_j, j = 0, 1, ..., n−1, also needs clarification. Let us consider ∆⁺_{n−1}, since the same reasoning applies to the other state variables. The condition |λ| < ∆_{n−1}/2 is a different way of expressing the condition λ = 0 while at the same time satisfying outer semicontinuity of the map g_5. Indeed |λ| < ∆_{n−1}/2 is satisfied only for λ = 0, except perhaps at initialization, since along direction d_{n−1}, ∆_{n−1} is the minimum possible displacement for λ. Moreover, it is checked whether ∆ still satisfies the bounds imposed by the global step size Φ; if this is not the case, it is updated to the corresponding upper (or lower) bound.
Proof of Theorem 1
Denote as blocked points all the points x_{kj} for which, for all j = 0, 1, ..., n−1, no improvement is found along any of the exploring directions d_j.
Theorem 17. Every limit point x̄ of the sequence of blocked points generated by Alg. 2 satisfies ∇f(x̄) = 0.
Proof. Denote by {x_k} the sequence of blocked points, so that, for every k, no sufficient decrease is found along ±d_{kj} for any j. Notice that, by det(col(d_0^T, d_1^T, ..., d_{n−1}^T)) > δ_det, compactness of the sublevel sets of f, and the fact that the length of new directions, computed in line 29 of Alg. 1, is the distance between two explored points (and thus bounded by the diameter of the initial compact sublevel set), the norm of d_{kj}, for all j = 0, 1, ..., n−1 and k ≥ 0, is upper bounded by d_max := max{max_{j} ‖d_{0j}‖, diam(L_f(x_0))}, as well as lower bounded. The sequence {d_{kj}} is thus bounded and, as such, considering any limit point d̄_j, we can conclude that ∇f(x̄)^T d̄_j ≥ 0. Since this result is valid also for −d̄_j, it follows that ∇f(x̄)^T d̄_j = 0. Moreover, since {d̄_0, d̄_1, ..., d̄_{n−1}} span R^n, we can conclude that ∇f(x̄) = 0.
Theorem 18. Every limit point x̄ of the sequence of blocked points generated by Alg. 2 is a minimum.
Proof. By assumption (A1) and Theorem 17, we only need to show that no limit point of the sequence of blocked points is a maximum. As, by (A1), every maximum is assumed to be an isolated point, it follows by definition that, considering a local maximum x̄ ∈ R^n, there exists ε_m > 0 such that f(x) < f(x̄) for all x ≠ x̄ with ‖x − x̄‖ ≤ ε_m.
Suppose, by contradiction, that a subsequence {x_l} of blocked points converges to x̄. If every term of the sequence {x_l} is such that x_l ≠ x̄, then, by the sufficient decrease condition and the definition of a local maximum, f(x_l) ≤ f(x_{l̄}) < f(x̄) for all l ≥ l̄; by continuity of f, this contradicts x_l → x̄, so such a sequence cannot exist.
So the only way for such a sequence to exist is if, for some l' ≥ l̄, x_l = x̄ for all l ≥ l'.
As f is analytic at x̄, there exists an even m > 0 such that the m-th directional derivative of f at x̄ is different from zero and, x̄ being a maximum, its value is negative. Denote it by f_m(x̄). Then, considering the Taylor expansion of f(x̄ + ∆d) around x̄, and noticing that ∆^{1/∆} is o(∆^m) and that d is lower bounded, there exists a ∆̄ ∈ (0, 1) such that f(x̄ + ∆d) < f(x̄) − ρ(∆) for all ∆ ∈ (0, ∆̄), and thus there exists l ≥ l' such that x_l ≠ x̄ and f(x_l) < f(x̄).
Thus no limit point of the sequence of blocked points can be a maximum; hence they are all minima.
We now prove Theorem 1, namely that the sequence of iterates converges to the set of minimizers A*. Denote, without loss of generality, the sequence {x_{kji}} as {x_k}. Notice that the subsequence of blocked points {x_{b_k}} of {x_k} converges to A* by Theorem 18. Suppose, by contradiction, that a subsequence {x_{p_k}} of {x_k} converges to a point x̄ ∉ A*, with ‖x̄‖_{A*} > ε_p for some ε_p > 0. By the definition of a converging sequence, there exists p' > 0 such that, for all p ≥ p', ‖x_{p_k} − x̄‖ ≤ ε_p/2. Then there exists b' > 0 such that, for all b ≥ b', f(x_{b_k}) < inf{f(x) : ‖x − x̄‖ ≤ ε_p}. Pick χ = max(p', b') and define b̄ ≥ χ as the smallest k such that x_{b̄} is a blocked point, and p̄ ≥ χ as the smallest k such that x_{p̄} belongs to the sequence {x_{p_k}}. Then, clearly, since f(x_k) is a nonincreasing sequence (by the sufficient decrease condition), for k ≥ b̄ no point in {x : ‖x − x̄‖ ≤ ε_p} can be selected, while {x_{p_k}} contains infinitely many such points, thus reaching a contradiction.
Proof of Theorem 7
We first show that H_cl is nominally well-posed and that all maximal solutions are complete.
Proof. The sets C and D are clearly closed.
Both F and K are continuous functions on C and thus outer semicontinuous and locally bounded. Moreover, being single-valued, they are trivially convex-valued for every (x, x_c) ∈ C.
The set-valued map G(·, f(·)) is composed of linear functions, apart from an instance of α⁺ where the norm operator is present (which is continuous on its set of definition) and an instance of ∆⁺_{n−1} where the max function is used (which is continuous as well). The map G_{c\τ} is thus piecewise continuous. As none of the inequalities in the discrete dynamics is strict, at the points of discontinuity the map includes both the left and right limits. It is thus outer semicontinuous by definition.
Since G(·, f(·)) is piecewise continuous, it is locally bounded by continuity. The hybrid closed-loop system H_cl thus satisfies the hybrid basic conditions (Assumption 6.5 in Goebel et al. (2012)) and is nominally well-posed by Theorem 6.8 in Goebel et al. (2012).
Proof. We prove completeness of maximal solutions to H_cl by invoking Proposition 6.10 in Goebel et al. (2012) on existence of solutions, and by showing that no maximal solution jumps outside of C ∪ D or has finite escape time.
We first show that the viability condition in Proposition 6.10 in Goebel et al. (2012) holds for all (ξ, x_c) ∈ C \ D, namely that F(ξ, x_c) ∩ T_C(ξ, x_c) ≠ ∅, with T_C : R^{n+l} × X_c → R^{n+l} × X_c the Bouligand tangent cone of C at (ξ, x_c). Since 0 ∈ T_C(ξ, x_c) always, the viability condition is readily satisfied for all the state variables apart from ξ and τ. As the projection onto the ξ-subspace of C \ D is empty, the viability condition is satisfied also for the ξ state variable. Regarding the timer τ, define the projections of C and D onto the τ-subspace as C_τ := [0, τ̄] and D_τ := [τ̄, ∞). As the set C_τ \ D_τ = [0, τ̄) is open to the right, we only need to check the viability condition at τ = 0. Since at τ = 0, τ̇ = 1, the viability condition is satisfied also for τ.
Then, by Proposition 6.10 in Goebel et al. (2012), there exists a nontrivial solution from every initial condition in R^{n+l} × X_c. Moreover, since G(C ∪ D) ⊂ C ∪ D, the solutions to H_cl either have finite escape time or are complete. Notice that, for all solutions to H_cl, ζ(t) does not have finite escape time by assumption. We show completeness by showing that all the other components of (ξ, x_c) are bounded along all solutions to H_cl. Indeed, by condition (A2) and the update rule for the new directions, for all initial conditions (ξ(0, 0), x_c(0, 0)) ∈ R^{n+l} × X_c, the state variables d_j, with j = 0, 1, ..., n, are upper bounded in norm by d_max, where, given A ⊂ R^n, diam(A) := sup_{x,y∈A} ‖x − y‖. Moreover, as the determinant of the matrix composed of the set of directions is lower bounded by δ_det > 0, the directions d_j are also lower bounded in norm; denote the lower bound as d_min ∈ R. Then the ∆_j, j = 0, 1, ..., n, are upper bounded by ∆_max := (1 + γ) max_{j=0,1,...,n} ∆_j(0, 0) and, based on the same reasoning, the remaining state variables are bounded as well. Hence every state variable of H_cl is bounded, and thus all maximal solutions to H_cl are complete.
In order to prove stability of A, define the Lyapunov function V(ξ, x_c) = z − f(A*). We stress that, given assumption (A1), f(A*) is a scalar. Since V is C¹, it is possible to bound the growth of V along any maximal solution φ to H_cl as in (17) and (18), where t(j) and j(t) denote respectively the least time t and the least index j such that (t, j) ∈ dom φ, and where D_{2,5} := D_2 ∪ D_5 and D_{1,3,4} := D_1 ∪ D_3 ∪ D_4. These conditions follow directly from the definitions of F_c and G_c. Indeed z changes only during jumps and, in that case, for x ∉ A*, it can decrease for x_c ∈ D_{1,3,4} and remains unchanged for x_c ∈ D_{2,5}. The Lyapunov function V is, however, not strictly nonincreasing, since there exist initial conditions for z and x such that z(0, 0) < f(x(0, 0)). Nonetheless, after at most 3 timer cycles, when D_3 is reached, z gets updated to f(x).
The above nonincreasing conditions on V are thus only valid for t ≥ 3τ̄ and j ≥ 2, where (t, j) = (0, 0) initially. As we show next, this does not hinder the stability of the set A or the convergence to the set A_e for the hybrid system H_cl.
To show attractivity of A_e we invoke Theorem 4.7 in Sanfelice et al. (2007), setting U := R_{≥(3τ̄,2)}(R^{n+l} × X_c), namely the set of states that are reachable after time 3τ̄ and 2 jumps (see Definition 6.15 in Goebel et al. (2012)). Notice that U is forward invariant due to Lemma 20 and the definition of the reachable set. Referring to the remark at the bottom of the proof of Theorem 4.7, we set T = 3τ̄ and J = 2 and define u_C and u_D in the statement of Theorem 4.7 as in (17) and (18) respectively; then, for some r ∈ V(R^{n+l} × X_c), the trajectories of H_cl approach the largest weakly invariant subset of the corresponding level set of V. The Lyapunov function V is constant along solutions to H_cl in D_2, D_5, and on the set A_e. By m⁺ = 1 in g_2 and by q⁺ = 0 in g_5 we can conclude that neither D_2 nor D_5 is weakly invariant. Indeed A_e is actually the largest weakly invariant set contained in U where V is constant along maximal solutions whose range is contained in U.
Proof of Theorem 10
We will first show that, for any n̄_s > 0, this class of algorithms can potentially remain stuck at every x ∈ R^n. By continuous differentiability of f, for every compact set C ⊂ R^n there exists a maximum gradient norm ‖∇f‖_C. Consider, without loss of generality, a unique step-size variable ∆ > 0 and a single direction d ∈ R^n. By the mean value theorem it follows that, for all x, w ∈ C, |f(x) − f(w)| ≤ ‖∇f‖_C ‖x − w‖ and, for w = x − p∆d at iteration k of the algorithm, |f(x_k) − f(x_k − p∆_k d_k)| ≤ ‖∇f‖_C ∆_k ‖d_k‖, where, by continuous differentiability of f, ‖∇f‖_C < ∞ for all compact C ⊂ R^n, and ‖d_k‖ ≤ d̄ for some d̄ > 0.
Given a noise bound n̄_s > 0, and remembering that ∆_k → 0 as k → ∞, there exists a k' > 0 such that (1/(1 − θ)) ‖∇f‖_C ∆_{k'} d̄ + ι ≤ n̄_s (23), with ι > 0 the value of the series Σ_{n=0}^∞ (θ^n ∆_{k'})^{1/(θ^n ∆_{k'})}, proved to be convergent in Lemma 16. The term 1/(1 − θ) follows by noticing that, given iteration k_1, after k_2 iterations of blocked points ∆_{k_1+k_2} = θ^{k_2} ∆_{k_1} and, as k_2 → ∞, summing all the terms yields a geometric series. We defined the bound in this way since we build the noise function by iteratively summing previous noise values to produce the new one.
A noise signal defined as n_s(k) = 0 for k < k' and n_s(k) = ‖∇f‖_C ∆_k d̄ + ρ(∆_k) + n_s(k−1) (24) for k ≥ k' will then keep the algorithm stuck at x = x_{k'} for all k ≥ k'.
The reason is that the relationship f(x_k + p_k ∆_k d_k) + n_s(k) ≥ f(x_k) + n_s(k−1) − ρ(∆_k) will always be satisfied, where f(x_k + p_k ∆_k d_k) + n_s(k) is the cost-function measurement obtained at iteration k and f(x_k) + n_s(k−1) is the cost-function measurement obtained at the previous iteration: no improvement is ever found in any direction. Now notice that, at iterations where no improvement would be found in the noiseless case, the noise can instead act so that an improvement is mistakenly detected. Indeed, in that case, with a noise of the form n_s(k) = −‖∇f‖_C ∆_k d̄ − ρ(∆_k) + n_s(k−1) (25), for k ≥ k_1 ≥ 0 and n_s(k_1) = 0, a wrong descent direction will be picked from everywhere in C.
By alternating the noise values of (24) and (25), resetting n_s(k−1) = 0 when switching strategy, and ensuring ∆_k ≤ ∆_{k'}, the noise can steer the algorithm to every point in C.
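To make the stalling mechanism concrete, the following minimal sketch (all constants assumed; the forcing function ρ(∆) = ∆² here is a simpler stand-in for the one used in the paper) applies the noise recipe (24) to a one-dimensional compass search with sufficient decrease on f(x) = x²: the noisy measurements never reveal an improvement, so the iterate never moves while ∆_k → 0.

```python
# Minimal sketch (constants assumed): adversarial noise that keeps a 1-D
# compass search with a sufficient decrease test stuck forever.
f = lambda x: x ** 2
rho = lambda delta: delta ** 2            # assumed forcing function
grad_bound, d_bar, theta = 4.0, 1.0, 0.5  # |f'| <= 4 on C = [-2, 2]

x, delta, n_prev = 1.0, 0.5, 0.0
for k in range(20):
    n_k = grad_bound * delta * d_bar + rho(delta) + n_prev  # recipe (24)
    stuck = True
    for step in (+delta, -delta):         # try both search directions
        # sufficient decrease test on the NOISY measurements:
        if f(x + step) + n_k < f(x) + n_prev - rho(delta):
            x, stuck = x + step, False
            break
    if stuck:
        delta *= theta                    # blocked point: contract step size
    n_prev = n_k
print(x)  # still 1.0: no improvement was ever detected
```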
Consider now a compact set C_1 ⊃ C and denote the maximum gradient norm of f on C_1 as ‖∇f‖_{C_1}, where ‖∇f‖_{C_1} ≥ ‖∇f‖_C.
Applying the noise (24) in C, it is possible to notice that there exists a k_1 ≥ k' such that, for k = k_1, condition (23) is satisfied for ‖∇f‖_{C_1}. Now, switching between the noise expressions (24) and (25), while guaranteeing that ∆_k ≤ ∆_{k_1}, makes it possible to steer the sequence of iterates everywhere in C_1, and in particular outside C.
It is thus clear that repeating this procedure iteratively can make the sequence of iterates leave any compact sub-level set of f.
Notice that, by the bounds on the state variables defined in the proof of Lemma 20, it is always possible to choose δ > 0 as the maximum radius of the balls, one per state variable composing (x, x_c), such that the maximum of the bounds reported in the proof of Lemma 20 is upper bounded by ε_1. Namely, pick δ such that, for all initial conditions in δB(A), max_{(ξ(0,0), x_c(0,0)) ∈ δB(A)} {d_max, ∆_max, Φ(0, 0), d_max ∆_max, n d_max ∆_max, max{z(0, 0), f(x(0, 0))} − f(A*), d_max² ∆_max²} < ε_1.
Pick B := cl{x ∈ R^n : x ∈ ε_1 B(A) and x ∉ L}. By assumptions (A0)-(A2) and the fact that the set of directions d_j, with j = 0, 1, ..., n−1, always spans R^n, it follows (from Theorem 18 and the fact that, outside any sufficiently small neighborhood of the local maxima and local minima, the norm of the gradient of f is bounded away from zero) that for every compact set in R^n not containing a local minimum there exists a Φ̄ > 0 such that, for all Φ ∈ (0, Φ̄), there exists at least one direction that, rescaled by λ_s Φ, produces a sufficient decrease of f from every point in that compact set.
Since B is compact and does not contain a local minimum, it follows, by the above reasoning, that there exists Φ̄ > 0 such that, for all Φ_bound ∈ (0, Φ̄), at least one direction is a descent direction for Φ = Φ_bound; hence, after at most n iterations, z decreases.
Racial and Ethnic Differences in Myopia Progression in a Large, Diverse Cohort of Pediatric Patients
Purpose The purpose of this study was to characterize the differences in myopic progression in children by race/ethnicity and age. Methods Patients enrolled in Kaiser Permanente Southern California between 2011 and 2016 and between the ages of 4 and 11 years old with a documented refraction between −6 and −1 diopters (Ds) were included in this retrospective cohort study. Patients with a history of amblyopia, strabismus, retinopathy of prematurity, or prior ocular surgery were excluded from analyses. Patients’ race/ethnicity and language information were used to create the following groups for analysis: white, Black, Hispanic, South Asian, East/Southeast Asian, Other Asian, and other/unknown. A growth curve analysis using linear mixed-effects modeling was used to trace longitudinal progression of spherical equivalents over time, modeled by race/ethnicity. Analyses adjusted for potential confounders, including body mass index (BMI), screen time, and physical activity. Results There were 11,595 patients who met the inclusion criteria. Patients were 53% girls, 55% Latino, 15% white, 9% black, 9% East/Southeast Asian, and 2% South Asian. Mean age (standard deviation [SD]) at the time of initial refraction was 8.9 years (1.6 years). Patients had an average (SD) of 3.4 (1.5) refractions, including the baseline measurement, during the study period. A three-way interaction model that assessed the effects of age at baseline, time since baseline, and race/ethnicity found that children of East/Southeast Asian descent showed significantly faster myopia progression across time (P < 0.001). East/Southeast Asian patients who presented with myopia between 6 to < 8 years progressed similarly to white patients in the same age group and significantly faster compared with white patients in other age groups. Conclusions Myopia progression differed significantly between East/Southeast Asian and white patients depending on the patients’ age.
Myopia is increasingly appreciated as a major global public health concern. Although myopia has long been established as a common cause of vision impairment, 1,2 myopia's growing prevalence, especially in East Asia, necessitates greater exploration into the risk factors for myopia onset and progression. Approximately one-third of American and European adults are myopic, whereas the prevalence of myopia in many East Asian countries now reaches 80-90%. [3][4][5][6][7] It is estimated that approximately 49.8% of the global population will have myopia by 2050 and 9.8% will have high myopia of -5.0 D or less. 2 The concerns around myopia extend beyond the need for corrective lenses. Being myopic increases the patient's risk of irreversible vision loss from multiple secondary sequelae, including retinal detachments, maculopathy, choroidal neovascular membranes, and optic neuropathy. 8 Patients with high myopia (≤-10.0 D) experience diminished quality of life comparable to those with keratoconus. 9 Visual impairment from uncorrected myopia is estimated to result in a global potential productivity loss of US $244 billion, with the Southern and Eastern parts of Asia taking on the greatest burden. 10 The risk factors for myopia progression are multifactorial and incompletely understood. The risk factors driving myopia incidence in children are of particular importance, as the incidence of childhood onset of myopia has increased. 11 Myopia that begins earlier in childhood has been shown to progress faster than adult-onset myopia. 12,13 Pärssinen et al. examined the risk factors for pediatric myopic progression into adulthood and found that higher myopia in adults was associated with less time spent on sports and outdoor activities during childhood and higher parental myopia. 14 Hu et al. found that older age, female sex, and lower initial refractive error were associated with faster myopia progression in Chinese patients. 15 Donovan et al.'s meta-analysis of children wearing single-vision spectacles found that myopia progression rates were higher in urban Asians compared to urban Europeans, with younger children and girls having greater annual rates of progression. 16 Considerable research has examined interventions to slow myopia progression, and a one-size-fits-all approach may not be appropriate. However, most studies on myopia follow ethnically homogenous cohorts, which limits the generalizability of results. Although racial differences in myopic progression have been examined previously, the exact role that race plays in the development and progression of myopia remains incompletely understood. Some studies have compared the prevalence of myopia across different geographic regions to assess racial differences. However, this approach generates questions around confounding variables, as the diversity of countries and cultures brings about differences in risk factors other than race. In addition, as myopia often develops at younger ages, studying children will identify which groups are at greatest risk for progression.
The purpose of the current study is to compare progression data between races from a large real-world population. The value of using real-world population data is that the information comes from the same source population to minimize selection bias and confounding. This study is a retrospective cohort study that includes over 36,000 refractions from over 11,000 children with myopia. Information from this study may help in designing racially and culturally specific interventions and in planning clinical trials.
METHODS
We conducted a retrospective cohort study of pediatric patients enrolled in Kaiser Permanente Southern California (KPSC), an integrated health care organization whose patient population is reflective of the socioeconomic and racial diversity of Southern California. 17 KPSC's electronic health records (EHRs) from 2011 to 2016 were used to identify study-eligible patients.
We focused on children with early onset myopia who were between 4 and 11 years old when they had a refraction measurement between -6 and -1 diopters (Ds). The first measurement where the refractive error was ≤-1 D defined the baseline measurement, and all follow-up measurements were included in the analysis. Patients also had to have at least one follow-up refraction ≥21 months after the baseline measurement and before the end of 2017. Patients with amblyopia, strabismus, or retinopathy of prematurity were identified through International Classification of Diseases (ICD) codes and excluded from the sample. Patients with strabismus or cataract surgery prior to their first qualifying refraction measurement were identified by Current Procedural Terminology (CPT) codes and were also excluded. Furthermore, patients whose medical records lacked information on gender were excluded from analysis (n = 18).
Patient information on race, ethnicity, and language preferences was abstracted from the KPSC EHR. Patients were surveyed on this information upon enrollment within KPSC, and additional details could be added at any time during their care. For children under the age of 12 years old, parents were asked for this information. Patients older than 12 years old were asked to self-report this information. Patients born at KPSC had their maternal race and ethnicity used for identification purposes unless otherwise specified. For race, patients could identify as American Indian or Alaska Native, Asian, Black or African American, Hispanic or Latino, Native Hawaiian or Pacific Islander, white, decline to state, other, or unknown. For ethnicity, patients could select from a list of over 250 groups or select "Decline to State," "Other," or "Unknown." For our study, race/ethnicity categories were collapsed to white, Black, Hispanic, South Asian, East/Southeast Asian, other Asian, and other/unknown. Patients were classified as South Asian if the patient self-identified, or, in the case of children under the age of 12 years, were identified by their parent(s), as Afghan, Asian Indian, Bangladeshi, East Indian, Nepalese, Pakistani, or Sri Lankan, or indicated that their written or spoken language was Bengali, Gujarati, Hindi, Malayalam, Panjabi, Pashto, Punjabi, Sinhalese, Urdu, or Urdu Pakistan. Although other languages are spoken in South Asia, the aforementioned languages were the only ones that patients within this cohort identified as using. Patients were classified as East/Southeast Asian if they were identified as a racial/ethnic group related to, or had a primary, spoken, or written language pertaining to, East/Southeast Asia. The East/Southeast Asian group included the following racial/ethnic groups: Asian/Pacific Islander, Cambodian, Chinese, Filipino, Indonesian, Japanese, Kinh/Viet, Korean, Laotian, Malaysian, Tagalog, Taiwanese, Thai, and Vietnamese. Languages classifying a patient as East/Southeast Asian were the following: Burmese, Chinese, Dzongkha, Hakka, Japanese, Khmer, Korean, Laotian, Mandarin, Philippine, Tagalog, Thai, Toishanese, and Vietnamese. Patients who were identified as Asian race but were missing more specific race-ethnicity information, specified their language as English only, spoke languages not typically associated with South Asian or East/Southeast Asian regions, or lacked information to further classify the Asian group were categorized as other Asian.
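As a hedged illustration of this coding step (field names hypothetical, category lists abbreviated), the collapsing of the EHR race/ethnicity and language fields into analysis groups can be expressed as:

```python
# Hedged sketch (field names hypothetical, category lists abbreviated):
# collapse EHR race/ethnicity and language fields into analysis groups.
SOUTH_ASIAN_LANGS = {"Bengali", "Gujarati", "Hindi", "Punjabi", "Urdu"}
EAST_SE_ASIAN_LANGS = {"Chinese", "Japanese", "Korean", "Tagalog", "Vietnamese"}

def analysis_group(race: str, ethnicity: str, language: str) -> str:
    if ethnicity in {"Asian Indian", "Pakistani", "Sri Lankan"} \
            or language in SOUTH_ASIAN_LANGS:
        return "South Asian"
    if ethnicity in {"Chinese", "Filipino", "Korean", "Vietnamese"} \
            or language in EAST_SE_ASIAN_LANGS:
        return "East/Southeast Asian"
    if race == "Asian":
        return "Other Asian"   # Asian race without further classifying detail
    mapping = {"White": "White",
               "Black or African American": "Black",
               "Hispanic or Latino": "Hispanic"}
    return mapping.get(race, "Other/Unknown")
```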
Cycloplegic, manifest, final, and wearing refractions were included for analysis. If a patient had more than one refraction on the same day, the measurement was selected in that order of priority. The eye with the more negative refractive error at baseline was chosen for analysis. Measurement or recording errors were possible, and patients with a biologically implausible average yearly refraction change (calculated using the baseline and final measurements of refractive errors) of ≥ 10 D were excluded from analyses.
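A hedged pandas sketch of these two rules (column names hypothetical) follows; it selects the eye with the more negative baseline refraction and drops records whose annualized change is implausible:

```python
# Hedged sketch (column names hypothetical): pick the worse eye at baseline
# and exclude biologically implausible average yearly refraction changes.
import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per patient with per-eye baseline/final refractions and dates
    baseline = df[["od_baseline", "os_baseline"]].min(axis=1)  # more negative eye
    worse_is_od = df["od_baseline"] <= df["os_baseline"]
    final = df["od_final"].where(worse_is_od, df["os_final"])
    years = (df["final_date"] - df["baseline_date"]).dt.days / 365.25
    annual_change = (final - baseline).abs() / years
    return df[annual_change < 10]   # >= 10 D/year treated as recording error
```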
Covariates of interest included age, sex, race/ethnicity, body mass index (BMI), year of first examination, screen time, physical activity, and outdoor time. Age at baseline was defined as the patient's age at the time of the first refraction measurement. BMI was calculated using height and weight measurements closest to the date of the initial refraction. Screen time, physical activity, and outdoor time were abstracted from the EHR. At well-child visits, patients were asked whether they had < 2 hours of screen time per day, > 1 hour of physical activity per day, and > 2 hours of outdoor time per day. Responses from the visit closest to baseline were abstracted for analyses. Data on outdoor time were only available in 2017.
A growth curve analysis using linear mixed-effects models was used to trace longitudinal progression of spherical equivalents (SEs) over time by age at baseline. As this longitudinal model relies on person-time, it traces an average trend across observations among patients of the same age or the same time since onset, rather than tracing each individual's trajectory and then averaging those trajectories.
Analyses adjusted for potential confounders or proxies of confounders including BMI z-score percentiles (< 5%, 5-< 85%, 85-< 95%, and 95-100%), screen time (< 2 vs. ≥ 2 hours per day), and physical activity (≥ 1 vs. < 1 hour per day). We used a conditional growth model with refractive error as the outcome to estimate the fixed and random effects of time since baseline measure. These time effects allowed us to trace the trend of myopia progression by age at baseline and across time, conditional on potential confounders. The intrapatient correlation was specified as an autocorrelation structure of order 1. To understand whether the growth trajectory varied with different baseline ages and race/ethnicities, we included a three-way interaction between the time of refractive error measurement, age at baseline measurement, and race/ethnicity. The post hoc tests of pairwise comparisons of the estimated growth trends between race/ethnicity groups were performed using Tukey's method. 18 Patients missing data on screen time, physical activity, or outdoor time were categorized as unknown for these variables and were included in analyses. Analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA) and R (R version 3.4.3).
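The growth model was fit in SAS and R; as a hedged Python analogue (column names hypothetical), a GEE with an autoregressive working correlation can stand in for the mixed model with AR(1) intrapatient correlation:

```python
# Hedged Python analogue of the growth model (the original analysis used
# SAS/R mixed models); a GEE with an autoregressive working correlation
# approximates the AR(1) intrapatient structure. Column names hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("refractions_long.csv")   # hypothetical long-format table

formula = ("se ~ years_from_baseline * C(age_group) * C(race_ethnicity)"
           " + C(bmi_pct_cat) + C(screen_time) + C(physical_activity) + sex")
model = smf.gee(formula,
                groups="patient_id",
                data=df,
                time=df["years_from_baseline"],
                cov_struct=sm.cov_struct.Autoregressive(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```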
Institutional Review Board (IRB) approval was obtained. This research also adhered to the tenets of the Declaration of Helsinki.
RESULTS
A total of 11,595 patients met inclusion criteria (Fig. 1) and contributed 39,690 measurements for analyses. The cohort consisted of 6327 (55%) patients of Latino race/ethnicity and 6122 (53%) girls ( Table 1). The average age at baseline (standard deviation [SD]) was 8.9 years (1.6 years). Of these children, 7% were between 4 and < 6 years of age, 21% were between 6 and < 8 years, 41% were between 8 and < 10 years, and 31% were between 10 and < 12 years. The average length of follow-up (SD) was 3.1 years (0.9 years) with a range of 1.8 to 5.9 years (Tables S1 & S3). Data on screen time and physical activity were missing for 13% and 10% patients, respectively. Among patients with available information, 90% patients reported < 2 hours of screen time per day and 94% patients had physical activity of ≥ 1 hour per day.
Patients underwent an average of 3.4 (SD = 1.5) refractions, including the baseline measurement, during the study period, and 75% patients had at least 3 measurements for analyses (see Table 1). The average SE at baseline was -2.0 (SD = 1.0) diopters (Ds) and varied between -2.2 and -1.9 D across race/ethnicity groups (see Table 1). Of all refractive errors at baseline, 5.8% were cycloplegic, 84.9% were final, 8.1% were manifest, and 1.1% were wearing (Table S1). Of all refractive errors used in the analysis, including baseline, 4.6% were cycloplegic, 85.7% were final, 5.1% were manifest, and 4.7% were wearing (Table S1). Among all 39,690 measurements, 26% of measurements were taken when the patient was between 12 and 16.2 years of age.
Of the 11,595 patients in the cohort, 26 children were missing information on BMI, leaving 11,569 children for the growth model analyses. Table 2 model A shows results for mixed-effects models controlling for potential confounders, such as screen time and physical activity. Model A shows that, on average, SE decreased by 0.37 D per year postbaseline. Boys had a slightly higher SE, by 0.02 D, compared to girls (P = 0.007). We did not find significant differences by levels of screen time and physical activity. Compared to younger patients between 4 and < 6 years of age, older patients were found to have more severe myopia (see Table 2, model A). Only children of Latino, East/Southeast Asian, and other Asian race showed significant differences in their severity of myopia compared to white children, controlling for sex, age at baseline, and change over time. Table 2 model B shows all significant effects of a three-way interaction model that assessed the effects of age at baseline, time since baseline, and race/ethnicity. Only children of East/Southeast Asian descent showed a demonstrably different growth trajectory across time (P = 0.001). Figure 2A traces the change over time and suggests that East/Southeast Asian children's myopia progressed faster than that of white children. Although the average SE at the time of initial refraction is more negative for white children than East/Southeast Asian children, a crossover occurs at 1-year follow-up, when progression is higher for East/Southeast Asian children compared to white children. Figure 2B used age at diagnosis and time since baseline to calculate myopia trajectories across age and by age of onset among white and East/Southeast Asian children. A pairwise test of slopes (Table 3) showed that white children appeared to progress independently of the age of myopia onset. Conversely, East/Southeast Asian children had different trajectories across age, and trajectories that varied significantly by age of onset, when compared to white children (see Table 3, model B). Overall, East/Southeast Asian children demonstrated a greater degree of progression compared to their white counterparts (see Table 2). Furthermore, East and Southeast Asian children who presented with myopia between 10 and < 12 years of age had significantly different changes over time compared to children of the same race who were diagnosed at younger ages (see Fig. 2, Table 3).
DISCUSSION
The current study presents myopic progression data across race and ethnicity within one population. The study does show that race/ethnicity is a significant predictor for myopia progression.
Only East/Southeast Asian children differed in terms of their overall trajectory from white children, having steeper declines in SE. White children tended to have similar degrees of myopia progression across ages of < 10 years. Myopia is a complex and multifactorial disease that involves genetic and environmental factors. Increased outdoor time, low-dose atropine, and orthokeratology have been used with variable success to prevent the onset or progression of myopia. 19 Understanding which patients are at risk for myopia progression, and at what ages, can help focus attention on possible interventions for higher-risk patients. Hu et al.'s Chinese cohort (n = 495, mean age 5.12 years) found that 35.8% of children demonstrated refractive stability over at least 2 years. Further, the authors found that older age, female sex, and lower initial refractive error were associated with faster myopia progression. 15 Donovan et al.'s meta-analysis of children wearing single-vision spectacles found that myopia progression rates were higher in urban Asian than urban European populations, with younger children and girls having greater annual rates of progression. 16 Our findings support Donovan's finding in a cohort that shares the same physical environment.
Consistent with the findings from Hu et al., we found that myopia progression begins immediately from the time of the baseline measurement and continues over time.
In our current study, information on screen time and physical activity had high proportions of missing data, 13% and 10% respectively, and these proportions were larger than the proportion of patients with > 2 hours of screen time per day (9%) and patients with < 1 hour of physical activity per day (5%). Additionally, the available data showed little distinction between race-ethnicity groups and might be subject to recall or response bias. Given the high proportion of missing data and the lack of statistical significance of screen time and physical activity in the univariate results, we conducted a sensitivity analysis without these two variables. We found that the effects and significances of the regression coefficients of time were consistent between the two models with and without physical activity and screen time (Table S4). In a prospective longitudinal study of 10,000 children between 5 and 15 years of age, Saxena et al. found that use of computers/video games and watching television were significant risk factors for myopia progression within 1 year. 20 Additionally, in a 2-year prospective cohort study of 156 medical students, Jacobsen et al. found a significant, inverse association between physical activity and refractive change toward myopia. 21 Although our current study found no association between screen time or physical activity and myopia progression, future work can investigate screen time using finer categories and physical activity in younger populations, with a distinction between outdoor and indoor physical activity. Our study has some limitations. Although our sample is larger than that of population-based cohort studies, such as the Guangzhou Twin Eye Study (GTES; n = 1831), 22 the Generation R study (n = 3422), 23 and the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 2833), 24 and our results are similar to prior studies in many ways, results may not be fully generalizable to other white or East/Southeast Asian populations. As we were interested in the trajectories of children who present with myopia earlier in life, we did not recruit children older than 11 years into this study, leaving fewer, yet numerically sufficient, numbers to estimate trends beyond age 11 years. Another limitation is the study's real-world setting, where cycloplegic refractions were not performed routinely in patients with myopia in this age group. The lack of cycloplegia results in overestimation of myopia in young children and, as a result, the values presented herein may overestimate myopic error; however, the purpose of this study was not to characterize absolute refractive error in children but instead to determine the differences in myopic trajectories based on race. Given that cycloplegia was not the norm in children and there was no differential application between race and ethnic groups, we do not anticipate that the lack of cycloplegia would affect the differences in progression seen between races. We also assumed that the progression was linear, and we verified this assumption by reviewing a spaghetti plot of the refractive errors and performing a test for curvature, which was not significant.

[Table 3: pairwise tests by baseline age group [6,8), [8,10), [10,12). Model B tests the modification effect of age at baseline examination and years from baseline on racial/ethnic differences in myopia progression (three-way interaction between years from baseline, baseline age, and race/ethnicity; n = 11,569). * Significant at P < 0.05.]
The strength of our study lies in the real-world analysis of a large, racially and ethnically diverse cohort of 11,595 patients. Additionally, the use of an EHR-based dataset allows us to longitudinally assess refractive errors in a large cohort of patients, similar to the GTES and ALSPAC studies. 22,24 With the size and diversity of our cohort, we were able to analyze 39,690 refractive error measurements and identify differences in myopia progression between major race and ethnicity groups and groups within the Asian category. Such analysis has been able to reveal differences between groups that would have been masked with a smaller or less diverse study population.
Our findings suggest that prevention efforts and clinical trials should consider race. Particular attention should be paid to East and Southeast Asian children, as they demonstrate higher progression of myopia than any other race.
Comparative quasi-static mechanical characterization of fresh and stored porcine trachea specimens
Tissues of the upper airways of critically ill patients are particularly vulnerable to mechanical damage associated with the use of ventilators. Ventilation is known to disrupt the structural integrity of respiratory tissues and their function. This damage contributes to the vulnerability of these tissues to infection. We are currently developing tissue models of damage and infection to the upper airways. As part of our studies, we have compared how tissue storage conditions affect mechanical properties of excised respiratory tissues using a quasi-static platform. Data presented here show considerable differences in mechanical responses of stored specimens compared to freshly excised specimens. These data indicate that implementation of storage and maintenance procedures that minimize rapid degradation of tissue structure are essential for retaining the material properties in our tissue trauma models.
Introduction
The most common life-threatening hospital-acquired infection is pneumonia, primarily associated with mechanical ventilation and known as ventilator-associated pneumonia (VAP) [1]. The processes that result in these deep lung infections are thought to involve the transfer of bacteria from the oral cavity into upper respiratory tracheal and bronchial tissues, which may have been mechanically damaged from the insertion of a ventilator tube [2]. This damage may contribute to the occurrence of localized infections in these tissues known as ventilator-associated tracheobronchitis (VAT) [3]. Understanding the relationship between damage and infection of upper respiratory tract tissues has considerable potential to inform and stimulate new therapies aimed at mitigating VAT and VAP.
The trachea is a multi-layered, fiber-oriented composite of soft tissues with viscoelastic material properties [4]. The structural stability of the trachea arises from its fibrous, collagen-rich hyaline cartilage. The non-linear and anisotropic material behaviors of the trachea are attributed to the extensive meshwork of fibers in its soft connective tissues [5]. The mechanical properties of the trachea and its subcomponent tissues have been studied in a wide variety of mammalian species, including specimens sourced from human cadavers. Samples have been studied under both compression and tension. For these specimens, the moduli range from the order of 10 kPa to the order of 100 MPa [6][7][8][9][10]. These variations in modulus values have been attributed to species-specific differences in the tissues and to sample orientation [6]. These variations have not been widely considered to be due to differences in sample preparation and storage.
As part of our aim to develop respiratory tissue-based models of damage and infection, we have undertaken comparative studies of trachea tissue specimens, stored under different conditions reported in the literature. In this paper, specimens of porcine tracheal tissues were excised and used immediately, stored in buffer, or frozen under conditions described in published studies that reported mechanical properties under compressive forces. Compression studies have been commonly used to assess the mechanical properties of fresh and stored tissues in a number of systems. For example, reports have recently been published on brain tissues [11], and intervertebral and temporomandibular joint discs [12,13]. In our studies, rapid preparation of samples enabled collection of compression data from very fresh tissues using an Instron platform. The main thrust of this work is to demonstrate the importance of sample storage as it relates to the interpretation of published mechanical properties of this tissue class, and as a driver for the development of realistic tissue models of mechanical injury to the trachea that can lead to complications such as VAT.
Tissue preparation
Isogenic trachea tissues were obtained from a single six-week-old piglet, sourced from a specific pathogen free (SPF) closed herd. The piglet was sacrificed by intravenous administration of sodium pentobarbitone at a dosage of 0.8 mg per kg body mass. The bronchi were excised and opened with a scalpel to reveal the inner epithelial surface. Full-thickness tissue samples from the trachea, containing all tissue components, were taken as circular discs using an 8-mm diameter biopsy punch.
Appropriately handled explants are extracted from host tissue and then kept at an air-liquid interface at room temperature. Samples sit on a 2% agarose plug held in a plastic well containing phosphate buffered saline (PBS) solution, at the interface between the solution and the air. All samples are tested within 12 h of extraction. As a point of comparison, two other storage protocols were examined. In the first, samples were completely submerged in PBS for 48 h and stored at 4 °C (fridge). In the second, dehydrated samples were stored at −20 °C for two weeks prior to testing (freezer).
Prior to testing, the discs were removed from the PBS solution, lightly dried if necessary, and the relevant dimensions (diameter, thickness, and weight) were measured (Tab. 1). The sample surfaces were often not precisely circular; therefore, three measurements of the diameter (labelled 1, 2, and 3) were made at 60° relative to one another, and the resulting lengths were averaged in the estimation of the sample's surface area for the purposes of calculating engineering stress. Although the samples were obtained using a circular biopsy punch, it is possible that pre-stress of the tracheal tissue contributed to the resulting elliptical shapes. The strain rate of all the experiments shown is 10⁰ s⁻¹.
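A short sketch of this area estimate and the resulting engineering stress (a hedged reading of the procedure; the face is treated as a circle of mean diameter) is:

```python
# Hedged sketch: average three diameters measured 60 degrees apart, then
# compute engineering stress as load over the estimated face area.
import numpy as np

def engineering_stress(load_n: np.ndarray, d1: float, d2: float, d3: float) -> np.ndarray:
    """load_n in newtons, diameters in metres; returns stress in pascals."""
    d_mean = (d1 + d2 + d3) / 3.0
    area = np.pi * (d_mean / 2.0) ** 2   # face treated as circle of mean diameter
    return load_n / area

stress = engineering_stress(np.linspace(0.0, 50.0, 100), 8.1e-3, 7.9e-3, 8.0e-3)
```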
Quasi-static measurements
All data shown were collected with an Instron Model-6655 screw-driven press (Fig. 1). The samples were set between the lubricated faces of the anvil and the load cell. BlueHill software running on an integrated PC was used to input sample dimensions, testing parameters and to record the resulting data. The faces of the anvil and load cell both had an approximate diameter of ∅30 mm and were sterilized with ethanol prior to the experiment. For the experiments shown, a ±50 N load cell was used. This load cell has a published linearity and repeatability of <0.25%. With each load cell used, an initial test was conducted in the absence of any sample up to a safe load threshold to ascertain the load-dependent error in the measurement of sample height, due to the yield of the apparatus. This error correction factor was subtracted from all subsequent measurements.
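A hedged sketch of this correction (interface hypothetical) interpolates the machine deflection measured in the empty-press run and subtracts it at each load:

```python
# Hedged sketch: remove the load-dependent yield of the apparatus using a
# no-sample calibration run (cal_load must be increasing for np.interp).
import numpy as np

def correct_compliance(load: np.ndarray, displacement: np.ndarray,
                       cal_load: np.ndarray, cal_disp: np.ndarray) -> np.ndarray:
    """Subtract the machine deflection, interpolated from the empty-press
    calibration, from each measured displacement at the matching load."""
    machine_deflection = np.interp(load, cal_load, cal_disp)
    return displacement - machine_deflection
```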
Both Microsoft Excel and Matlab software were used to analyze the collected data. The data were plotted and viewed at a fixed magnification, and the point at which the detected load deviated from 0 N was taken to define the full height of the sample. Engineering strain was back-calculated utilising this new figure for 0% strain.
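A hedged sketch of this back-calculation (a fixed load threshold stands in for the visual inspection described above) is:

```python
# Hedged sketch: take the crosshead-to-anvil gap at the first deviation of
# load from 0 N as the full sample height, then back-calculate strain.
import numpy as np

def engineering_strain(load: np.ndarray, gap: np.ndarray,
                       load_threshold: float = 0.01) -> np.ndarray:
    """gap: crosshead-to-anvil separation (m); load_threshold (N) is an
    assumed noise floor replacing the visual, fixed-magnification check."""
    contact = int(np.argmax(load > load_threshold))  # first loaded index
    h0 = gap[contact]                                # full sample height
    return (h0 - gap) / h0
```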
Results and discussion
The averaged engineering stress-strain curves calculated for unconstrained trachea samples, collected as duplicates or triplicates from the same animal, are shown in Figure 2. The studies that we conducted into preferable storage and preparation protocols were based on published conditions used for studying the mechanical properties of trachea (e.g., [6,9,10]), namely, dry storage at −20 °C and buffered storage at 4 °C. The results shown indicate that both methods, but certainly the latter, have deleterious effects on the structure and, consequently, the material properties of biological samples.

[Fig. 1 caption: Three-dimensional schematic image of the key components of the Instron screw-driven press. Biological samples were placed on the lightly-lubricated top surface of the stationary anvil. A load cell, certified to 50 N in the case of these experiments, attached to a movable cross frame is used to compress the sample at a pre-selected strain rate. The apparatus is controlled and data is extracted using BlueHill software running on a Windows PC.]

This conclusion appears to hold for samples obtained from different animals of the same species. The less physiological the conditions in which the sample is kept, the greater the effect on its mechanical properties. Specifically, the act of freezing appears to greatly reduce the material stiffness of these biological samples.
Alterations in cell and tissue properties associated with freezing are generally attributed to physical processes involving water, such as ice formation or dehydration, that can affect cellular integrity and structures such as the extracellular matrix and its associated proteins [13][14][15]. Similar observations were made in a study by Ternifi et al. [16], which reported a large drop in the elastic modulus of kidney tissues that were subjected to freezing. These authors cited cellular crystallization, cell bursting, and small vessel damage as possible causes. It is also probable that freezing has adverse effects on the integrity of the tissue's collagen-rich structure that would in turn affect its biomechanical properties. In a related example, porcine growth plate explants of highly cartilaginous tissues associated with bone were stored in various conditions and studied under compression [17]. Interestingly, freezing of these explants resulted in a reduction of the collagen fibril modulus. This result suggested that the structural organization and/or composition associated with collagen in those samples had been disturbed.
The majority of the studies into the effects of long-term storage on tissue are restricted to the field of bioengineering. This is because it is of paramount importance that both decellularised scaffolds and recellularised, bioengineered replacements can be stored for prolonged periods of time prior to clinical use. For example, Baiguera et al. [18] studied the effects of long-term (1 year) storage on both the microstructure and mechanical properties of human decellularised tracheas. Investigating the impact of storing the samples at 4 °C in phosphate buffered saline on tissue structure required the use of histological techniques. A combination of ultrastructural and connective tissue staining highlighted the formation of pores and large interfibrillar spaces [7]. As could be hypothesised, this structural change had a detrimental effect upon the mechanical behaviours of stored tissues in comparison to fresh ones. The group conducted tensile experiments on samples utilising a universal testing machine. Tissues stored in this way exhibited a lower tensile modulus at both low and high strain (as determined by the knee of the curve on a stress-strain plot), lower tensile strength, and a lower strain at the point of breakage. All of these parameters were reduced by approximately half compared to fresh tissue.
Other studies have attempted to isolate the effect of hydration on tissue mechanical properties. Shahmirzadi and colleagues [19,20] found that decreased hydration not only made the tissue stiffer but also slowed stress relaxation. Those results were obtained using aortic tissue submerged in liquids and required maintaining the samples at a hydration equilibrium. However, the utility of undertaking similar studies on tissues located at air-liquid interfaces is questionable. For example, although respiratory mucosal tissues including the trachea are coated in mucus and other surfactant molecules, full immersion of these tissues in liquids does not represent a normal physiological environment and results in submersion stress that can alter tissue structure and function [21].
In summary, the studies presented here provide a clear indication that the mechanical properties of soft biological tissues alter significantly with short-term and long-term storage. Storing tissues in buffer at non-physiological temperatures degrades the integrity of the tissue, and this effect is further exacerbated by storing the samples in even colder environments, over longer times, whilst dehydrated. These storage conditions (e.g., in buffer and frozen) are typical of conditions found throughout the literature of material studies of soft tissues. Our data suggest that past published studies may require re-interpretation, especially in cases where the information is used to understand tissue function under physiological conditions.

The Centre for Blast Injury Studies acknowledges the support of the Royal British Legion and Imperial College London. The Institute of Shock Physics acknowledges the support of the Atomic Weapons Establishment, Aldermaston, UK and Imperial College London. KAB acknowledges additional support from the Isaac Newton Trust, University of Cambridge and the European Office of Aerospace Research and Development. The on-going technical support of the machine shops at the Cavendish Laboratory and Imperial College London was crucial to this project.
Author contribution statement
KAB and BJB conceived the study. AWT and AW undertook animal studies related to isolation of the porcine tissues. BJB prepared tissue samples and carried out experimental characterization of material properties. Analysis of material properties was primarily carried out by BJB with input from WGP. BJB and KAB interpreted the results and prepared the manuscript.
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
William Faulkner's "That Evening Sun": Multiple Views of Oppression
People throughout history have been subject to discrimination from three distinct perspectives: class, race, and gender. Those who were richer used the lower class as a tool in their service to secure a comfortable life. The white oppressed the black as the other who differed in skin color. The male dominated the female as she was different in gender, lacking the Phallus. The amalgamation of these attitudes toward human beings has been masterfully presented in the story "That Evening Sun" by William Faulkner. The present study, by applying a Marxist approach to this story, aims to analyze how human beings may be oppressed from different aspects.
INTRODUCTION
William Faulkner (1897-1962) was a prominent American novelist born in New Albany, Mississippi. He was not only a novelist but also a short story writer. One of his best short stories is "That Evening Sun," in which he discusses the problems of the blacks, on the one hand, and the problems of women and members of the lower class, on the other. The story is about a low-class, black family which is oppressed by a high-class white family. The concept of oppression can be traced in this short story from three different perspectives: class, race, and gender.
In the discussion part, by applying the Marxist approach to hegemony, along with racial and gender studies, the writer traces the subject of oppression in this short story and shows how this oppression leads to the isolation and "otherness" of those who have been oppressed.
The last part is the conclusion of what is discussed in the discussion section.
Hegemony and Oppression:
Bressler (2007) says the term "hegemony," which has been used by Antonio Gramsci, an Italian Marxist, ... is the assumptions, values, and meanings that shape meaning and define reality for the majority of people in a given culture. Because the bourgeoisie actually control the economic base and establish all the elements that comprise the superstructure-music, literature, art, and so forth-they gain the spontaneous accolades of the working class. The working people themselves give their consent to the bourgeoisie and adopt bourgeois values and beliefs. As sustainers of the economic base, the dominant class enjoys the prestige of the masses and controls the ideology-a term often used synonymously with hegemony-that shapes individual consciousness. This shaping of a people's ideologies is, according to Gramsci, a kind of deception whereby the majority of people forget about or abandon their own interests and desires and accept the dominant values and beliefs as their own. (198) As can be seen, it is the bourgeoisie who define even life for others. In the story, one can see this process of domination: "Father told Jesus to stay away from the house" (Crane, 1952: 354) and "Father said for you [Nancy] to go home and lock the door, and you will be all right" (Crane, 1952: 360).
One should bear in mind that this hegemony or ideology is not just a theory but, as David Hawkes (2003) believes, what Gramsci has in mind is "... a form of praxis" (114) by which the dominant class continues its life. In this story, the black family, as a lower class, is supposed to work in favor of the white family as the dominant, higher class: "Father says for you to come on and get breakfast" (Crane, 1952: 353). M. H. Abrams (1999) believes that this upper class does not dominate the lower class "... by direct and overt means, but by succeeding in making its ideological view of society so pervasive that the subordinate classes unwittingly accept and participate in their own oppression" (151); as is seen in the story, Nancy herself accepts her valuelessness when she says, "I ain't nothing but a nigger" (Crane, 1952: 355).
Race and Oppression:
For many years, the subject of race has served as a convenient pretext in the hands of the powerful and of dominant races, such as the white, to oppress others, especially the blacks; this is why William Faulkner depicts this practice in his works, as R. P. Warren (1965) maintains: "the actual role of the Negro in Faulkner's fiction is consistently one of pathos or heroism. It is not merely, as has been suggested more than once, that Faulkner condescends to the good and faithful servant, the 'white folk's nigger'" (121). Nancy, a Negress, is the main character around whom the story is built. This theme of oppression against the black is seen, again and again, in the works of William Faulkner. In another article, R. P. Warren (1966) elaborates on it when he mentions: In Faulkner's work we find, over and over again, this theme of the crime, the curse, for it is clear that for him the Civil War merely transferred the crime against the Negro into a new set of terms. Even in the works treating the post-bellum period, the Negro remains a central figure-one is even tempted to say the central figure. (257) For a long time in America, the white thought that they were superior to the black, and that belief was the trigger of what we nowadays call racism. "Racism," as defined by L. L. Snyder (1962), assumes inherent racial superiority or the purity and superiority of certain races; also, it denotes any doctrine or program of racial domination based on such assumption. Less specifically, it refers to race hatred and discrimination. Racialism assumes similar ideas, but describes especially race hatred and racial prejudice. (10) It should not be forgotten that those who believe in racial discrimination not only accept the dominant group's superiority but also believe that the opposite race, by nature, lacks something which makes it subhuman. This superiority and inferiority are quite conspicuous in the story, as one sees "Negro women who still take in white people's washing after the old custom, ..." (Crane, 1952: 352) and "I can't have Negroes sleeping in the bedroom" (Crane, 1952: 359), which show that the whites regard the blacks as subhuman.
From the very beginning of racial discrimination, and of slavery as its consequence, the minority races, especially the blacks, were aware of the process of exploitation, as V. Ware (1996) states: "in talking about the social construction of whiteness it is also important to acknowledge that it has certainly not been invisible to those identified as black" (143). This awareness of the black can be seen in this story too, when Jesus says: I can't hang around white man's kitchen. But white man can hang around mine. White man can come in my house, but I can't stop him. When white man want to come in my house, I ain't got no house. I can't stop him, but he can't kick me outen it. He can't do that (Crane, 1952: 354). One can notice here how Jesus, as a black man, is aware of the domination of the white in the society he lives in.
The crisis of race and racial discrimination in a society dominated by one specific race causes the minority to be repressed and rejected, as C. E. Silberman (1954) mentions: Negroes are taught to despise themselves almost from the first moments of consciousness; even without any direct experience with discrimination, they learn in earliest childhood of the stigma attached to color in the United States: "if you're white, you're right," a Negro folk saying goes: "if you're brown, stick around; if you're black, stay back." And they do stay back. (11) As seen in the story, Jesus despises himself because he cannot do anything to keep the whites away from his house, and elsewhere Nancy mentions, "I ain't nothing but a nigger" (Crane, 1952: 355).
In a white-dominated society, the blacks were not able to do or possess whatever they wanted; according to W. J. Wilson (1973), "... the status and behavior of the minority group are defined and redefined with respect to the dominant group" (35). This is seen in the story: when Father tells Jesus to get away from the house, he disappears. It is Father who says whether or not Nancy should come to get the breakfast and when to leave. Moreover, the behavior of white people shapes the behavior and viewpoints of their children, who learn how to act when confronting the blacks; as R. Redfield (1958) believes: For the small children there is, characteristically, no significance in race. There is surely no instinct of racial prejudice or of racial recognition. Children brought up in societies where there are racial prejudices ordinarily begin to share them-or perhaps to rebel against them-at the age when self-consciousness begins. (69) This prejudice is shown in the story when Jason begins to understand what "nigger" means by saying "Jesus is a nigger... Dilsey is a nigger too... I ain't a nigger" (Crane, 1952: 358).
Having done all that the blacks can do for the whites, the whites no longer pay attention to the blacks when they face a problem, as William Faulkner himself mentions: "the point I was making [with 'That Evening Sun'] ... was that this Negro woman who had given devotion to the white family knew that when the crisis of her need came, the white family wouldn't be there" (qtd. in Barnwell 71).
Gender and Oppression:
Throughout history, the subject of gender has always been in the hands of power, and of men too, to oppress and humiliate women. As it is believed, "... gender discrimination is referred to a treatment or act, which based on the individual's gender is seeking to humiliate, reject, belittle, and stereotype them. And in a more extensive concept, gender discrimination is the tendency in which, to glorify a sex, one belittles the opposite one" ((ODV), 2001: 32). And elsewhere A. Lorde (2004) states, "sexism, the belief in the inherent superiority of one sex over the other and thereby the right to dominance" (855). This spirit of rejection and dominance is clear in the story when the Father leaves his wife alone to take Nancy home, which shows his conscious or unconscious act of rejection towards his wife, although she protests against this action of her husband. Elsewhere in the story one notices the domination of Jesus over his wife, who is scared of him after becoming pregnant by a white man: "if Jesus is hid there, he can see us, can't he?" (Crane, 1952: 366).
Moreover, L. Goodman (1996) believes that "women of color were long excluded from higher education, from learning and teaching about creative writing, by a double or even triple oppression: race, class, and gender" (153). Nancy, as a woman and a lower-class Negress, is the most fitting example of this oppression. The language she speaks shows that she is an uneducated person, for example when she says, "who says I is?" (Crane, 1952: 353).
CONCLUSION
The subjects of class, race, and gender have been, and still are, in the hands of the dominant group to oppress others. This problem has been shown by many writers in their works of art. One of these writers is William Faulkner, who in his short story "That Evening Sun" shows how one group or family, by exploiting class, race, and gender, causes another group to be oppressed. This oppression causes the oppressed group to be isolated from other groups and leads them to become an "other" compared with the dominant social group. This sense of "otherness" gradually causes them to lose their sense of humanity and to consider themselves objects in the hands of the dominant group, as is seen with Nancy and Jesus as examples of this "other" group.
After all, it should not be forgotten that, according to A. Amoko (2006), "in the practice of everyday life, race continues to be one of the principal ways by which we identify each other-and ourselves" (129); so it should not be grounds for the superiority of one group, race, or gender over another.
Cognitive function and advanced kidney disease: longitudinal trends and impact on decision-making
Background: Cognitive impairment commonly affects renal patients, but little is known about the influence of dialysis modality on cognitive trends or the influence of cognitive impairment on decision-making in renal patients. This study evaluated cognitive trends amongst chronic kidney disease (CKD), haemodialysis (HD) and peritoneal dialysis (PD) patients. The relationship between cognitive impairment and decision-making capacity (DMC) was also assessed. Methods: Patients were recruited from three outpatient clinics. Cognitive function was assessed 4-monthly for up to 2 years, using the Montreal Cognitive Assessment (MoCA) tool. Cognitive trends were assessed using mixed model analysis. DMC was assessed using the MacArthur Competence Assessment Tool (MacCAT-T). MacCAT-T scores were compared between patients with cognitive impairment (MoCA <26) and those without. Results: In total, 102 (41 HD, 25 PD and 36 CKD) patients were recruited into the prospective study. After multivariate analysis, the total MoCA scores declined faster in dialysis compared with CKD patients [coefficient = −0.03, 95% confidence interval (95% CI) = −0.056 to −0.004; P = 0.025]. The MoCA executive scores declined faster in the HD compared with PD patients (coefficient = −0.12, 95% CI = −0.233 to −0.007; P = 0.037). DMC was assessed in 10 patients. Those with cognitive impairment had lower MacCAT-T scores compared with those without [median (interquartile range) 17.4 (16.3–18.4) versus 19 (17.9–19.6); P = 0.049]. Conclusions: Cognition declines faster in dialysis patients compared with CKD patients and in HD patients compared with PD patients. Cognitive impairment affects DMC in patients with advanced kidney disease.
Introduction
Older people are the fastest growing cohort on dialysis. Although cognitive impairment is more common in patients with CKD than in the general population, it remains poorly recognized clinically. The REasons for Geographic and Racial Differences in Stroke (REGARDS) Study reported an 11% increase in the risk of cognitive impairment for every 10 mL decrease in estimated glomerular filtration rate (eGFR) below 60 mL/min/1.73 m², with a 20% prevalence in those with eGFR <20 mL/min/1.73 m² [1]. In the HD population, the prevalence of cognitive impairment approached 70% in one cross-sectional study, but only 2.9% of the studied population had a prior clinical diagnosis [2]. Similar prevalence rates have been reported in patients on PD [3]. Executive function has been shown to be the predominant cognitive domain affected. It is often impaired before global cognitive dysfunction becomes apparent [4].
Although cerebrovascular disease is thought to underpin cognitive impairment in patients with advanced kidney disease, a complex interaction between vascular-, nephrogenic-and dialysis-related factors has been proposed as a pathogenetic basis [5]. The potential role for dialysis in cognitive impairment is supported by transient changes in cognition that occur during dialysis [6] and improvements in cognitive deficits after transplantation [7,8]. Yet, direct comparisons between predialysis and dialysis patients are lacking. The influence of dialysis modality on cognitive function is also unclear. A large retrospective study of 121 623 patients found that those on PD had a lower 5-year cumulative risk of dementia compared with those on HD [9]. Small cross-sectional studies have, however, reported similar cognitive performances in HD and PD patients [10]. Prospective studies that evaluate variation in longitudinal cognitive trends between dialysis modalities are lacking.
Cognitive impairment is associated with adverse outcomes, not least of which is an impaired capacity to make decisions. Terawaki et al., in a pilot study of 26 patients with CKD 5, evaluated capacity to consent to treatment and cognitive function using the MacArthur Competence Assessment Tool (MacCAT-T) and the Mini-Mental State Examination (MMSE), respectively. They reported poor performances in the domains of understanding, reasoning and appreciation. In addition to expression of choice, these are recognized as the four domains of DMC. These poor performances were attributed to attentional deficits found on the MMSE [11]. The specific influence of executive dysfunction, commonly affected in renal patients, on DMC has not been evaluated.
This observational study aimed to compare cognitive trends between dialysis and CKD patients and subsequently between HD and PD patients. The relationship between cognitive impairment and DMC was also evaluated.
Materials and methods
Patients were recruited from three outpatient clinics (one HD, one PD and one CKD) at Imperial College Healthcare NHS Trust, between November 2013 and October 2015. Ethical approvals were obtained from the West of Scotland and London - Fulham Research Ethics Committees: references 13/WS/0241 and 14/LO/2223. All participants gave written informed consent.
Patient selection and recruitment
The study cohort was obtained by convenience sampling. It consisted of patients who were enrolled into a prospective cohort study assessing cognitive trends and a small group of patients who participated in a pilot study assessing DMC. For the prospective cohort study, eligible patients were over 55 years of age and free from hospital admission for at least 30 days. Eligible dialysis patients had a dialysis vintage of at least 3 months, while CKD patients had an eGFR of 30 mL/min/1.73 m² or less. Patients with a life expectancy of <6 months or significant cognitive impairment, as well as those unable to understand English, were excluded from the study. The selection criteria were the same for those participating in the decision-making pilot, except for a lower age threshold of over 40 years.
Study assessment
Study assessments were performed at routine clinic visits in the outpatient department, usually after patients' clinical assessment. For the HD patient, these clinic visits coincided with their midweek dialysis sessions, prior to the start of dialysis. For those enrolled in the prospective cohort study, follow-up assessments were carried out at subsequent clinic visits, every 4 months for up to 2 years.
Demographic and clinical characteristics were collected at baseline, from medical records and during the assessment. Comorbidity burden was evaluated using the Stoke-Davies comorbidity score [12]. This is a validated score that assigns a value of 1 for the presence of each of the following: diabetes mellitus, ischaemic heart disease, peripheral vascular disease, left ventricular dysfunction, malignancy, systemic collagenous vascular disorders and other diagnoses that impact on survival.
Cognitive function was assessed using the Montreal Cognitive Assessment (MoCA), which assesses cognitive function in seven domains with scores ranging from 0 to 30. It has advantages over the widely used MMSE because it assesses executive function, a domain that is commonly affected in patients with CKD, and it has been shown to be sensitive to changes in cognition in patients on dialysis. A score <26 is suggestive of cognitive impairment, although a cut-off of 24 has been suggested for HD patients [13].
The Patient Health Questionnaire-9 (PHQ-9) was used to evaluate depressive symptoms. It is a 9-item questionnaire that evaluates symptoms of depression over the preceding 2 weeks. Scores range from 0 to 27, with higher scores indicative of more severe depression [14]. A score above 5, 10 and 15 are indicative of mild, moderate and severe depression, respectively.
MacCAT-T was used to evaluate capacity to consent to treatment as a surrogate for decision-making abilities. This semistructured interview, which is considered to be the gold standard for assessing capacity to consent, was administered by a trained researcher. The interview was based on proposed treatment options discussed at the preceding clinic visit. The four recognized domains of mental capacity (understanding, appreciation, reasoning and expression of choice) were evaluated. The MacCAT-T is not designed to provide a cut-off score that designates a lack of capacity; rather, it assists with what is ultimately a clinical judgement.
Statistical analysis
All analyses were carried out using the SPSS programme (version 22). Continuous variables were expressed as mean and standard errors (SE) for parametric data, and as median and interquartile ranges (IQR) for non-parametric data. Categorical variables were expressed as percentages.
In the prospective cohort study, the baseline cognitive scores were compared between the HD, PD and CKD cohorts, using the Kruskal-Wallis and Fisher's exact tests where appropriate. Generalized linear mixed model (GLMM) analysis was used to evaluate changes in cognitive scores over time. The outcome variables of interest were the MoCA score and the MoCA executive score. As the cognitive scores followed a skewed distribution, a gamma error structure was used. Multivariable models were used to compare cognitive trends first between the dialysis and CKD cohorts and subsequently between the HD and PD cohorts.
To evaluate influence of cognitive function on decision-making in patients with advanced kidney disease, the median MacCAT-T scores were compared between patients with cognitive impairment and those without, using the Mann-Whitney test. Patients were deemed to have cognitive impairment if the MoCA score was <26. Spearman's correlation was used to evaluate the relationship between the cognitive domains assessed by MoCA and the four MacCAT-T domains.
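To make the modelling concrete, the sketch below illustrates a longitudinal fit of this kind in Python. It is a simplified stand-in, not the study's actual code: statsmodels offers no gamma-family mixed model, so a Gaussian linear mixed model replaces the gamma GLMM described above (R's lme4::glmer would match more closely), the data frame is synthetic, and the variable names (moca, months, dialysis, patient_id) are illustrative.

```python
# A minimal sketch of the longitudinal comparison, under the assumptions above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Long-format data: repeated 4-monthly MoCA scores for each patient.
rows = []
for pid in range(60):
    dialysis = int(pid < 40)                # 40 dialysis, 20 CKD patients
    intercept = 25 + rng.normal(0, 1.5)     # between-patient variation
    for visit in range(6):
        months = 4 * visit
        slope = -0.01 - 0.03 * dialysis     # dialysis declines ~0.03/month faster
        rows.append(dict(patient_id=pid, dialysis=dialysis, months=months,
                         moca=intercept + slope * months + rng.normal(0, 0.8)))
df = pd.DataFrame(rows)

# Random intercept per patient; the months:dialysis interaction estimates the
# difference in the rate of cognitive change between the two cohorts. The
# paper's model additionally adjusted for age, ethnicity and education.
fit = smf.mixedlm("moca ~ months * dialysis", df, groups=df["patient_id"]).fit()
print(fit.params["months:dialysis"])        # recovers roughly -0.03 here
```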
Results
In total, 198 patients were eligible for the prospective cohort study at baseline. A total of 39 patients refused consent (19.6%), 16 moved out of area (8.1%), 9 patients were transplanted (4.5%), 11 patients died prior to enrolment (5.6%) and 1 patient was discharged from clinic (0.5%). In total, 20 patients could not be approached due to lack of regular clinic attendance; 102 patients were eventually recruited; 10 other patients participated in the pilot interviews, assessing cognitive function and the capacity to consent to treatment.
Of the 102 participants, 41 were on HD, 25 on PD and 36 were CKD patients. The median follow-up period was 12 months (interquartile range (IQR) 6-18 months). Table 1 shows the baseline characteristics for the study cohort. The case mix was similar between study participants and nonparticipants, in terms of age, gender and ethnicity. The HD cohort had a longer dialysis vintage compared with the PD cohort (P < 0.001). There was also a trend towards a lower mean age in the HD cohort (P = 0.068). The study cohort was predominantly male (70%) and 72.5% of the study cohort had been educated for at least 12 years. There were no significant differences in gender, ethnicity or level of education between HD, CKD and PD participants. For the CKD group, the mean baseline eGFR was 17 ± 0.9 mL/min/1.73 m². The eGFR did not change significantly during follow-up [estimated change in eGFR/year = −1.2 (−6.9 to 4.7); P = 0.45].
Patient characteristics
The prevalence of diabetes and ischaemic heart disease in the cohort was 53% and 46%, respectively. Diabetic nephropathy was the most common cause of renal failure (57%). The comorbidity burden did not differ significantly between HD, PD and CKD participants.
Baseline study measures
Overall, 60.5% of the study cohort met the criterion for cognitive impairment (MoCA score <26), while 19.6% met the criteria for depression (PHQ-9 >9). Of those with cognitive impairment, 24% also met the criteria for depression. There were no significant differences in cognitive or depression scores at baseline between the HD, PD and CKD cohorts (Table 2).
Effect of dialysis on cognitive trends
GLMM analysis was used to compare changes in the total MoCA score over time between dialysis and CKD patients. In univariate analysis, age, ethnicity and years of education were significantly associated with the MoCA scores, while comorbidities, dialysis vintage and laboratory parameters were not. After adjusting for these variables, the total MoCA scores declined faster in the dialysis cohort compared with the CKD cohort (coefficient = −0.03, 95% CI = −0.056 to −0.004; P = 0.025; Table 3). Figure 1 shows the profile plot of estimated MoCA scores over time using the model in Table 3. There was no significant difference in the rate of change of the MoCA executive score between the dialysis and CKD cohorts (coefficient = −0.07; P = 0.10).
Effect of dialysis modality on cognitive trends
To compare cognitive trends between HD and PD, the mixed model analysis was repeated in the dialysis cohort (n = 66; HD = 41, PD = 25), adjusting for age, ethnicity, years of education and dialysis vintage. There was no significant difference in the rate of change in the total MoCA scores between PD and HD patients. The MoCA executive score did, however, decline more rapidly in the HD patients compared with PD patients (Table 4, Figure 2), after adjusting for the same variables.
Cognitive function and DMC in renal disease
In total, 10 patients participated in a pilot study evaluating the relationship between cognitive function and DMC. This cohort consisted of five PD, three CKD and two HD patients. All patients were male, with a mean age of 54.3 years; seven patients had a degree of cognitive impairment (MoCA <26). Total MacCAT-T scores were lower in those with cognitive impairment than in those with normal cognitive function [median 17.4 (16.3–18.4) versus 19 (17.9–19.6); P = 0.049]. There was no significant correlation between the four domains of the MacCAT-T [understanding (r = −0.13; P = 0.71), reasoning (r = 0.32; P = 0.37), appreciation (r = 0.36; P = 0.34), expression of choice (r = −0.19; P = 0.60)] and executive function.
Discussion
In this study, we hypothesized that dialysis would be associated with a decline in cognitive function compared with CKD patients. The results suggest that global cognitive function declines over time for patients on dialysis compared with CKD patients. We had also anticipated that cognitive function would be better preserved in patients on PD compared with those on HD. Global cognitive trends did not differ between the HD and PD cohorts. However, executive function was better preserved over time in the PD group compared with those on the HD. It is well recognized that the risk of cognitive impairment increases as renal function declines [15,16]. In addition, several studies have shown that cognitive performance is poorer in dialysis patients compared with that in matched healthy controls [2]. Dialysis potentially exerts an independent effect on cognitive function in patients with advanced CKD. This is supported by the improvement in cognitive function in dialysis patients following transplantation [8]. Dialysis is thought to affect cognitive function by a variety of mechanisms. However, supportive evidence for these mechanisms is limited. Observational studies have so far failed to show an association between cognitive function and small solute clearance [17] or dialysis frequency [18]. Data on the influence of intradialytic hypotension (IDH) on cognition are conflicting. Kurella et al. found no significant relationship between IDH and cognitive impairment in HD patients enrolled in the Frequent Haemodialysis Network (FHN) trial [4]. More recently, IDH has been directly linked with ischaemic brain injury and potentially, cognitive impairment in HD patients [19].
The faster decline in executive function in HD patients compared with that in PD reported in this study is noteworthy, despite the lack of significant differences in global cognitive trends. It is consistent with studies reporting a lower cumulative incidence of dementia (predominantly vascular in origin) in PD compared with HD patients [9]. Kurella Tamura et al. reported a 19% prevalence of isolated executive dysfunction (executive dysfunction despite normal global cognitive function) in a cross-sectional study of 383 HD patients [4]. It is therefore plausible that a decline in executive function predates global cognitive decline. HD has been shown to exert injurious ischaemic effects on the brain. These features are far less common in PD patients and may explain the differences reported in this study.
Patients who met the criteria for cognitive impairment (MoCA score <26) had lower capacity assessment scores. In Terawaki et al.'s study of 26 predialysis patients [11], attentional deficits on the MMSE correlated significantly with poor understanding and reasoning evaluated by MacCAT-T. While cognitive function was linked with MacCAT-T scores in broad terms, there were no associations between specific cognitive domains and the four domains assessed by MacCAT-T. The mean scores were lower in Terawaki et al.'s study compared with those in this pilot study (understanding: 3.72 ± 1.11 versus 5.76 ± 0.08; appreciation: 2.88 ± 0.88 versus 3.88 ± 0.11; reasoning: 4.30 ± 2.11 versus 6.44 ± 0.41). The exclusion of patients with significant cognitive impairment may have contributed to these differences. In addition, the sample was too small to detect significant associations. The results may have been confounded by patient characteristics, as the participants were all male and predominantly on PD.
There are other noteworthy limitations. The study was single centre and observational in nature. As such, causality cannot be established and the findings are unlikely to be generalizable. The sample size was not determined by a power calculation. Due to convenience sampling, one cannot exclude the possibility of selection bias. In addition, the effect estimates from the mixed model analysis were small. Longer follow-up would be required to detect clinically important differences in the cognitive trends.
Nevertheless, the findings suggest that dialysis and possibly dialysis modality exert an influence on cognitive function. Future research should aim to identify dialysis techniques that minimize the effect on cognitive function. For example, a recent randomized clinical trial in HD patients has shown that dialysate cooling may reduce the burden of white matter disease [20]. As white matter disease has been recently linked with cognitive impairment [19], dialysate cooling may be beneficial for cognitive function in HD patients. There is also a role for regular screening in dialysis patients, to identify and investigate otherwise unrecognized cognitive impairment. There are implications for treatment compliance and with severe impairment, daily functioning. The potential impact of cognitive impairment on DMC is also relevant to clinical practice. Dialysis education could be adapted to ensure understanding in affected patients and family members. There is also a role for advance care planning in patients with significant cognitive impairment.
In summary, this study suggests that cognitive function declines faster in dialysis patients compared with similar patients with advanced CKD not on dialysis and more so in HD patients compared with those on PD. Cognitive impairment has an impact on DMC in patients with advanced kidney disease. Larger studies are needed to corroborate these findings. Meanwhile, cognitive screening should be incorporated into routine clinical practice in patients with advanced kidney disease.
Serum Magnesium Levels in Second and Third Trimesters of Pregnancy in Patients That Developed Pre-Eclampsia and Feto-Maternal Outcome
Introduction: Pregnancy is a physiological process that may be complicated by a number of clinical conditions. Gestational diabetes and pre-eclampsia are known complications of pregnancy. Pre-eclampsia is a disease of hypotheses in which the pathogenesis is yet to be fully explained. The role of magnesium in the pathogenesis of pre-eclampsia has been suggested by studies and is being investigated all over the world. This study aimed to compare serum magnesium levels in pre-eclampsia and control groups from the second trimester of pregnancy and assessed maternofetal outcome. Materials and Methods: This was a nested case-control study in which three hundred and sixty (360) consenting normal pregnant women were enrolled. These women were recruited in their second trimester of pregnancy. Blood samples for serum magnesium estimation were obtained from subjects and controls at recruitment and after development of pre-eclampsia. Results: Thirty-seven pregnant women that developed pre-eclampsia were nested as cases and were matched with 37 controls (apparently healthy pregnant women). The mean serum magnesium at recruitment was 0.75 ± 0.028 mmol/l (cases) and 0.76 ± … mmol/l (controls); serum magnesium levels remained normal until the development of the disease. Serum level of this biomarker affects maternofetal outcome significantly.
Introduction
Pre-eclampsia develops after 20 weeks of gestation [1]. Despite knowledge of the predisposing factors for its development, when the process starts and what initiates it are still poorly understood. There are reports that the process of developing pre-eclampsia starts in the first half of pregnancy, and that the women who will develop pre-eclampsia can be predicted from the first half of pregnancy [2] [3]. Preventive measures may be feasible if the trend is detected early; this would, to a great extent, reduce the burden of pre-eclampsia in our environment. The prevalence of pre-eclampsia is 7% - 18% in developing countries [4] [5] [6].
Some studies have reported that changes in serum levels of magnesium observed in pre-eclamptic patients may contribute to its pathogenesis [7] [8] [9] [10]. Meanwhile, a study in Nigeria reported no significant difference in serum magnesium level of women with pre-eclampsia [11].
The possible role of magnesium deficiency in the genesis of pre-eclampsia, preterm delivery and low birth weight babies continues to be the subject of considerable debate. Therefore, there is a need for this study to observe serum magnesium levels early in pregnancy in our environment as they relate to the development of pre-eclampsia and its other effects on maternal and fetal health.
In view of the above, this study was designed to compare serum levels of magnesium in patients that developed pre-eclampsia and those that did not, and to assess the effect(s) on maternofetal outcome (which includes development of pre-eclampsia, preterm delivery, low birth weight and the need for special care baby unit (SCBU) admission).
Methodology
This was a nested case-control study, conducted in Osogbo over a period of eight months, comparing women who developed pre-eclampsia with those who did not. Three hundred and sixty (360) apparently normal pregnant women in their second trimester (17 - 23 weeks of gestation) were recruited at booking and during routine antenatal clinic visits into the study after obtaining written informed consent.
Sample size was determined using a statistical calculator for comparing two means, considering a mean serum magnesium level of 0.58 mmol/L in pre-eclamptic women compared to 0.73 mmol/L in healthy pregnant women and a standard deviation for the outcome of interest (serum magnesium concentration in patients with pre-eclampsia) of 0.17 from a previous study [16], at a power of 90% and a 5% significance level. The minimum sample size calculated was 30 per group. Thirty-seven (37) pregnant women that developed pre-eclampsia were compared with thirty-seven (37) age- and gestational-age-matched controls. Exclusion criteria were chronic hypertension, chronic renal disease, pre-gestational diabetes, sickle cell anaemia, multiple pregnancy, and patients on magnesium and/or calcium supplements. Patients who developed pre-eclampsia before recruitment were also excluded.
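As a worked illustration of this calculation, the following sketch reproduces the standard normal-approximation formula for comparing two means with the values quoted above. The exact calculator the authors used is not stated, so the figure here (about 27 per group) is only expected to approximate their stated minimum of 30 per group before any rounding or attrition allowance.

```python
# A minimal sketch of the two-means sample-size arithmetic, assuming the
# standard formula n = 2 * ((z_alpha + z_beta) * sd / delta)^2 per group.
from scipy.stats import norm

mu_cases, mu_controls = 0.58, 0.73   # mean serum magnesium (mmol/L)
sd = 0.17                            # standard deviation from the cited study
alpha, power = 0.05, 0.90

z_alpha = norm.ppf(1 - alpha / 2)    # ~1.96 for a two-sided 5% test
z_beta = norm.ppf(power)             # ~1.28 for 90% power

n_per_group = 2 * ((z_alpha + z_beta) * sd / (mu_cases - mu_controls)) ** 2
print(round(n_per_group))            # ~27; the authors report a minimum of 30
```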
Venous blood sample (5 mls) was drawn from pregnant women at recruitment and after development of pre-eclampsia using routine aseptic procedure of phlebotomy.
The samples were left undisturbed for between 30 and 60 minutes to clot and retract. Subsequently these were centrifuged at 3000×g for 10 minutes; the supernatant (serum) was then extracted into another plain specimen bottle. All the batches of serum samples were kept frozen (at temperature of −20˚C) till the time of analysis. Serum magnesium was analyzed by the use of Atomic Absorption Spectrophotometer [18].
Statistical analysis was done using statistical package for social science (SPSS) version 23. Results were tested for statistical significance using the student t-test, chi-square test and multivariate analysis. The significant value was put at 5%.
Ethical clearance for this study was obtained from the research and ethical review committee of the LAUTECH Teaching Hospital Osogbo (PROTOCOL NUMBER LTH/EC/2017/02/292).
Results
Thirty-seven patients (10.27%) developed pre-eclampsia out of the 360 women recruited for this study. The majority (48%) of the study population were between the ages of 20 and 29 years; most were of Yoruba ethnicity (90%); Christians accounted for 58%, followed by Muslims (41%) and traditional worshippers (1%).
Other demographic characteristics of the study population are as shown in Table 1.
The study showed no statistically significant difference in the mean age of the case and control groups (30.00 ± 5.06 and 30.08 ± 5.20 years, respectively; P = 0.946). There were also no statistical differences between the body mass index, estimated gestational age at recruitment, mean blood pressure at recruitment and social class of the study groups (P > 0.05). The mean systolic and diastolic blood pressures became statistically significantly different between the two groups at the point of diagnosis of pre-eclampsia and remained so at delivery (P < 0.001), as shown in Table 2.
Discussion
Pre-eclampsia is referred to as the "disease of theories", making its prevention and management an ongoing global challenge [6]. Its etiology is yet to be elucidated; some studies have reported that changes in levels of blood metals, including magnesium, observed in pre-eclamptic patients may contribute to its pathogenesis [7] [8] [9] [10].
The reduced serum magnesium level observed in patient with pre-eclampsia in our study is in agreement with reports from various studies [14] [15] [19].
The study of Adekanle et al. [16] also observed significantly low serum magnesium levels in patients with pre-eclampsia. In the present study, patients were recruited early in their pregnancy, at a time when they had not developed pre-eclampsia. At recruitment there was no significant difference in the serum magnesium levels of the study groups. However, both case and control groups had lower serum magnesium at diagnosis than at recruitment. This study shows that apparently healthy pregnant women also have a decrease in serum magnesium level as pregnancy advances. This is in agreement with previous studies [13] [15] [17] and is explained by the increased demand from the mother and growing fetus, increased renal excretion through a raised glomerular filtration rate, and haemodilution, which is seen more in the third trimester of pregnancy.
Pre-eclampsia is diagnosed in a pregnant woman with onset of hypertension (systolic and diastolic blood pressure of ≥140 and 90 mmHg, respectively on two occasions, at least 6 hours apart and urine protein of ≥300 mg in 24 hour urine sample, or a dipstick of ≥2+), this usually occurs above 20 weeks of gestation. Serum magnesium level along with calcium has roles to play in regulation of blood pressure through modification of vascular system [12] [20].
Magnesium is an intracellular ion that is important for cellular metabolism such as muscle contractility and neuronal activity. A proper balance between it and calcium is vital to regulation of blood pressure, while calcium enables the blood vessels to contract, magnesium is required for the vascular relaxation [12]. Magnesium acts as calcium channels blocker by opposing calcium dependent arterial constriction thus antagonizes increase in intracellular calcium concentration leading to vasodilatation [12] [20]. The vasodilating effect of magnesium aside increase in blood flow, has been shown to prevent pre-eclampsia/eclampsia by selectively dilating cerebral vasculature and relieving cerebral spasm associated with pre-eclampsia [12].
The majority of patients in this study were of low parity. This finding is not different from reports in the literature [14] [15] [16] [19]. It is known that women who are carrying a pregnancy for the first time are more prone to develop pre-eclampsia. However, our study further observed that more women in their second pregnancy developed pre-eclampsia, contrary to the higher prevalence reported in first pregnancies. Women with pre-eclampsia in their first pregnancy have an increased risk of having their next pregnancy complicated as such [21]. Also, a woman with an increased number of risk factors for developing pre-eclampsia can have the disease even in her subsequent pregnancies, especially when the inter-pregnancy interval is short (less than 18 months) [21] [22]. Contrary to the belief that women of low socio-economic status have a higher risk of developing pre-eclampsia [15] [23], our study observed no significant difference in the social classes of the study groups.
The low serum level of magnesium at diagnosis showed a significant relationship with preterm delivery, which corroborates the finding of Okunade et al. [24]; however, Parizadeh and co-workers found no significant association between low serum magnesium and preterm delivery [25]. There were also significant relationships between low serum levels of magnesium and low birth weight as well as SCBU admission. These findings may be related to the intervention offered to patients with pre-eclampsia, which is the delivery of the fetus, because the only known curative treatment is the delivery of the placenta; this is an important cause of preterm delivery, leading to low birth weight and the need for SCBU admission. Further analysis to control for the effect of pre-eclampsia on preterm delivery, low birth weight and the need for SCBU admission shows that these findings were only indirectly related to low serum magnesium levels in mothers who developed pre-eclampsia. While low serum magnesium level is directly linked with the development of pre-eclampsia in this study, preterm delivery, low birth weight and the need for SCBU admission appeared to be direct effects/complications of pre-eclampsia and its management rather than of low serum magnesium. The perinatal deaths and stillbirths observed in our study could be due to causes other than pre-eclampsia or low serum magnesium level; they were observed in both groups. However, it is not out of place to think that the stillbirth observed in the case group was due to pre-eclampsia or low serum magnesium levels and that observed in the control group was due to other causes. Gibbins et al. [26] observed in their study that placental insufficiency is often implicated in stillbirth, especially in women with pre-eclampsia.
Conclusion
Findings from this study revealed that hypomagnesaemia appears to be a complication of pre-eclampsia. Serum levels of magnesium were normal until the development of the disease. Serum level of this biomarker affects maternofetal outcome significantly; however further studies are needed to establish direct relationship or otherwise between hypomagnesaemia and these fetal outcomes.
Deep 610-MHz GMRT observations of the Spitzer extragalactic First Look Survey field - I. Observations, data analysis and source catalogue
Observations of the Spitzer extragalactic First Look Survey field taken at 610 MHz with the Giant Metrewave Radio Telescope are presented. Seven individual pointings were observed, covering an area of 4 square degrees with a resolution of 5.8'' x 4.7'', PA 60 deg. The r.m.s. noise at the centre of the pointings is between 27 and 30 microJy before correction for the GMRT primary beam. The techniques used for data reduction and production of a mosaicked image of the region are described, and the final mosaic, along with a catalogue of 3944 sources detected above 5 sigma, are presented. The survey complements existing radio and infrared data available for this region.
INTRODUCTION
The Spitzer First Look Survey was the first major scientific program carried out by the Spitzer Space Telescope (Werner et al. 2004), observing for ∼110 hours of Director's Discretionary Time in December 2003. The aim of the extragalactic component of the First Look Survey (xFLS) was to study a region with low Galactic background to a significantly deeper level than any previous large-area infrared survey, in order to accurately characterise the dominant infrared source populations. Observations were made over four square degrees centred on 17h 18m 00s, +59° 30′ 00″ (J2000 coordinates, which are used throughout this paper) with two instruments - the Infrared Array Camera (IRAC, Fazio et al. 2004), using the 3.6, 4.5, 5.8 and 8 µm bands, and the Multiband Imaging Photometer for Spitzer (MIPS, Rieke et al. 2004) at 24, 70 and 160 µm. The data have been processed and maps and source catalogues are currently available (IRAC - Lacy et al. 2005; 24 µm - Fadda et al. 2006; 70 and 160 µm - Frayer et al. 2006).
Complementary observations have been taken at a range of wavelengths to fully exploit the new deep infrared data. Deep optical surveys of the region have been completed in the R-band (KPNO 4 m, Fadda et al. 2004), and in the u*- and g'-bands (CFHT 3.6 m, Shim et al. 2006). The xFLS region was covered by the early data release of the Sloan Digital Sky Survey (Stoughton et al. 2002). A further redshift survey targeting selected 24-µm sources was made with the MMT/Hectospec fiber spectrograph (Papovich et al. 2006), and a total of 1587 redshifts are publicly available.
There are two existing 1.4-GHz radio surveys of the xFLS region, made with the Very Large Array (VLA) in B-array configuration, and the Westerbork Synthesis Radio Telescope (WSRT). The VLA survey covered ∼4 deg² with a resolution of 5″, to a 1σ depth of ∼23 µJy (Condon et al. 2003) and contains 3565 sources, while the WSRT survey (Morganti et al. 2004) concentrated on ∼1 deg² of the xFLS region to a depth of 8.5 µJy beam⁻¹ with a resolution of 14″ × 11″, PA 0°, and contains 1048 sources.
There is a tight correlation between the far-infrared and radio luminosities of galaxies, which has been known for many years (see e.g. Helou et al. 1985; Condon 1992). The correlation applies to both the local (Hughes et al. 2006) and global (Murphy et al. 2006) properties of galaxies in the local and distant universe (Garrett 2002; Gruppioni et al. 2003; Chapman et al. 2005; Luo & Wu 2005). There have been two previous studies of the IR/radio correlation in the xFLS region (Appleton et al. 2004; Frayer et al. 2006). Appleton found no evidence for a variation of the correlation with redshift, while Frayer, using more sensitive infrared data, found a decrease in the infrared/radio flux density ratio with z. To study the luminosity variation it is necessary to accurately k-correct the observed radio flux densities to their rest-frame values using the spectral index α (where α is here defined so that flux density scales with frequency as S ∝ ν^−α). Both previous works in this region assumed that all sources have the same radio spectral index (taken to be α = 0.7 and 0.8 respectively), but in order to perform this correction accurately it is necessary to have a detection in at least two radio frequencies.
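As a concrete illustration of the two-frequency approach (not code from this survey), the following sketch computes a spectral index from flux densities measured at 610 MHz and 1.4 GHz under the convention S ∝ ν^−α, and scales a flux density between the two frequencies; the function names and example numbers are purely illustrative.

```python
# Illustrative helpers for the two-frequency spectral-index arithmetic,
# assuming the paper's convention S proportional to nu^(-alpha).
import math

def spectral_index(s1, nu1, s2, nu2):
    """alpha from flux densities s1, s2 measured at frequencies nu1, nu2."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

def scale_flux(s, nu_from, nu_to, alpha):
    """Predicted flux density at nu_to given a measurement at nu_from."""
    return s * (nu_to / nu_from) ** (-alpha)

# A source of 23 uJy at 1.4 GHz with alpha = 0.8 appears at ~45 uJy at
# 610 MHz, the conversion used later when comparing with the VLA survey depth.
print(scale_flux(23.0, 1400.0, 610.0, 0.8))        # ~44.7 (uJy)
print(spectral_index(44.7, 610.0, 23.0, 1400.0))   # ~0.8
```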
The large amount of existing data on this field makes it a good candidate for a further deep radio survey. We have imaged the xFLS region at 610 MHz with the Giant Metrewave Radio Telescope (GMRT), reaching an r.m.s. noise level of around 30 µJy before primary beam correction. Our survey has a comparable resolution (5.8″ × 4.7″ compared with 5″) to the VLA 1.4-GHz survey, allowing a direct comparison between source properties at the two wavelengths to be made. Our observations are deeper than the VLA survey over most of the image, for a typical radio source with α = 0.8.
Section 2 describes the observations, and data reduction techniques. In Section 3 we present the mosaicked image and source catalogue. We show a selection of extended objects visible in our mosaic, and discuss the artefacts seen near bright sources. In Section 4 we compare our catalogue with the VLA survey, in order to test the positional accuracy of our data. Further analyses of the data, including detailed comparisions with Spitzer data will be made in forthcoming papers. Appendix A presents some technical details of corrections made to the observed GMRT data, and Appendix B gives details of a correction made to the integrated flux densities of sources.
OBSERVATIONS AND DATA REDUCTION
Observations of the xFLS region were made at 610 MHz with the GMRT (Ananthakrishnan 2005), located near Pune, India, over four days in March 2004. Seven pointings were observed, in a hexagonal grid centred on 17h 18m 00s, +59° 30′ 00″, as shown in Fig. 1. The pointings were spaced by 43 arcmin, approximately the half-power beamwidth of the GMRT primary beam at 610 MHz, which gives nearly uniform noise coverage over most of the xFLS region.
Observations of 3C48 or 3C286 were made at the beginning and end of each observing session in order to calibrate the flux density scale. The AIPS task SETJY was used to calculate flux densities at 610 MHz of 29.4 and 21.1 Jy respectively, using an extension of the Baars et al. (1977) scale. Two sidebands were observed, each of 16 MHz with 128 spectral channels, and a 16.9 s integration time was used. Each field was observed for ∼200 min over four 10 hour observing sessions made between March 23rd and 27th. The observations consisted of interleaved 20 min scans of each pointing, in order to improve uv coverage. A nearby phase calibrator, J1634+627, was observed for three minutes between the scans of each pointing to monitor any time-dependent phase and amplitude fluctuations of the telescope. The measured phase of J1634+627 varied smoothly between observations, with a typical variation between calibrator observations of less than 40° for the long baseline antennas and below 10° for the short baseline antennas.
Standard AIPS tasks were used to flag bad baselines, antennas, channels that were suffering from narrow band interference, and the first and last 16.9 s integration period of each scan. A bandpass correction was applied using the flux calibrators, for each antenna. A pseudo-continuum channel was then made by combining the central ten channels together, and an antenna-based amplitude and phase calibration created using the observations of J1634+627. This calibration was applied back to the original 128 channel data set, which was compressed into 11 channels, each containing ten of the original spectral channels (so the first and last few channels, which tended to be the noisiest, were discarded). The small width of these new channels ensures that bandwidth smearing is not a problem, and all 11 channels could be individually inspected to remove additional interference. After the flagging and calibration was complete the two sidebands were combined into a single data set (see Appendix A1) to improve the uv coverage. The coverage for the central pointing is shown in Fig. 2.
The large field of view of the GMRT leads to significant errors if the whole field is imaged directly, due to the non-planar nature of the sky. To minimise these errors, each pointing was broken down into 19 smaller facets which were imaged separately, with a different assumed phase centre, and then recombined to deal with the transformation from planes to the sphere. Images were made with an elliptical synthesised beam with size 5.8″ × 4.7″, position angle +60°, with a pixel size of 1.5″ to ensure that the beam was well oversampled.
The GMRT has a large number of small baselines, due to the cluster of 14 antennas in the central 1 km of the array. This dominates the uv coverage and affects the beam shape, so baselines less than 1 kλ were omitted in the imaging. The images went through three iterations of phase self-calibration at 10, 3 and 1 minute intervals, and then a final round of self-calibration correcting both phase and amplitude errors. The overall amplitude gain was held constant in order not to alter the flux density of sources. The initial self-calibration step corrected the phase errors by up to 10°, with later self-calibration making much smaller changes and having a smaller effect on the r.m.s. image noise.
Two problems were identified and corrected during imaging (see Appendix A for further details).
(i) An error in the time-stamps of the uv data, and hence the uvw coordinates, was corrected using a custom-made AIPS task.
(ii) The GMRT primary beam centre was shifted by ∼2.5 arcmin to compensate for a position-dependent flux density error.
The theoretical r.m.s. noise of each pointing, before primary beam correction, is

\sigma_{\mathrm{th}} = \frac{\sqrt{2}\, T_{\mathrm{s}}}{G \sqrt{n(n-1)\, N_{\mathrm{IF}}\, \Delta\nu\, \tau}},

where T_s ≈ 92 K is the system temperature, G ≈ 0.32 K Jy⁻¹ is the antenna gain (values taken from the GMRT website), n is the number of working antennas, which was typically 28 during our observations, N_IF = 2 is the number of sidebands, Δν = 13.75 MHz is the effective bandwidth of each sideband, and τ ≈ 12000 s is the average integration time per pointing. The r.m.s. noise achieved was around 30 µJy beam⁻¹ before primary beam correction, which is very close to the theoretical limit of ∼26 µJy, although dynamic range issues limit the quality of the images near bright sources where the noise is greater. The seven pointings were corrected for the primary beam of the GMRT, taking into account the offset beam position as discussed in Appendix A2. The beam correction was performed using an 8th-order polynomial, with coefficients taken from Kantharia & Rao (2001). The pointings were then mosaicked together, weighting the final image by the r.m.s. noises of each individual pointing, and the mosaic was cut off at the point where the primary beam correction factor dropped to 20% of its central value. Figure 1 illustrates the variation in noise across the mosaic. The noise level is smooth and around 30 µJy across the interior of the map, and increases towards the edges to about 150 µJy where the primary beam correction was 20%.
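As a quick numerical check (illustrative only), evaluating this expression with the stated values reproduces the quoted theoretical limit:

```python
# Evaluate the radiometer equation above with the values given in the text.
import math

T_s, G = 92.0, 0.32           # system temperature (K), antenna gain (K/Jy)
n, N_IF = 28, 2               # working antennas, sidebands
dnu, tau = 13.75e6, 12000.0   # effective bandwidth (Hz), integration time (s)

sigma_jy = math.sqrt(2) * T_s / (G * math.sqrt(n * (n - 1) * N_IF * dnu * tau))
print(sigma_jy * 1e6)         # ~26 microJy per beam, the quoted limit
```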
RESULTS AND DISCUSSION
The final mosaic is shown in Fig. 3. There are phase artefacts visible around the brightest sources, which have not been successfully removed during self-calibration. An enlarged image of one of the bright sources is shown in Fig. 4. It is thought that the artefacts are due to an elevation-dependent error in the position of the GMRT primary beam (see Appendix A2), which will lead to image distortion near the brightest objects since the observations of each pointing were taken in a series of scans with varying elevations. A small portion of the mosaic is shown in Fig. 5, demonstrating the quality of the image away from the bright sources, with the VLA map of the same area shown in Fig. 6 for comparison. The greyscale has been set on Figs 5 and 6 so that an object with a spectral index of 0.8 will appear equally bright in both images. Most sources are unresolved in the 610-MHz image, although there are some objects present with extended structures - we present a sample of these in Fig. 7.
A catalogue of 3944 sources was created using Source Extractor (Bertin & Arnouts 1996) with peak brightness greater than ∼5σ - see Appendix B for further details. Source Extractor has a significant advantage over AIPS tasks such as SAD when creating a source list, in that it is capable of calculating the local background and noise level on the image. Phase errors near the bright sources lead to an increase in noise, but by using a box of 16 × 16 pixels to estimate the local noise, the number of spurious detections in the final catalogue was reduced considerably - this means that the fit is less deep near the brightest sources.

[Fig. 6 caption: VLA 1.4-GHz image (Condon et al. 2003). The greyscale ranges between −0.1 and 0.5 mJy beam⁻¹, equivalent to the GMRT image for objects with a spectral index of 0.8. The image resolution is 5 arcsec.]

Table 1 presents a sample of 60 entries in the catalogue, which is sorted by right ascension. The full table, and radio image of the Spitzer extragalactic First Look Survey field, will be available via http://www.mrao.cam.ac.uk/surveys/. Column 1 gives the IAU designation of the source, in the form FLS-GMRT Jhhmmss.s+ddmmss, where J represents J2000.0 coordinates, hhmmss.s represents right ascension in hours, minutes and truncated tenths of seconds, and ddmmss represents the declination in degrees, arcminutes and truncated arcseconds. Columns 2 and 3 give the right ascension and declination of the source, calculated by first moments of the relevant pixel flux densities to give a centroid position. Column 4 gives the brightness of the peak pixel in each source, in mJy beam⁻¹, and column 5 gives the local r.m.s. noise in µJy beam⁻¹. Column 6 gives the integrated flux density in mJy, calculated from the mosaic and applying the flux density correction factor described in Appendix B. Column 7 gives the error in integrated flux density, calculated from the local noise level. Columns 8 and 9 give the X, Y pixel coordinates from the mosaic image of the source centroid. Column 10 is the Source Extractor deblended object flag - 0 for most objects, but 1 when a source has been split up into two or more components. For deblended objects it is necessary to examine the image in order to distinguish between the case where two astronomically distinct objects have been split up, and when one extended object has been represented by more than one entry. There are 211 deblended sources in our catalogue.

The non-uniform noise characteristics of our survey make it important to quantify the area that has been surveyed with each noise level. Figure 8 shows the fraction of pixels with a particular noise level (taken from the Source Extractor r.m.s. map), and the cumulative fraction of pixels at that noise level. The peak brightness distribution has been plotted in Fig. 9, along with a distribution that has been corrected for the varying amount of solid angle being surveyed to each sensitivity level. Previous studies have shown (Hopkins et al. 2002) that Source Extractor has a false detection rate of below 5%, and detects above 90% of sources with peak brightness close to the detection threshold.
COMPARISON WITH THE VLA SURVEY
The VLA survey of the xFLS region (Condon et al. 2003) has a uniform noise level of 23 µJy. This is equivalent to a noise of ∼ 45 µJy at 610 MHz, assuming a spectral index of 0.8. Our observations have significantly lower noise levels than this in the centre of our mosaic, with the noise in the overlap regions between two pointings being approximately 45 µJy (see Fig. 1). Figure 8 shows that the noise is below 45 µJy for about half the area of our mosaic.
In order to make a quantitative comparison between the VLA xFLS survey and our GMRT survey, we have run Source Extractor on our mosaic and the 1.4-GHz image. At the ∼5σ level, there were 3826 sources detected at 610 MHz, and 3091 detected at 1.4 GHz, in the region covered by both surveys. The source lists were matched using a pairing radius of 6″, which is approximately the resolution of the two catalogues, and 1580 unique matches were found. Figure 10 shows the source distribution of objects found by the GMRT but not by the VLA - the majority of the unmatched sources are found in regions where our observations are deeper than the VLA image.
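A minimal sketch of this kind of nearest-neighbour matching, using astropy rather than whatever tools were actually used for the catalogue comparison, is shown below; the coordinates are made-up stand-ins for the two catalogues, and a full analysis would additionally enforce one-to-one (unique) pairing.

```python
# Nearest-neighbour cross-matching with a 6-arcsec pairing radius (a sketch,
# assuming illustrative coordinates in place of the real GMRT/VLA catalogues).
import astropy.units as u
from astropy.coordinates import SkyCoord

gmrt = SkyCoord(ra=[259.5000, 259.6200] * u.deg, dec=[59.4800, 59.5100] * u.deg)
vla = SkyCoord(ra=[259.5001, 260.1000] * u.deg, dec=[59.4801, 59.9000] * u.deg)

idx, sep2d, _ = gmrt.match_to_catalog_sky(vla)  # nearest VLA source per GMRT source
matched = sep2d < 6 * u.arcsec                  # pairing radius ~ the beam size
print(int(matched.sum()), "matched;", sep2d[matched].to(u.arcsec))
```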
It is likely that some of the sources that are not detected at 610 MHz but are detected at 1.4 GHz are flat spectrum objects, with spectral index ≤ 0.5. The VLA noise of 23 µJy would be equivalent to a 610-MHz noise level of ∼35 µJy for α = 0.5, so faint flat spectrum objects detectable throughout the VLA image would only be detectable in a small region of our mosaic. Figure 11 shows the position offsets of the matched GMRT sources compared with their VLA counterparts. The offsets have an approximately Gaussian distribution, with a mean offset in right ascension of 0.4″, standard deviation 0.5″, and in declination 0.2″ with a standard deviation of 0.6″. These offsets have not been applied to our catalogue, since it is uncertain as to which survey the errors come from.
The spectral index distribution of objects detected at both frequencies is shown in Fig. 12. The integrated flux densities of the sources have been used for the calculation, where the flux density of the VLA sources was corrected using the same method as for the GMRT image (details in Appendix B).

[Displaced caption of Figure A1: flux density correction factor for point sources, plotted as correction factor against peak signal-to-noise ratio.]

(i) … data, so that it was on the same frequency scale as the USB points, and so the sidebands could be combined before imaging.
(ii) The coordinates of sources originally found in our GMRT images were seen to be slightly rotated near the edge of each pointing compared with their VLA positions. This was due to incorrect time-stamps being used in the GMRT online software, leading to a slight error in the uv data. We wrote a customised AIPS task, UVFXT, to increase the time-stamps by 7 s and correctly recompute the uvw co-ordinates. This was performed before the final images were created.
A2 Primary beam correction
During the preliminary analysis of the data, we compared the properties of several hundred relatively bright sources with peaks greater than 1 mJy beam −1 , visible in the overlapping regions between two pointings. This comparison revealed a systematic difference between the apparent brightness of sources in adjacent pointings; sources located to the north-west of a pointing were consistently brighter than the same source when viewed in the south-east of an adjacent pointing. We were able to model this as the effective, average pointing centre of the telescope being offset by ∼ 2.5 arcmin in a north-west direction, compared with the nominal pointing centre. The amount and direction of the offset was consistent between all pairs of pointings. After applying the correction, the systematic effects were removed and the r.m.s. flux density errors were below 10% near the edge of the primary beam, compared with nearer 20% before the correction was applied.
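As an illustration of how such an offset biases apparent flux densities, here is a sketch assuming a circular Gaussian primary beam; the beam FWHM used below is a placeholder, not a value taken from the paper:

```python
import numpy as np

def gaussian_beam(r_arcmin, fwhm_arcmin):
    """Circular Gaussian primary-beam response at radius r from the beam centre."""
    sigma = fwhm_arcmin / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (r_arcmin / sigma) ** 2)

fwhm = 44.0   # placeholder primary-beam FWHM at 610 MHz (arcmin)
offset = 2.5  # effective pointing offset reported in the text (arcmin)

# A source 20 arcmin NW of the nominal centre is 20 - 2.5 arcmin from the true
# beam centre; seen 20 arcmin SE of the adjacent pointing it is 20 + 2.5 arcmin
# away, so it appears fainter there.
r = 20.0
ratio = gaussian_beam(r - offset, fwhm) / gaussian_beam(r + offset, fwhm)
print(f"NW/SE apparent brightness ratio: {ratio:.2f}")
```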
It is thought that the primary beam offset depends on telescope elevation. Our scans of each source were not taken symmetrically about source transit, and the detected offset is likely to be an average over the individual offsets of each scan. Sources will have a slightly different position and flux density in each scan, due to this elevation-dependent error. Self-calibration assumes that the source remains constant with time, and this is likely to be the reason why residual artefacts are seen near the brightest sources.
APPENDIX B: SOURCE EXTRACTOR
Source Extractor calculates the flux density of an object by summing all pixels greater than some user-defined threshold. For an object to be included in our catalogue, we required it to have at least five connected pixels with brightness above 2σ, and a peak pixel brightness greater than 5.25σ. Because of the oversampling of the beam, the peak of a source was taken to be the value of the brightest pixel in the island of flux. The peak brightness requirement for our catalogue was set to the slightly more conservative 5.25σ rather than 5σ because of the increased number of spurious sources being detected near the edges of the image at the lower cutoff. In total, 3944 sources were identified and included in the catalogue.
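A hedged sketch of the corresponding Source Extractor configuration lines (these are standard Source Extractor keywords, but the actual configuration file used for this catalogue is not reproduced in the text, and the 5.25σ peak cut was applied to the output afterwards rather than inside Source Extractor):

```text
# Detection: at least 5 connected pixels above 2 sigma
DETECT_MINAREA   5
DETECT_THRESH    2.0
ANALYSIS_THRESH  2.0
THRESH_TYPE      RELATIVE

# Local background/noise estimated in small meshes (cf. the 16x16-pixel box)
BACK_SIZE        16
BACK_FILTERSIZE  3

# Deblending of merged islands (flagged objects in catalogue column 10)
DEBLEND_NTHRESH  32
DEBLEND_MINCONT  0.005
```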
The integrated flux density measured by Source Extractor only comes from pixels above 2σ. This means that bright sources with a large signal-to-noise ratio (SNR) will have almost all their flux measured, whereas for faint sources an appreciable fraction of the flux is missed. For unresolved sources, the required correction factor depends only on the peak SNR. We modelled a point source with a range of SNRs, and varied the exact location of the centre of the source across a pixel. Figure A1 shows the correction factor that has been applied to an unresolved source in order to obtain the true flux density. In our catalogue we give only the corrected flux densities; for extended objects that do not have the same shape as the beam, it is necessary to examine the image to obtain an accurate flux density.
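A sketch of the correction-factor calculation described above, assuming a circular Gaussian point source on a grid that oversamples the beam; the grid and beam sizes are placeholders, and for simplicity the source is kept at a pixel centre rather than being scanned across the pixel as in the paper:

```python
import numpy as np

def clipped_flux_fraction(snr, beam_fwhm_pix=3.0, clip_sigma=2.0, size=41):
    """Fraction of a Gaussian point source's flux recovered when only
    pixels above clip_sigma (in units of the rms noise) are summed."""
    sigma_b = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    y, x = np.mgrid[0:size, 0:size] - size // 2
    source = snr * np.exp(-(x**2 + y**2) / (2.0 * sigma_b**2))  # peak = snr x rms
    kept = source[source > clip_sigma]  # only pixels above the 2-sigma island cut
    return kept.sum() / source.sum()

for snr in (5.25, 10.0, 20.0, 50.0):
    corr = 1.0 / clipped_flux_fraction(snr)
    print(f"peak SNR {snr:5.2f}: correction factor {corr:.3f}")
```

As expected, the correction tends to 1 for bright sources and grows towards the detection threshold, qualitatively matching the curve in Figure A1.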
Effect of thermal treatment on point defects of Al-N codoped ZnO films
The effect of annealing temperature on the structural properties of Al-N codoped ZnO films was studied by X-ray diffraction, photoluminescence and Raman spectroscopy. ZnO films were deposited by a sputtering technique on silicon substrates at 20 °C; the Al concentration was kept constant and the N flow was varied between 6, 12 and 15 sccm. A thermal treatment was performed by annealing the samples for 30 minutes at 300, 400, 500, 600 and 700 °C. Before annealing, the Raman spectra show two vibration modes located at 275 and 580 cm−1, associated with the nitrogen incorporation and the presence of point defects. The Raman intensities of both modes, I275 and I580, decrease when the nitrogen flow increases from 6 to 12 and 15 sccm, which is attributed to a decreasing density of interstitial defects. The improvement in crystal quality was confirmed by X-ray diffraction and room-temperature photoluminescence measurements. After annealing, the Raman spectra show that I275 increases with temperature, reaches a maximum intensity between 500 and 600 °C, and decreases at higher temperatures. X-ray diffraction measurements show that after annealing the compressive stress decreases progressively as the annealing temperature increases. This study suggests that the 275 cm−1 Raman mode could be used to estimate the optimal thermal treatment in order to achieve p-doped ZnO.
INTRODUCTION
Zinc oxide (ZnO) is a semiconductor of special interest because it has a wide band gap (3.36 eV at 300 K) and a large exciton binding energy (60 meV), which make it widely useful in the manufacture of optoelectronic devices such as light-emitting diodes, laser diodes, photodetectors, transparent electrodes, gas sensors and solar cells. One of the big challenges is to control p-type doping, owing to its high activation energy and the low solubility of acceptor dopants. Another feature of ZnO unfavourable to p-type doping is the presence of native defects, such as interstitials and vacancies, acting as n-type dopants and generating a phenomenon known as self-compensation [1][2][3]. Nitrogen is the doping element most frequently used to replace oxygen atoms and consequently increase the hole concentration. However, another factor limiting the production of high levels of p-type doping is the low solubility of nitrogen, which originates from the weak N-Zn bond that is easily broken at high growth temperatures (300-600 °C). Codoping is an alternative method that has been proposed, using acceptors and reactive donors simultaneously in order to achieve p-type ZnO [4]. Several studies have used elements such as P [5,6], In [7,8], Be [9], Ag [10] and Al [11][12][13][14][15][16][17][18][19] in addition to nitrogen atoms, in order to increase the incorporation of N into the ZnO crystalline lattice. Other important factors that help p-doping are the growth temperature and the post-annealing process. Chen et al. [8] studied In-N codoped ZnO films grown on different substrates and found that 540 °C is the optimal temperature; this may explain why Zeng et al. [11] obtained p-conductivity in Al-N codoped ZnO films sputtered at 500 °C, while Shinho et al. [12] observed n-conductivity in Al-N codoped ZnO sputtered at 300 °C. Other studies used thermal annealing in order to ensure the nitrogen incorporation at oxygen sites (N_O) and to achieve p-type doping [3,7,10,[13][14][15][16][17]. Li et al. [10] found that p-doping is obtained with post-annealing at 615 °C for 25 minutes on Ag-N doped ZnO films. With the Al-N codoping system: Liu et al. [13] developed p-type films with post-annealing for 30 min at temperatures between 575 and 600 °C; Kumar et al. [14] observed p-type behaviour at temperatures higher than 400 °C, remarking that 600 °C is the best; and Yang et al. [16] studied sol-gel codoped ZnO films and found optimized p-type conduction at 550 °C.
In this paper, we study the influence of annealing temperature on the structural properties of Al-N codoped ZnO films grown on Si (100) at 20 °C by X-ray diffraction, photoluminescence and Raman microscopy, in order to gain a better understanding of the behaviour of both the intrinsic defects and the N_O density during the annealing process.
MATERIALS AND METHODS
Al-N codoped ZnO films were deposited on silicon (100) substrates by a sputtering technique. All Si substrates were ultrasonically cleaned sequentially in acetone, methanol, and then deionized water. All samples were grown using a co-sputtering technique with dual targets: a ceramic ZnO target at an RF power of 150 W and a pure Al target (purity 99.9%) at a DC power of 50 W. During film deposition, the substrate temperature and deposition time were kept constant at 20 °C and 17 min, respectively. High-purity argon (99.999%) and nitrogen (99.999%) were used as sputtering gases; the flow rate of argon was fixed at 6 sccm, whereas the flow rate of nitrogen was varied between 6, 12 and 15 sccm. The Al-N codoped ZnO films were annealed in air for 30 minutes at temperatures of 300, 400, 500, 600 and 700 °C. The crystallinity of the samples was characterized using X-ray diffraction (XRD, Bruker D8 Advance). Room-temperature photoluminescence (PL) was carried out using a He-Cd laser with an excitation wavelength of 325 nm and a power of 16 mW. The Raman microscopy study was performed with a 532 nm laser line of 10 mW as the excitation source (DXR model, Thermo Scientific).
Before annealing
Figure 1 shows the XRD patterns of the ZnO films before thermal treatment for nitrogen flow rates of 6 (N6), 12 (N12) and 15 sccm (N15). The patterns show a slight mismatch with the reference positions of undoped ZnO films due to the stress originated by the incorporation of both Al and N atoms. In sample N6, seven diffraction peaks corresponding to the (100), (002), (101), (102), (110), (103) and (112) planes are observed, the first three with the greatest intensity. For samples N12 and N15, the ZnO films exhibit a (002) preferential orientation with the c-axis perpendicular to the substrate. As the N content increases, the full width at half maximum (FWHM) of the (002) peak decreases clearly, indicating improving crystallinity due to the incorporation of N atoms. Figure 2 shows the PL spectra of samples N6, N12 and N15, where a broad band between 1.7 and 3.1 eV is evident that could originate from interstitial defects (Zn_i, O_i, N_i and Al_i). We cannot see the band-to-band transition near 3.36 eV, suggesting a poor crystal quality. For the samples with higher N content (N12 and N15), the signal associated with interstitial defects is diminished, the band gap transition can be seen clearly, and the signal associated with oxygen vacancies (1.62 eV) is observable as well.
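For readers reproducing this XRD analysis: the c-axis lattice parameter follows from Bragg's law applied to the (002) reflection, and the (002) FWHM gives a crystallite-size estimate via the Scherrer equation. A sketch assuming Cu Kα radiation (λ = 1.5406 Å, a common configuration for the Bruker D8 but an assumption here; the 2θ and FWHM values below are placeholders):

```python
import numpy as np

LAMBDA = 1.5406  # Cu K-alpha wavelength in angstrom (assumed)

def c_axis_from_002(two_theta_deg):
    """c lattice parameter from the (002) peak: d_002 = c/2 and lambda = 2 d sin(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return 2.0 * LAMBDA / (2.0 * np.sin(theta))

def scherrer_size(two_theta_deg, fwhm_deg, K=0.9):
    """Crystallite size D = K lambda / (beta cos(theta)), with beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * LAMBDA / (beta * np.cos(theta))

# Placeholder values for illustration only
print(f"c = {c_axis_from_002(34.4):.4f} angstrom")  # ~5.21 angstrom, near bulk ZnO
print(f"D = {scherrer_size(34.4, 0.30) / 10:.1f} nm")
```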
In order to estimate the nitrogen incorporation at the oxygen sites (N_O), Raman measurements were made. Figure 3 displays the Raman spectra of the N6, N12 and N15 samples, where we can see two vibration modes located at 275 and 580 cm−1 originated by the Al-N codoped ZnO films and two more from the Si substrate. There is a long-standing controversy about the exact origin of the 275 cm−1 Raman mode, but it has frequently been related to the N_O concentration and to the presence of point defects (Zn_i specifically) [20][21][22][23][24]. The Raman intensities of both modes, I275 and I580, decrease when the nitrogen flow increases from 6 to 12 and 15 sccm. Other authors have suggested that the increase of nitrogen flow mitigates the formation of interstitial defects [23]. In this work, we see good agreement between the XRD, PL and Raman measurements: the interstitial defects decrease as the N flow increases in samples grown at 20 °C.
After annealing
Figure 4 shows the XRD patterns of the sample grown with a 12 sccm nitrogen flow after the annealing process at temperatures between 300 and 700 °C. As can be seen, the (002) peak shifts to higher angles (the residual stress decreases) and the FWHM decreases as the temperature increases. The patterns of the N6 and N15 samples presented similar behaviour (not shown here). The behaviour of the XRD patterns is clear evidence that the crystal quality has improved after the annealing treatment, which suggests that the interstitial defect concentration decreases as the temperature increases.
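The residual stress inferred from the (002) shift can be quantified with the biaxial strain model commonly used for sputtered wurtzite ZnO; the paper does not state which model it used, so the numerical factor below (derived from bulk ZnO elastic constants, with c0 the unstrained lattice parameter) is an assumption for illustration:

```latex
% Biaxial film stress from the measured c-axis parameter (wurtzite ZnO)
\sigma_{\mathrm{film}} \approx -233\,\mathrm{GPa}\,
  \frac{c_{\mathrm{film}} - c_0}{c_0}, \qquad c_0 = 5.2066\ \text{\AA}.
```

A c-axis larger than c0 (the (002) peak at lower angle) then gives σ < 0, i.e. compressive stress, which relaxes towards zero as the peak shifts to higher angles with annealing.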
Figure 5 shows the Raman spectra of the N12 sample after annealing, where we can see that both the I275 and I580 modes increase as the temperature increases, reach a maximum around 500 °C, and decrease at higher temperatures. Considering that the interstitial defects are decreasing, the I275 behaviour is generated mostly by the N_O concentration. At high temperatures, this signal diminishes because more oxygen vacancies are generated, which act as n-type dopants. The Raman behaviour is in good agreement with other reports, where the optimal temperature has been reported to be between 500 and 600 °C [13,16].
CONCLUSIONS
In this work, the effect of annealing temperature on the structural properties of Al-N codoped ZnO films has been studied. When the nitrogen content was increased, a better crystal quality was observed by XRD, photoluminescence and Raman measurements, owing to the decrease in the interstitial defect concentration. After annealing, the Raman spectra show that the Raman mode associated with the Zn_i-N_O complex (I275) increases as the temperature increases, reaches a maximum intensity between 500 and 600 °C, and decreases at higher temperatures. X-ray diffraction measurements showed that after annealing the compressive stress decreases progressively as the annealing temperature increases. This study suggests that the 275 cm−1 Raman mode could be used to estimate the optimal annealing temperature in order to achieve p-doped ZnO films.
Figure 4: XRD patterns of N12 after thermal treatment, compared with the pattern without thermal treatment (wTT).
Figure 5: Raman spectra of the N12 sample after thermal treatment at different temperatures, compared with the spectrum without treatment.
Quantum Stability of (2+1)-Spacetimes with Non-Trivial Topology
Quantum fields are investigated in (2+1)-open-universes with non-trivial topologies by the method of images. The universes are locally de Sitter spacetime and anti-de Sitter spacetime. In the present article we study spacetimes whose spatial topologies are a torus with a cusp and a sphere with three cusps, as a step toward the more general case. A quantum energy-momentum tensor is obtained by the point-stripping method. Though the cusps are not singularities, the latter cusps cause the divergence of the quantum field. This suggests that only the latter cusps are quantum mechanically unstable. Of course, at the singularity of the background spacetime the quantum field diverges. The possibility of a divergent topological effect due to negative spatial curvature is also discussed: since the volume of a negatively curved space is larger than that of flat space, one sees very many images of a single source through the non-trivial topology. It is confirmed that this divergence does not appear in our models of topologies. The results will be applicable to the case of the three-dimensional multi-black hole [8].
I. INTRODUCTION
When we consider matter fields in a spacetime with a non-trivial topology, boundary effects of the quantum fields appear. This is one of the main targets of quantum field theory in curved spacetimes. Such effects have been studied well in spatially flat spacetimes [1], but not so well in spatially curved spacetimes, because of the complexity of the topologies allowed in such curved spatial sections. In other words, new topological effects of a quantum field are expected in a curved spatial section with a somewhat complex topology.
To construct a space with a non-trivial topology, we identify the points of a covering space by a discrete subgroup of the isometry of the space. We then want a covering space with an appropriately simple isometry group such that the topology has some interesting characteristics. The spaces with simple isometries are S^n, R^n and H^n, the so-called closed, flat and open universes. We choose to treat H^n, since this hyperbolic space allows various topologies possessing interesting characteristics.
To treat this open universe as a background spacetime, we must determine its time evolution. For simplicity we consider the maximally symmetric spacetimes: de Sitter spacetime with a hyperbolic chart, or anti-de Sitter spacetime in Robertson-Walker coordinates. Their spatial sections are H^n. The de Sitter spacetime with the hyperbolic chart may be important in a cosmological sense. It is believed that the global feature of our universe is homogeneous and isotropic. If observations suggest that the spatial curvature of our universe is negative, the background spacetime is locally an open universe. In inflationary cosmology, the de Sitter spacetime with a hyperbolic chart is a good model [2]. If we prefer a universe with a finite volume, the de Sitter spacetime with a hyperbolic chart and non-trivial topology becomes important.
The topology of the open universe (in the present article, 'open universe' means not an open topology but only a negatively curved space) is well understood in two-dimensional space. We therefore construct simple examples of a two-dimensional open universe with interesting topology in (2+1)-de Sitter and (2+1)-anti-de Sitter spacetimes. A quantum scalar field is studied in these spacetimes using the point-stripping method, and the divergences of the quantum fields are discussed.
In Section 2, we prepare simple models of universes with interesting topologies in the (2+1)-de Sitter spacetime and the (2+1)-anti-de Sitter spacetime. The quantum field is investigated in Section 3. The last section is devoted to a summary and discussion.
A. Two Dimensional Universe
First of all, we develop the topologies of the two-dimensional spatial sections. For simplicity of the topologies we treat (2+1)-spacetime in the present article. For a cosmological reason and for further simplicity, the spatial two-dimensional sections are assumed to be S², R² or H², corresponding to a closed, flat or open universe, respectively. In a two-dimensional space, the topologies of complete manifolds are classified by their Euler numbers, calculated from the number of handles and cusps (see Fig. 1). A cusp is a point at infinity with a needle-like structure. It should be emphasized that the cusp is not unnatural or artificial: these points at infinity are not singularities, and there is no reason to give them special treatment in classical physics [3]. From the Gauss-Bonnet theorem for a complete 2-manifold, the Euler number χ is given by

χ = (1/4π) ∫ (2)R √g d²x = 2 − 2N(handle) − N(cusp),

where (2)R is the two-dimensional scalar curvature and N(∗) is the number of ∗. The signature of (2)R restricts the variety of the topology. Since the Euler number is less than 2 (χ = 2 is for a sphere), the case of negative curvature allows various topologies (various numbers of handles and cusps). In such a negative-curvature space, we expect new topological effects of a quantum field. Since the cusp is an infinitely small structure, it may cause a divergence of the quantum field. The negative-curvature space has the crucial characteristic that the volume of the space is larger than that of the flat space in a distant region; one will then see very many images of sources because of a non-trivial topology, and the method of images may suffer the difficulty of a divergence. To discuss these speculations, we must treat general topologies of the negative-curvature space H². In the present article, however, only two simple cases can be investigated, since these cases possess cusps and the above-mentioned characteristic.
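Applying this relation to the two models constructed below gives the same Euler number for both:

```latex
% \chi = 2 - 2N(\text{handle}) - N(\text{cusp})
\chi(\text{torus with one cusp})     = 2 - 2\cdot 1 - 1 = -1, \qquad
\chi(\text{sphere with three cusps}) = 2 - 2\cdot 0 - 3 = -1.
```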
To construct a non-trivial topology of the negative-curvature space, we draw a polygon bounded by geodesics on the hyperbolic space H² as a fundamental region and identify the geodesics. The Poincaré model is one of the models of the hyperbolic space; it is conformally flat and a compact chart. The metric of the Poincaré model is

ds² = 4(dx² + dy²)/(1 − x² − y²)²,

whose spatial curvature is −1. Requiring orientability, there are two pairs of identifications of the geodesics bounding the triangles which provide a complete manifold. One pair generates a discrete subgroup Γ(1) of SO(2,1); by these identifications, the topology of H²/Γ(1) becomes a torus having a point at infinity, which is a cusp (see Fig. 3). Of course, the Gauss-Bonnet theorem gives its Euler number as χ = −1. The other pair generates Γ(2) ⊂ SO(2,1); the resultant topology H²/Γ(2) is a sphere with three cusps (Fig. 3), and its Euler number is also χ = −1. It is noted that the manifolds H²/Γ(1) and H²/Γ(2) are easier to handle than the double torus, which is the well-known example of a hyperbolic manifold: the fundamental group of our manifolds is freely generated by two elements (a, b), whereas that of the double torus has four generators (a, b, c, d) subject to a relation.

Here we express all elements of Γ(i) in a form appropriate for the rest of the present article: every T(i) ∈ Γ(i) is given by a power of a generator followed by a rotation,

T(i) = R(θ_j) (Λ(i)_k)ⁿ.   (15)

[Figure 2 caption: △ABC is transformed to a 2n-th outward position of triangles by (Λ(i)_k)ⁿ and rotated around the origin by R(θ_j) with an appropriate angle θ_j; k is selected from 1 ∼ 3 so that the orders of the vertices match between △ABC and △A′B′C′.]
For the rest of the article we work with explicit matrix forms of the generators Λ(i)_k. Here we should note that the Λ(1)'s and the Λ(2)'s belong to different categories of the Lorentz group SO(2,1). It is known that all elements of SO(2,1) are SO(2,1)-conjugate to an element of one of the following forms, which we call the standard forms. In the SL(2,C) representation (16):

• 1) An elliptic element is conjugate to ±diag(e^{iθ/2}, e^{−iθ/2}), with one fixed point on H².
• 2) A parabolic element is conjugate to ±(1 β; 0 1) (an upper-triangular matrix with unit diagonal), with one fixed point on the sphere at infinity.
• 3) A hyperbolic element is conjugate to ±diag(e^{β/2}, e^{−β/2}), with two fixed points on the sphere at infinity.
The parameters θ and β here are real numbers. Though the Λ(1)'s are conjugate to category 3), the Λ(2)'s are parabolic. We note that the fixed point of a parabolic element corresponds to the cusp produced by that parabolic element. It is revealed in the next section that these facts affect the quantum field.
B. de Sitter and anti de Sitter Spacetime with Non-trivial Topology
Now we consider (2+1)-spacetimes whose spatial sections have the above-mentioned topologies. For simplicity, Teichmüller deformation [5] is not considered, and every identification is performed on each spatial section. The metric is

ds² = −dt² + a²(t) (dχ² + sinh²χ dφ²),

with a(t) = sinh t for dS₃ (the upper case) and a(t) = cos t for AdS₃ (the lower case). We treat only spacetimes with a unit curvature radius, for simplicity, because the absolute value of the curvature is not essential for the following investigation.
III. QUANTUM FIELD
In this section the quantum field is investigated in the spacetimes with non-trivial topology whose covering spacetime is dS₃ or AdS₃. We introduce a conformally coupled massless scalar field φ, with the action

S = −(1/2) ∫ d³x √−g [ g^{µν} ∂_µφ ∂_νφ + (1/8) R φ² ],

where R is the scalar curvature (ξ = 1/8 is the conformal coupling in three dimensions). The field equation in dS₃ or AdS₃ with R_{µν} = ±2g_{µν} is

(□ − R/8) φ = 0,

with R = ±6 for our dS₃ or AdS₃ with a unit curvature radius.
Now we consider the Hadamard Green functions in the covering spacetime dS₃ or AdS₃. According to Steif [6], they are given by a function Ḡ(x, y) of the chordal distance |x − y| between x and y in the four-dimensional embedding spacetime (24)-(27) (Eq. (31)); |x − y| is not a proper distance in dS₃ or AdS₃.
The Hadamard functions for the spacetimes with non-trivial topologies, dS₃/γ(i) and AdS₃/γ(i), can be obtained from the Hadamard function (31) for their covering spacetime by the method of images. Since the images of y are generated by elements of γ(i), the Green function is

G(x, y) = Σ_{T ∈ γ(i), T ≠ 1} Ḡ(x, T(y)),   (32)

where the summation is over all elements of γ(i) except the identity. The identity is excluded in order to subtract all local contributions of the quantum field; this procedure ought to regularize the energy-momentum tensor of the quantum field.
When γ is an Abelian group, the summation can be evaluated easily, as in the three-dimensional black hole case [6] (for example, the cyclic group generated by Λ(1)_1 is equivalent to the three-dimensional black hole). On the other hand, our non-Abelian γ makes a rigorous evaluation impossible. The simple universes shown in the previous section, however, allow us to evaluate some divergences: by eq. (15), the abstract summation of (32) decomposes into sums over the rotations and over the powers of the generators (33). A quantum energy-momentum tensor is given in the point-stripping method by

⟨T_{µν}(x)⟩ = lim_{y→x} D_{µν} G(x, y),

where D_{µν} is a certain differential operator (see [6], for example). Hence, by investigating the zeros of the distance |x − T(x)| and the summations of (33), we can discuss the divergences of ⟨T_{µν}⟩, since the divergences of ⟨T_{µν}⟩ come from the divergences of the Green function.
There are three possibilities of divergence for the quantum field. First, the quantum field will diverge at the singularity of the background spacetime, as in the case of the three-dimensional black hole [6]. Second, the infinitely small structure of the cusps may cause a divergence of the quantum field, though a cusp is not a singularity. Third, the summation over images in the method of images might diverge, since the volume of the hyperbolic space is larger than that of flat space in a distant region, so one sees very many images of a source. Concerning the cusps, T(2) ∈ γ(2) has a different characteristic from T(1): from eq. (43), |x − T(2)(x)| vanishes at the cusps on each time-slice, while |x − T(1)(x)| never vanishes on a time-slice, even at the cusps. Therefore ⟨T_{µν}⟩ is singular at the cusps of dS₃/γ(2) and AdS₃/γ(2), and regular at the cusps of dS₃/γ(1) and AdS₃/γ(1).
Finally, we discuss the third possibility of divergence by estimating the summation over all transformations in (33). From the rotational symmetry around the origin of each time-slice we may use A⁻¹ΛⁿA = (A⁻¹ΛA)ⁿ. For a sufficiently large n, 2cosh(2nχ₀) − 1, and hence |x − R(θ)(Λ(1)_k)ⁿ(x)|, behaves as e^{n|χ₀|}. Furthermore, |x − R(θ)(Λ(2)_k)ⁿ(x)| also behaves as e^{n|χ₀|} for large n, since the tessellation of γ(2) is the same tessellation as that of γ(1). Though a rigorous estimation may be possible, it is too complicated and would give us no essential information. The terms of the image sum behave as O(n 0.843ⁿ), so ⟨T_{µν}⟩ barely converges, because of the exact value of χ₀. If the investigation could be done for another topology with negative curvature, a different value of χ might cause the divergence of the summation.
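A small numerical sketch of this convergence estimate. It assumes, as the discussion above indicates, that the number of images at word length n grows roughly like 4ⁿ while the chordal distances grow like e^{n|χ₀|} with e^{|χ₀|} = 4.74, and that the image Green function falls off like the inverse of the chordal distance, so the terms scale as n(4/e^{|χ₀|})ⁿ ≈ n·0.843ⁿ:

```python
import numpy as np

E_CHI0 = 4.74               # e^{|chi_0|} quoted in the text
ratio = 4.0 / E_CHI0        # image-count growth vs. distance growth
print(f"ratio = {ratio:.3f}")  # ~0.843 < 1: the sum barely converges

n = np.arange(1, 2001)
partial = np.cumsum(n * ratio**n)
print(f"partial sum of n*ratio^n = {partial[-1]:.2f}")
# Closed form of the series for comparison: sum_n n r^n = r/(1-r)^2
print(f"closed form              = {ratio / (1.0 - ratio)**2:.2f}")
```

With e^{|χ|} below 4 the ratio would exceed 1 and the partial sums would grow without bound, matching the remark in the summary below.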
IV. SUMMARY AND DISCUSSION
In the present article, we have investigated new topological effects of a quantum field in a torus universe with one cusp and a sphere universe with three cusps, whose covering spacetime is de Sitter or anti-de Sitter spacetime. A cusp is a point at infinity with a regular local structure and a needle-like global structure. Three possible divergences of the energy-momentum tensor have been studied.
First, a divergence appears at the coordinate singularity of the classical background spacetime, which is an initial or final singularity in the cosmological sense. This is similar to the case of the three-dimensional black hole [6].
The next possibility is a divergence at the cusps. In the present article we have shown that there are two types of cusps: one made by hyperbolic transformations of SO(2,1), and the other made by a parabolic transformation. We observed that ⟨T_{µν}⟩ diverges at the latter cusps, which are included in the sphere with three cusps. This means that the latter cusps are quantum mechanically unstable; only they will require treatment in quantum gravity.
The last possibility is the divergence of the summation of images. This corresponds to the effect that, in a distant region, one sees more images of a source in a negatively curved universe than in a flat one. The summation, however, converges in the spacetimes given in the present article. The convergence of the image summation depends strongly on the value of the boost angle |χ| and on the shape of the tessellation in the covering space. Though e^{|χ₀|} is 4.74, if e^{|χ₀|} were less than 4 with the same tessellation, ⟨T_{µν}⟩ would diverge everywhere, and such a divergence would be hard to remove. For other topologies, other values of e^{|χ|} and other tessellations may make the summation diverge. If so, it will turn out that there are topologies that accept a quantum field and topologies that do not. When we consider a compact topology (without a cusp) with negative curvature, such a situation may occur, though compact topologies are very difficult to treat.
Recently Brill [8] showed that a three-dimensional multi-black-hole solution can be constructed in the three-dimensional anti-de Sitter spacetime. It is easily found that AdS₃/γ(2) with a boost angle |χ′| larger than |χ₀| can be regarded as a two-black-hole solution. Of course, the summation of images converges in this solution.
By the regularization performed in the present article, we subtract the local divergences completely. We have, however, observed topological divergences; they cannot be regularized and will have physical meaning.
Can we carry out a similar investigation for other topologies with negative curvature? A compact topology (without a cusp) seems impossible, since the tessellation is so complicated. On the other hand, it may be possible to treat other topologies with cusps. At least, we can decide whether each cusp causes a divergence of the quantum field by knowing whether the identification producing the cusp is parabolic or hyperbolic. The divergence of the summation of images depends sensitively on the shape of the tessellation in the covering space and will be difficult to treat without sufficient symmetry. In (3+1) dimensions a similar investigation may be possible; there will be convenient models of non-trivial topology.
ACKNOWLEDGMENTS
We would like to thank Professor H. Sato and Dr. T. Tanaka for helpful discussions. The author thanks the Japan Society for the Promotion of Science for financial support. This work was supported in part by the Japanese Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science, Culture and Sports.
Dynamic Parameter Identification of Collaborative Robot Based on WLS-RWPSO Algorithm
Parameter identification of the dynamic model of collaborative robots is the basis for the development of collaborative robot motion state control, path tracking, state monitoring, fault diagnosis, and fault-tolerance systems, and is one of the core topics of collaborative robot research. For the identification of the dynamic parameters of a collaborative robot, this paper proposes an identification algorithm based on weighted least squares and random weighted particle swarm optimization (WLS-RWPSO). Firstly, the dynamics model of the robot is established using the Lagrangian method, the dynamic parameters to be identified are determined, and the linear form of the dynamics model is derived taking the joint friction characteristics into account. Secondly, the weighted least squares method is used to obtain an initial solution for the parameters to be identified. Based on the traditional particle swarm optimization algorithm, a random weight particle swarm optimization algorithm is proposed to address the local-optimum problem and identify the dynamic parameters of the robot. Thirdly, a fifth-order Fourier series is designed as the excitation trajectory, and the raw data collected by the sensors are denoised and smoothed by a Kalman filter. Finally, experimental verification on a six-degree-of-freedom collaborative robot proves that the torque predicted by the identification algorithm matches the measured torque closely, and that the established model reflects the dynamic characteristics of the robot, effectively improving the identification accuracy.
Introduction
At present, robot technology is developing towards intelligence, and manufacturing modes are also changing. In recent years, collaborative robots have received extensive attention and research around the world. According to the definition in ISO 10218-2, a robot that can interact directly with humans in a designated collaborative area is called a collaborative robot. Compared with traditional industrial robots, collaborative robots have the benefits of high safety, good versatility, sensitivity, precision, ease of use, and human-machine collaboration. These advantages make collaborative robots applicable not only in the manufacturing field, but also give them potential application value in fields such as home service and rehabilitation medicine; examples include compliant robotic arms in industry, surgical robots in medicine, wearable rehabilitation assistance robots, and anti-terrorist and explosion-proof robots in special applications [1,2]. Utilizing human-machine fusion technology, robots with intrinsic safety, human-machine collaborative cognition, and behavioral mutual assistance can provide support for emerging application scenarios in industry, service, and medical care. To break through the challenges of existing robots in the four aspects of environmental adaptability, task adaptability, safety, and interactive capability, it is urgent to study a new generation of human-machine fusion robots [3,4].
As collaborative robot technology develops towards high speed and high intelligence, higher requirements are also placed on its control accuracy. In the course of human-robot collaboration, affected by uncertain factors such as robot joint friction, the moment of inertia and nonlinearity, end loads, and external disturbances, people cannot directly measure the robot parameters, and modeling is challenging [5]. In addition, the precision requirements of machining processes are gradually increasing. If the inertial parameters obtained from CAD software are used to model each part of the robot, the dynamic model of the robot cannot be established accurately. Currently, the only effective way to obtain precise dynamic parameters is experiment-based robot parameter identification [6,7]. The identification of robot dynamic parameters includes six steps: dynamic modeling, model linearization, excitation trajectory optimization, experimental data sampling and preprocessing, parameter calculation, and experimental validation [8]. The parameter estimation determines the accuracy of the entire identification. The joint model of the robot is an important part of dynamic modeling. In much previous work, the method of joint modeling was neglected, and the Coulomb viscous friction model or the Stribeck friction model was often used to represent the overall friction of the joint [9,10]. The current difficulties in dynamic parameter identification mainly lie in the following: (1) Insufficient prediction accuracy. Due to the limitations the identification algorithm places on the model, the precision of the robot model used in the identification method is insufficient, and the accuracy of the torque predicted from this model is also limited.
(2) Predicted torque fluctuations, which are caused by discontinuous jumps of the predicted values over a large range at certain nodes of the dynamic model. (3) Error peaks, which are due to the inaccurate description of the dynamic characteristics of special motion states by traditional friction models, resulting in large deviations from the actual required torque values near these motion states. Therefore, obtaining accurate robot dynamic parameters becomes particularly important [11,12].
For tandem robots, represented by collaborative robots, it is necessary to establish joint models to improve the overall model accuracy. Kircanski et al. [13] carried out identification work on joint friction and joint stiffness; however, the friction torque was not estimated even approximately, which leads to inaccurate parameters and does not improve the accuracy of the model. Atkeson et al. [14] used a WLS-based serial identification method to obtain the robot's dynamic parameters, which can accurately predict the force and torque generated by load movement; however, this method requires many identification runs and ignores the coupling between joints, which increases the identification time and prevents the dynamic parameters of complex robot joints from being obtained accurately and in a timely manner. Liu et al. [15] improved the genetic algorithm by using an inter-cell generation method and a large-mutation strategy, which effectively improved the identification accuracy of the dynamic parameters of a space robot; however, the algorithm is computationally complex, converges slowly, and requires many parameters to be tuned. Sun et al. [16] used genetic algorithms to identify the parameters of the dynamics model of industrial robots; this algorithm can effectively avoid local optimal solutions, but its efficiency is affected by the need to design a tedious coding and decoding process. Chen et al. [17] discussed the application of artificial neural networks to robot dynamic parameter identification, in which the structure and weights of the neural network have clear physical meaning, but only the identification of the inertial parameters at the end of the robot was analyzed, with no research on parameter identification for the other joints. Wang et al. [18] proposed an identification algorithm based on an adaptive particle swarm optimization genetic algorithm for the dynamic parameter identification of flexible-joint robots; in order to improve the convergence speed of the particle swarm optimization algorithm, it uses a dynamic adaptive adjustment strategy and introduces a new genetic-algorithm hybrid cross-mutation mechanism to avoid particle swarms getting stuck in local optima. However, due to joint flexibility and complex friction sources, this method cannot accurately reflect the internal friction of the joints. Zhang et al. [19] proposed a combined model parameter identification method with a hybrid genetic algorithm and cosine trajectory, took a multi-joint serial robot as the research object, and carried out friction parameter identification experiments, which further improved the accuracy of the robot dynamics model; however, other nonlinear disturbance factors in the joints were not studied. Guo et al. [20] designed an identification method based on particle swarm optimization (PSO) for robot dynamics parameter identification, in which the PSO algorithm is used to calibrate the dynamic model of the robot according to the motion state and torque of each joint. Experiments show that the parameters obtained by this method are correct and feasible; however, the algorithm is not conducive to global search and is prone to local-optimum problems.
Lin et al. [21] proposed a hybrid estimation strategy for the parameter identification of underwater vehicles: the least squares (LS) algorithm is used for the rough estimation of the dynamic parameters, and an improved particle swarm optimization (IPSO) algorithm is used for the accurate estimation. The advantage of the least squares method is that it can improve the identification accuracy; however, its calculation is complex, and when the amount of computation is large the real-time performance is greatly reduced [22], with the additional problem of a limited search space. Cao et al. [23] designed a dual quantum-behaved PSO algorithm for the parameter identification of parallel robots: the QPSO-1 algorithm is suited to the covariance matrices of measurement noise and process noise, while the QPSO-2 algorithm is adopted for the optimization of the motion parameter error estimated by the EKF algorithm. Experimental results show that this method significantly improves localization accuracy. Liu et al. [24] proposed a connection combination method based on an improved artificial fish swarm algorithm for dynamic parameter identification, which can identify independent values of the required parameters and avoid the impact of load changes; however, the convergence speed of this algorithm is slow, and it is difficult to ensure real-time performance.
Motivated by the above considerations, and to ensure cooperation between humans and collaborative robots and realize precise, stable control of the collaborative robot system, this paper designs an improved algorithm based on weighted least squares and random weight particle swarm optimization to identify the dynamic parameters of the robot. Firstly, the Lagrangian method is used to establish the dynamic model of the collaborative robot and determine the joint dynamic parameters to be identified. Because measurement noise is generated when collecting the raw data, the weighted least squares method is adopted: the identification algorithm is designed by using the measured torque noise to form the weight coefficient matrix. The weighted least squares method is used to generate the initial solution, and the search range is set to about 10% of the absolute value of the initial solution. Secondly, based on traditional particle swarm optimization (TPSO), a random weight particle swarm algorithm is proposed; under the influence of the random weights, the dynamic parameters to be identified can quickly jump out of a small local search range, the identification over a large search range is accelerated, and accurate optimal parameters are obtained. Finally, since the raw data contain noise and spikes, the Kalman filter algorithm is used to filter the data, achieving good denoising and smoothing. The validity of the WLS-RWPSO algorithm is verified on a 6-DOF collaborative robot. The results show that, compared with the LS-PSO and WLS-PSO identification algorithms, the identification algorithm used in this paper can accurately identify the dynamic parameters and effectively improve the identification accuracy.
The rest of the paper is organized as follows: Section 2 establishes the dynamic equations of the collaborative robot system subject to friction disturbances. Section 3 introduces the weighted least squares algorithm and the random weight particle swarm algorithm, and analyzes the stability and convergence of the latter. Section 4 introduces the design of the excitation trajectory for parameter identification and the preprocessing of the raw experimental data. Section 5 introduces the experimental collaborative robot system and its dynamics parameter identification process using the WLS-RWPSO algorithm, and verifies the performance and efficiency of the WLS-RWPSO method by comparing it with the WLS-PSO algorithm. Section 6 presents the conclusions.
Robot Dynamics Modeling
The robotic arm considered in this study is assumed to be a tandem (serial) robotic arm. Without loss of generality, the dynamic equation of the n-DOF tandem robot obtained by the Lagrangian method [25] is

M(q)q̈ + C(q, q̇)q̇ + G(q) = τ,   (1)

where q, q̇, q̈ ∈ Rⁿ are the joint positions, velocities and accelerations, M(q) is the inertia matrix, C(q, q̇)q̇ collects the Coriolis and centrifugal torques, G(q) is the gravity torque, and τ is the joint torque vector. In order to improve the real-time performance, reduce the identification cost, and improve the identification accuracy, the dynamic equation of the collaborative robot is linearized. Without changing the robot model, and following the method proposed by Swevers et al. [26], this paper includes the influence of other factors on the inertial parameters of the robotic arm through the identification process, obtaining a set of comprehensive parameters that meet the required calculation accuracy. Through parameter transformation, the dynamic equation is expressed in a form linear in the dynamic parameters:

τ = Ψ_dyn(q, q̇, q̈) E_dyn,   (2)

where Ψ_dyn(q, q̇, q̈) ∈ R^(n×m) is the observation matrix and E_dyn is the inertial parameter matrix to be identified. For a given robot whose inertial parameters are constant, the linearization of the robot joint torques greatly simplifies the entire parameter identification process. In addition to the joint torque required to drive the link movement, the dynamic equation of the robot actually includes additional torque caused by factors such as friction and the moment of inertia of the motor rotor [27]. The joint friction includes Coulomb friction, viscous friction and static friction; static friction is not modeled in this paper, and the joint friction torque consists of Coulomb and viscous friction:

τ_f = ν q̇ + c sgn(q̇),   (3)

where τ_f is the friction torque, ν is the viscous friction coefficient, and c is the Coulomb friction coefficient. By approximating the sign function with a hyperbolic tangent, the non-smooth function in the robot model can be avoided:

τ_f = ν q̇ + c tanh(q̇/ε),   (4)

where ε is a constant that makes the slope of the hyperbolic tangent very steep near zero. Combining Equations (2) and (3) gives the complete linearized robot dynamics equation:

τ_s = Y_s(q, q̇, q̈) θ_s,   (5)

where τ_s ∈ Rⁿ is the motor torque vector, Y_s ∈ R^(n×12n) is the observation matrix, and θ_s ∈ R^(12n) is the robot dynamic parameter vector. The parameters of the dynamic equation are determined by the unknown parameter vector θ*, whose block θ*_i collects the actual parameters of link i (with j the number of robot joints); m*, I*, ν*, and c* represent, respectively, the actual mass, moment of inertia, viscous friction coefficient, and Coulomb friction coefficient of the link.
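A minimal sketch of the smoothed Coulomb-plus-viscous friction model of Equations (3) and (4); the coefficient values are hypothetical:

```python
import numpy as np

def friction_torque(qd, nu, c, eps=1e-3):
    """Coulomb + viscous joint friction with tanh smoothing of sign(qd).

    qd  : joint velocity (rad/s)
    nu  : viscous friction coefficient
    c   : Coulomb friction coefficient
    eps : smoothing constant; a small eps makes tanh(qd/eps) steep near zero
    """
    return nu * qd + c * np.tanh(qd / eps)

qd = np.linspace(-1.0, 1.0, 5)
print(friction_torque(qd, nu=0.8, c=2.5))  # smooth transition through zero velocity
```

The tanh form keeps the model differentiable at zero velocity, which the discontinuous sgn(q̇) of Equation (3) would not.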
In addition, because not every dynamic parameter has an impact on the torque, Y_s is not a full-rank matrix. To remove the redundant parameters, the linear recombination method is used to obtain a minimal parameter set [8]:

τ = Y(q, q̇, q̈) θ,   (7)

where Y ∈ R^(n×(p+2n)) is the observation matrix, θ ∈ R^(p+2n) is the total dynamic parameter vector containing the minimal dynamic parameters and the friction parameters of the links, p is the minimal number of dynamic parameters, and 2n is the number of friction parameters.
Weighted Least Squares Algorithm
During the identification process, the robot is moved along a certain trajectory, the joint torques and joint angles at N different times are sampled, and the joint angular velocities and angular accelerations of the collaborative robot are calculated by differencing and then filtered. Substituting the processed data into Equation (7) and stacking the samples gives

Γ = W θ + ρ,   (8)

where W = [Y(t₁)ᵀ, …, Y(t_N)ᵀ]ᵀ is the stacked observation matrix, Γ is the stacked torque vector, and ρ is the residual. In fact, due to measurement errors and other causes, the non-homogeneous linear Equation (8) is generally an incompatible (overdetermined) system. Although an incompatible system has no solution that satisfies all conditions exactly, it has a least squares solution

θ_OLS = arg min ‖Wθ − Γ‖²,

so the least squares solution of Equation (8) is

θ_OLS = (WᵀW)⁻¹WᵀΓ.

Since the motor torque data of the robot joints have different noise levels, weighted least squares estimation performs better for this heteroscedastic parameter estimation problem. The weighting matrix is assembled from the measured torque variances of the joints (r_j denotes the number of dynamic parameter combinations of joint j), and the weighted least squares solution is

θ_WLS = (WᵀΣ⁻¹W)⁻¹WᵀΣ⁻¹Γ,

where Σ is the diagonal matrix formed from the measured variances of the torques Γ [28], called the weight matrix.
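A sketch of the stacked weighted least-squares estimate, assuming W and Γ have already been assembled from the sampled trajectory and that the per-row torque variances are known; the sizes and data below are synthetic:

```python
import numpy as np

def wls_estimate(W, Gamma, sigma2):
    """Weighted least squares: theta = (W^T S^-1 W)^-1 W^T S^-1 Gamma,
    with S = diag(sigma2) built from the measured torque variances."""
    Sinv = np.diag(1.0 / sigma2)
    A = W.T @ Sinv @ W
    b = W.T @ Sinv @ Gamma
    return np.linalg.solve(A, b)   # better conditioned than an explicit inverse

# Synthetic example: 600 stacked rows, 40 base parameters
rng = np.random.default_rng(0)
W = rng.normal(size=(600, 40))
theta_true = rng.normal(size=40)
Gamma = W @ theta_true + rng.normal(scale=0.05, size=600)
theta_wls = wls_estimate(W, Gamma, sigma2=np.full(600, 0.05**2))
print(np.max(np.abs(theta_wls - theta_true)))   # small recovery error
```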
Basic Particle Swarm Algorithm
The particle swarm algorithm is an optimization method for parameter calculation in which each solution of the optimization problem is a bird in the search space, called a particle, and the fitness value of each particle is determined by the optimization function. Each particle has a velocity that determines its direction and displacement, and it follows the current optimal particle to search the solution space. The benefits of the PSO algorithm are as follows. Compared with other intelligent algorithms, PSO relies on particle velocity to complete the search; there are no crossover and mutation operations, and the search speed is fast. Only the best particle transfers information to the other particles during the iterative evolution. Since the particle swarm algorithm has memory, the best historical position of the swarm can be remembered and transferred to other particles. The particle swarm optimization algorithm adopts real-number coding, which is determined directly by the solution of the problem, and the number of variables of the solution is used directly as the dimension of the particles. It has few tuning parameters, a simple formulation, and is easy to apply in engineering.
The particle velocity and position update equations are

v_i^{t+1} = W v_i^t + c₁ r₁ (p_i^t − x_i^t) + c₂ r₂ (S^t − x_i^t),   (13)
x_i^{t+1} = x_i^t + v_i^{t+1},   (14)

where t is the iteration index, G is the search space dimension, i = 1, 2, …, N indexes the population, c₁ is the local (cognitive) learning factor, c₂ is the global (social) learning factor, W is the inertia weight, r₁ and r₂ are random numbers obeying the r(0, 1) distribution, and p_i^t and S^t are the local optimum and the global optimum, respectively.
Particle Swarm Algorithm Based on Random Weight
In the particle swarm optimization algorithm, the particles are attracted towards their own best historical positions and the best historical position of the population, so the particle population tends to converge quickly and is prone to local extrema, premature convergence, or stagnation, resulting in imprecise parameter identification [29]. In particular, the local and global search ability of the particles is affected by the inertia weight: larger weights help the swarm jump out of local optima, while smaller weights enhance the local search capability and facilitate convergence [30]. To address the problems that PSO tends to be limited in its search space, easily falls into local optima, and has an unsatisfactory convergence rate, we have designed a method of randomly selecting the weight value, so that the influence of a particle's historical velocity on its current velocity is random. This effectively enlarges the search range, prevents the parameters from falling into local optimal solutions, speeds up the identification, and improves the identification accuracy. The random weight W is redrawn at each iteration from a distribution built from a uniform random number rand(0, 1) and a normally distributed sample N(0, 1), scaled by the parameter adjustment factor µ. With the random weight method, the dynamic parameters to be identified can quickly jump out of a small local search range, the identification over a large search range is accelerated, and accurate optimal parameters are obtained. The random weight particle swarm algorithm can avoid falling into local optima to a certain extent, increase the particle search range, and help improve the accuracy of the robot parameter identification. The dynamic parameter identification process of the collaborative robot is shown in Figure 1.
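A compact sketch of the RWPSO loop. The velocity and position updates follow Equations (13) and (14); the exact distribution of the random weight is not fully reproduced in the source (only µ, N(0,1) and rand(0,1) are named), so the draw below, a uniform component plus a Gaussian perturbation scaled by µ, is an assumption:

```python
import numpy as np

def rwpso(cost, dim, n_particles=30, iters=200, c1=2.0, c2=2.0,
          w_min=0.4, w_max=0.8, mu=0.1, seed=0):
    """Random-weight PSO: the inertia weight is redrawn at every iteration."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    p, pc = x.copy(), np.array([cost(xi) for xi in x])  # personal bests
    g = p[np.argmin(pc)].copy()                         # global best
    for _ in range(iters):
        # Assumed random inertia weight: uniform part + Gaussian jitter
        w = w_min + (w_max - w_min) * rng.random() + mu * rng.standard_normal()
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([cost(xi) for xi in x])
        better = fx < pc
        p[better], pc[better] = x[better], fx[better]
        g = p[np.argmin(pc)].copy()
    return g, pc.min()

best, fbest = rwpso(lambda z: float(np.sum(z**2)), dim=5)
print(fbest)
```

In an identification setting, cost(θ) would be the residual between the predicted and measured torques, with the WLS solution used to centre the search range.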
Stability and Convergence Analysis of Random Weight Particle Swarm Algorithm
Assuming that the number of population particles is Φ, the global optimal position of the population particles can be obtained. Let θ₁ = c₁r₁, θ₂ = c₂r₂, and θ = θ₁ + θ₂. Rearranging Equations (13) and (14) puts the particle dynamics into the form of a linear time-varying system y(t+1) = C(t)y(t) plus a forcing term, where y(t) collects the particle velocity and position; w, θ₁, and θ₂ change adaptively during the iterative process, and θ = c₁r₁ + c₂r₂ varies randomly in its interval from iteration to iteration, so C(t) is a time-varying matrix. According to systems theory, y(t) can be expressed through the system state transition matrix Θ(t, k). Define the scalar function V(y(t)) = ‖y(t)‖ as a vector norm; then the values of w, θ₁, and θ₂ make ‖C(t)‖ < 1, so ∆V(y(t)) is negative definite.
According to the Lyapunov stability theorem, the system is then asymptotically stable; that is, the algorithm is stable.
According to Equation (18), when the moduli of the eigenvalues of C(t) are less than 1, it follows that, for fixed p_i^t and S^t, v_i(t) → 0 as t → ∞ and x_i(t) → (θ₁ p_i^t + θ₂ S^t)/θ, which is a point on the line connecting p_i^t and S^t. From Equation (13) and the root mean square error it can be seen that p_i^t eventually tends to S^t, which shows that the algorithm converges.
Design of Excitation Trajectory
A reasonable design of the excitation trajectory for parameter identification can hasten the convergence of the parameter estimation and improve its precision. The robot excitation trajectory design proceeds in two steps: the first step parameterizes the joint trajectory; the second step uses an optimization algorithm to determine the undetermined coefficients in the joint trajectory function according to the designed objective function and the motion constraints.
Given that a finite Fourier series has the advantages of convenient data processing, insensitivity to measurement noise, and easy implementation in the parametric representation of robot joint trajectories [31,32], this paper uses a finite Fourier series to parameterize the joint trajectories of the collaborative robot [33]. The joint angle of the i-th joint of the collaborative robot is written as

q_i(t) = q_i0 + Σ_{l=1..N} [ (a_l^i/(ω_f l)) sin(ω_f l t) − (b_l^i/(ω_f l)) cos(ω_f l t) ],   (21)

where ω_f is the fundamental frequency of the Fourier series and q_i0 is the joint angle offset.
Each joint of the robot uses the same fundamental frequency to ensure the periodicity of the excitation trajectory. The parameterized motion trajectory of each joint contains 2N + 1 undetermined coefficients. By reasonably selecting a_l^i, b_l^i, and q_i0, the motion of the manipulator can satisfy the persistent excitation (PE) conditions for parameter identification.
The angular velocity and angular acceleration of joint i are obtained from the first and second time derivatives of Equation (21):

q̇_i(t) = Σ_{l=1..N} [ a_l^i cos(ω_f l t) + b_l^i sin(ω_f l t) ],
q̈_i(t) = Σ_{l=1..N} [ −a_l^i ω_f l sin(ω_f l t) + b_l^i ω_f l cos(ω_f l t) ].

Since the excitation trajectory is constrained by the motor torques, joint positions, joint velocities, joint accelerations, and workspace, the designed trajectory must satisfy bounds on these quantities while minimizing cond(W), the condition number of the observation matrix. To reduce the theoretical identification error, the parameters of the above trajectory are optimized, and the influence of noise on the identification accuracy is reduced by reducing the condition number of the observation matrix [34]. Here q_min and q_max are the minimum and maximum joint positions, q̇_max is the maximum joint velocity, q̈_max is the maximum joint acceleration, β is the trajectory parameter, and τ_max is the maximum joint torque. The parameter optimization of the excitation trajectory is essentially a multi-variable constrained nonlinear optimization problem [35,36]; the parameters are optimized using the fmincon function in the Matlab optimization toolbox.
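A sketch of generating such an excitation trajectory and its analytic derivatives; the coefficients and fundamental frequency below are placeholders (in practice they come out of the constrained fmincon optimization), and the trajectory form matches the reconstruction of Equation (21):

```python
import numpy as np

def fourier_traj(t, q0, a, b, wf):
    """Finite-Fourier-series joint trajectory with analytic derivatives.

    q(t) = q0 + sum_l a_l/(wf*l) sin(wf*l*t) - b_l/(wf*l) cos(wf*l*t)
    """
    q = np.full_like(t, q0)
    qd = np.zeros_like(t)
    qdd = np.zeros_like(t)
    for l, (al, bl) in enumerate(zip(a, b), start=1):
        w = wf * l
        q += (al / w) * np.sin(w * t) - (bl / w) * np.cos(w * t)
        qd += al * np.cos(w * t) + bl * np.sin(w * t)
        qdd += -al * w * np.sin(w * t) + bl * w * np.cos(w * t)
    return q, qd, qdd

t = np.linspace(0.0, 10.0, 1001)
wf = 2.0 * np.pi * 0.1                # 0.1 Hz fundamental (placeholder)
a = [0.3, -0.1, 0.05, 0.02, -0.04]    # placeholder 5th-order coefficients
b = [0.2, 0.15, -0.05, 0.03, 0.01]
q, qd, qdd = fourier_traj(t, q0=0.0, a=a, b=b, wf=wf)
```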
Data Preprocessing
When the collaborative robot tracks the excitation trajectory, the joint positions and motor currents are collected, and the motor currents are converted into motor torques through the torque constants. Because of measurement noise in the raw data, the collected data must be denoised and smoothed before the identification experiment.
Combining with the problem studied in this paper, the torque given by the theoretical model of the collaborative robot at a certain moment is used as the predicted torque in the Kalman filter algorithm, and the torque of the collaborative robot measured by the sensor is used as the measured torque; the state equation and the measurement equation are then expressed as

x_k = A_k x_{k−1} + B_k u_k + w_k, z_k = H x_k + v_k,

where x_k is the system state variable at time k, A_k is the state transition matrix, u_k is the control variable of the system, B_k is the system input relationship matrix, H is the state output transition matrix, w_k is the noise deviation of the robot control system, and v_k is the sensor measurement noise deviation.
The essence of the Kalman filter algorithm is to use recursion to reduce the impact of noise. Each operation cycle contains two stages: a time update and a measurement update. The former uses the system model and the estimate from the previous cycle to obtain the prior estimate; the latter uses the actual measured output together with the prior estimate to obtain a posterior estimate of the state.
Time update stage (transition from time k − 1 to time k):

x̂_k⁻ = A_k x̂_{k−1} + B_k u_k, P_k⁻ = A_k P_{k−1} A_kᵀ + Q,

where x̂_k⁻ is the prior estimate of the state variable at time k, predicted from the optimal estimate at time k − 1; Q is the noise covariance of the control system; and P_k⁻ is the prior estimate of the system covariance at time k.
Measurement update stage (the output at time k is used to correct the prior estimate):

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹, x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻), P_k = (I − K_k H) P_k⁻,

where K_k is the Kalman gain at time k and R is the measurement noise covariance. The Kalman filter uses only the first two moments (mean and covariance) of the state in its update rule, so it has the following advantages: (1) estimating the mean and covariance of the unknown distribution requires storing only a small amount of data; (2) the mean and covariance have linear transitivity; (3) the set of mean and covariance estimates can be used to characterize additional properties of the distribution.
Therefore, for the torque signal collected by the sensor, the Kalman filter algorithm [37] is used to preprocess the original data and compare them with the five-point thrice smoothing method [38]. The comparison between the torque signal before and after processing is shown in Figure 2.
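A minimal sketch of this preprocessing scheme is given below, assuming a scalar torque channel with A = H = 1 and the model torque taken directly as the time-update prediction, as described above; the noise covariances Q and R are illustrative values, not the paper's tuned ones.

```python
import numpy as np

def kalman_smooth_torque(tau_measured, tau_model, Q=1e-3, R=1e-1):
    """Scalar Kalman filter for one torque channel (A = H = 1): the model
    torque is the prediction and the sensor torque is the measurement."""
    P = 1.0                                  # initial estimate covariance
    out = np.empty(len(tau_measured))
    for k in range(len(tau_measured)):
        # Time update: predict with the theoretical model torque
        x_prior = tau_model[k]
        P_prior = P + Q
        # Measurement update: correct the prior with the sensor reading
        K = P_prior / (P_prior + R)          # Kalman gain
        x_hat = x_prior + K * (tau_measured[k] - x_prior)
        P = (1.0 - K) * P_prior
        out[k] = x_hat
    return out
```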
The experiment usually cannot directly measure the joint velocity and acceleration, and differentiating the joint position amplifies the measurement noise. To reduce the impact of this noise, the joint angle q is first differentiated and then low-pass filtered using the Butterworth filter function in Matlab; the actual angular velocity q̇ and angular acceleration q̈ of the link are thereby obtained with the noise introduced by the differentiation suppressed.
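The following sketch reproduces that pipeline with SciPy's Butterworth filter; the cutoff frequency and filter order are assumed values for illustration, not parameters reported in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def joint_kinematics(q, fs=1000.0, cutoff=10.0, order=4):
    """Differentiate the sampled joint angle, then low-pass the result with a
    zero-phase Butterworth filter to suppress the amplified noise."""
    b, a = butter(order, cutoff / (fs / 2))   # normalized low-pass design
    dt = 1.0 / fs
    q_dot = filtfilt(b, a, np.gradient(q, dt))
    q_ddot = filtfilt(b, a, np.gradient(q_dot, dt))
    return q_dot, q_ddot
```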
In Figure 2a-f, the collected original torque data are represented by a blue line; the green line and the red line are the torque data processed by the five-point thrice smoothing method and the Kalman filtering method, respectively. Figure 2 shows that before processing the raw torque signal is noisy, has many spikes, and contains abrupt data points; using it directly for parameter identification would lead to large deviations. The torque signal after Kalman filtering, in contrast, is smooth and shows little distortion; it is also handled better than with the five-point thrice smoothing method and can therefore be used for the parameter identification calculation.
Simulation Verification of Structure Simplification Results
The experiment was carried out on the six-degree-of-freedom collaborative robot ROCR6, shown in Figure 3; the modified DH parameters are given in Table 1. The excitation trajectory of each joint is a fifth-order Fourier series with a fundamental frequency of 0.05 Hz and a bandwidth of 0.25 Hz. The joint angle and joint torque in motion are sampled at 1000 Hz, giving a total of 20,000 sets of original data. An excess of data reduces the efficiency of identification and amplifies the effect of bad data points, so to improve efficiency, 2000 groups of joint positions and motor current signals were extracted from the 20 s motion process as the original data for identification.
In the experiments, the excitation trajectories are sent to the control platform, which ensures that all joints reach the commanded positions, and real-time measurements of the actual values (joint position q, joint torque τ) are received.
The system is identified by the basic PSO and the RWPSO, respectively. The population size is set to 200, the learning factor c1 is 1.5, c2 is 3, the inertia weight is 1.3, and the maximum number of iterations is 200.
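For orientation, a minimal sketch of a random-weight PSO loop with these settings is shown below; the Gaussian perturbation of the inertia weight and the bound handling are assumed details, since the paper does not spell out its exact RWPSO variant.

```python
import numpy as np

def rwpso(fitness, dim, n_particles=200, iters=200, c1=1.5, c2=3.0,
          w_mean=1.3, w_sigma=0.2, bounds=(-10.0, 10.0)):
    """Minimal PSO loop with a random inertia weight (assumed Gaussian form)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    p_best = x.copy()
    p_val = np.array([fitness(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(iters):
        w = w_mean + w_sigma * np.random.randn()     # random inertia weight
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fitness(p) for p in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# For identification, fitness(theta) would return the weighted least-squares
# torque residual between the measured torques and the regressor prediction.
```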
The fitness change curve during the identification process is shown in Figure 4. It can be seen from the figure that the initial particle search range (0-370) obtained with the RWPSO is obviously larger than that of the basic PSO (0-201). When updating the position and velocity information, the basic PSO takes about 83 generations to converge, while the RWPSO reaches convergence in about 20 generations, which shows that the improved algorithm markedly increases the convergence speed and the particle search range. To identify the inertial parameters of the collaborative robot, the joint excitation trajectory shown in Figure 5 is commanded, the joint axes are driven to track it, and the link inertial parameters are identified. Taking the first three joints of the collaborative robot as an example, the identified values of the robot dynamics parameters obtained with the WLS-RWPSO algorithm are shown in Table 2. Among the 21 identified parameters in Table 2, the first 15 are inertia parameters and the last 6 are friction parameters. Using the dynamic parameters identified by the WLS-PSO, LS-PSO, and WLS-RWPSO algorithms, the predicted torque under the excitation trajectory is calculated and compared with the actual torque; the results are shown in Figure 6. Figure 6a-f show the predicted torque of each joint under the three algorithms. Under the identification trajectory, the predicted torques of all three algorithms are close to the actual torque; on the whole, however, the predicted torque of the WLS-RWPSO algorithm is better than that of the other two, which shows that the WLS-RWPSO algorithm predicts the torque more accurately. The identification performance of the algorithm is further analyzed below in combination with the verification trajectory.
To further compare the identification accuracy of the algorithms, this paper introduces the root mean square error (RMS) λ to verify the validity of the identified model,

λ = sqrt( (1/n) Σ_{k=1}^{n} (τ_k − τ̂_k)² ) / τ̄_pa,

where τ_k is the measured torque, τ̂_k is the predicted torque, and τ̄_pa is the average torque value after identification; the closer λ is to 0, the higher the identification accuracy. In addition, to show the advantages of our method over traditional identification methods, Table 3 lists the RMS of the validation residuals of the different methods, where LS-PSO is the standard least-squares particle swarm optimization algorithm and WLS-PSO is the weighted least-squares particle swarm optimization algorithm. The results show that our proposed WLS-RWPSO algorithm outperforms both of these methods. Taking joint 2 and joint 3 as examples, the predicted torques obtained by the three identification methods are compared with the measured joint torques. For joint 2, the identification accuracy of the proposed algorithm is 16.1% and 10.5% higher than that of the LS-PSO and WLS-PSO algorithms, respectively, and for joint 3 it is 12.2% and 11.5% higher, respectively; the corresponding predicted torque is closer to the measured joint torque. It can be seen that the WLS-RWPSO algorithm has stronger optimization ability and higher identification accuracy.
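A direct transcription of this metric, under the assumption that the RMS residual is normalized by the average identified torque τ̄_pa:

```python
import numpy as np

def relative_rms(tau_measured, tau_predicted):
    """lambda: RMS residual normalized by the average identified torque
    (the normalization by tau_pa is an assumption from its definition)."""
    tau_m = np.asarray(tau_measured)
    tau_p = np.asarray(tau_predicted)
    rms = np.sqrt(np.mean((tau_m - tau_p) ** 2))
    return rms / np.abs(tau_p).mean()
```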
Model Validation
After the completion of parameter identification, the accuracy of the parameters must be evaluated and verified. It is worth noting that the significance of parameter identification is that, for any given trajectory, the predicted motor output torque can be obtained from the identified parameters, from which the control current of the joint motor can then be derived. To verify the validity of the dynamic parameters identified by WLS-RWPSO, the verification trajectory is chosen as a Fourier series different from the previous excitation trajectory, and the controller drives the robot joints to track it. The identified dynamic parameters are used to predict the torque along the verification trajectory; the experimental verification process is shown in Figure 7. After sorting out the obtained torque vector, the joint torque of each joint under the verification trajectory can be obtained, as shown in Figure 8a-f. It can be seen from Figure 8 that the deviation between the joint torque calculated by this identification method and the actual torque is small, and the torque curve is close to the actual torque on the whole, which further proves that the WLS-RWPSO algorithm identifies the robot dynamic model with higher accuracy and can accurately predict the dynamic characteristics of the robot system.
Conclusions
This paper summarizes the basics of dynamics model identification for collaborative robots and proposes an identification strategy based on WLS-RWPSO. The WLS-RWPSO algorithm attains the smallest optimal value of the fitness function; it is not prone to local optima, is convenient for global search, and improves the accuracy of parameter identification. In the data preprocessing, the raw data collected by the sensors are preprocessed with the Kalman filter algorithm, which achieves good denoising and smoothing effects. To ensure the accuracy of parameter identification under disturbance, the excitation trajectory is designed based on a finite Fourier series.
The identification algorithm proposed for the collaborative robot is tested to evaluate its performance and compared with WLS-PSO. The experimental results show that the WLS-RWPSO parameter identification algorithm proposed in this paper can accurately identify the dynamic parameters of the collaborative robot, converges quickly, has strong optimization ability, and is of practical engineering significance.
The identification algorithm proposed in this paper helps improve the accuracy and stability of trajectory control of the collaborative robot; however, some shortcomings remain: (1) the robot's inherent complex nonlinear behavior is not considered.
(2) When the collaborative robot moves along the specified excitation trajectory, the sudden change of friction torque at joint velocity reversal is not considered, which prevents a more accurate friction prediction.
Therefore, in future work, we will be committed to integrating these factors into the proposed parameter identification process. In addition, the control system of the collaborative robot can be designed by using the improved identification algorithm to further expand its application scope.
Rhamnazin attenuates inflammation and inhibits alkali burn-induced corneal neovascularization in rats
The purpose of our study was to determine whether rhamnazin inhibits corneal neovascularization in the rat alkali burn model, and alleviates the inflammatory response of the cornea. Rhamnazin inhibited the proliferation of HUVEC cells in a dose-dependent manner, and it also inhibited the migration and luminal formation of HUVEC cells. 20 μM rhamnazin eye drops were applied to an animal model of corneal alkali burn neovascularization 4 times a day for 14 days. The corneal neovascularization in the rhamnazin group was obviously less than that in the PBS control group. In the rhamnazin group, the inflammatory index of the cornea decreased gradually over time, whereas the inflammatory index of the PBS group decreased only slightly with time. The corneal CNV area in the PBS group was significantly larger than that in the rhamnazin group. The expression level of VEGF protein of the rhamnazin group was lower than that in the PBS group, and the expression level of PEDF was significantly higher than that of the PBS group. Rhamnazin downregulated the expression of VEGFR2 protein and decreased the expression levels of p-STAT3, p-MAPK and p-Akt proteins. This study provides a new idea for the study of the molecular mechanism of corneal neovascularization.
Introduction
According to the WHO report, corneal disease is the third leading cause of blindness, causing serious damage to the visual quality of life of patients; however, its mechanisms of occurrence, pathology, and treatment, among other aspects, have not been completely clarified.1 The main causes of blindness in corneal diseases are corneal vascularization and corneal scarring. Corneal neovascularization (CNV) is the result of many pathological changes in the cornea and can be secondary to many other corneal diseases; trachoma and infection are the major causes of CNV. CNV is often associated with keratitis and lymphangiogenesis. This can lead to rejection during corneal transplantation, which severely affects vision recovery in patients.2 Therefore, the search for new and effective therapeutic agents is a hot spot in the study of corneal neovascularization.

Rhamnazin, one of the polyphenolic compounds, is often extracted from Ginkgo biloba, Salix, sea buckthorn and other medicinal plants. It is similar to the natural product quercetin and is present in the flowers, fruits and leaves of many other plants.1 Studies have found that rhamnazin helps in protecting the cardiovasculature, in antioxidation, and in immune regulation.7 It has also been found to have antitumor, antiviral, anti-inflammatory, and anti-allergic functions, as well as other biological activities. It is widely used in the treatment of various cardiovascular diseases and tumors, such as gastric cancer, cervical cancer, nasopharyngeal carcinoma and lung cancer.2-5 However, research on rhamnazin has focused on its antitumor activity, and there is no study on its role in ocular diseases, especially in corneal neovascularization. The purpose of this study is to investigate the effect of rhamnazin on corneal neovascularization induced by alkali burn.

The alkali burn animal model, an inflammatory model, is an important method for studying the pathogenesis and treatment of inflammatory neovascularization.8 This study performed in vitro cell experiments to observe the effects of different concentrations of rhamnazin on HUVEC cell proliferation, migration and tube formation. Meanwhile, alkali burn neovascularization rat models were used to study the effect of rhamnazin on corneal neovascularization and on the prevention and treatment of inflammation in rats. Through research on the mechanism of rhamnazin in corneal neovascularization and inflammation control, we expected to fundamentally address two key problems, i.e. the difficulties of corneal repair after operation and the immune rejection reaction. This is a new approach in the treatment of corneal blindness and inflammation. Through this study, we can not only explain the progression and pathological mechanism of corneal neovascularization, but also help find effective drugs against angiogenesis and inflammation, explore their optimal concentrations, and evaluate drug effects and prognosis, thereby providing information for the treatment of corneal allograft rejection. The successful development of this medicine would provide a good treatment for most ocular neovascular diseases and become a breakthrough point in ophthalmology.

From the current limited research data and our previous work, we can foresee that research on rhamnazin in the field of inflammation will open up a new direction of medical research. So far, rhamnazin's function and the underlying relationships are not clear, making it necessary to study its gene expression regulation and biological function in detail. Therefore, further detailed analysis of a new drug for inhibiting inflammation is of great significance and has strong application prospects. Blindness caused by keratitis and corneal damage is a common disease in the course of China's industrialization. Inflammatory pathogenesis is extremely complex. Study of the mechanisms of keratitis provides an important theoretical basis for new drug targets, and is expected to solve several key problems of postoperative neovascularization and the inflammatory rejection reaction. It is a new way of treating corneal blindness, with wide application in clinics. In addition, these results will provide a theoretical reference and valuable direction for the clinical treatment of other inflammation- and neovascularization-related diseases such as cancer, diabetes and rheumatoid arthritis.
Materials and methods
Cell culture and rhamnazin preparation

HUVEC cells were purchased from PromoCell (Heidelberg, Germany) and cultured in EBM2 medium containing 2% fetal bovine serum and endothelial cell growth supplement (PromoCell) at 37 °C. Cell growth was observed with an inverted microscope. Endothelial cells in the logarithmic phase were used in the experiments. Rhamnazin (98%, Sigma-Aldrich, St. Louis, MO) was dissolved in DMSO to prepare the required concentrations.
Cell viability assay
Cell viability was examined with CCK-8 kits. Briefly, HUVEC cells in the logarithmic phase were seeded in 96-well plates at 37 °C, at a density of 5 × 10³ cells per well. After overnight incubation, rhamnazin was added to the medium at final concentrations of 0.1 μM, 0.5 μM, 1 μM, 2 μM, 5 μM, 10 μM and 20 μM. The control group was treated with culture medium alone. Each group contained five replicates. 72 hours later, 10 μl of CCK-8 cell proliferation and cytotoxicity test solution was added to each well and cultured for an additional 4 hours at 37 °C. The optical density at 450 nm was read with a microplate reader.
Wound closure assay
HUVEC cells in the logarithmic growth phase were seeded at 1 × 10⁵ cells per well in gelatin-coated 24-well plates and cultured overnight at 37 °C. A 200 μl sterile pipette tip was used to draw scratches in the vertical direction, with a horizontal interval of 2 mm between scratches. Cells were treated with PBS, 5 μM or 20 μM rhamnazin and photographed at the 0 h, 12 h and 24 h time points. The experiment was repeated 3 times.
Tube formation assay
24-well plates were coated with Matrigel basement membrane matrix (BD Bioscience) at 37 °C for 1 hour, and 10,000 HUVEC cells per well were added in growth factor-free EBM2 medium with 0.1% FBS. Cells were then incubated for 6 hours at 37 °C with PBS, 5 μM or 20 μM rhamnazin. Tube structures formed in the cavities of the artificial basement membrane were photographed under the inverted microscope, with at least 5 areas selected per well. The total lumen length was the average value of 5 random fields of view at 100× magnification. The branched structures of the HUVECs were quantified with ImageJ software.
Alkali-burned rat corneas and treatment
We obtained ethics approval from the Nanchang Royo Biotech corporation animal ethics committee. 60 healthy adult male SD rats weighing 180-200 g were used in the study (Shanghai Shilaike Laboratory Animal Co., Ltd., Shanghai, China). General anesthesia was induced by intraperitoneal injection of pentobarbital (40 mg kg⁻¹); the right eye was used as the experimental eye and the left eye as the control eye. Tetracaine eye drops were used as the corneal surface anesthetic. Standard 3.5 mm diameter single circular filter paper was soaked in 1 M sodium hydroxide solution for 20 s, and the excess liquid was absorbed. Under the microscope, the filter paper was placed on the right central corneal surface for 30 s. After removing the filter paper, the eyes were immediately flushed with 10 ml of PBS solution.

After the establishment of the alkali burn model, the animals were randomly divided into two groups of 30 rats each. One group received 10 μl of topical PBS eye drops after the alkali burn, 4 times a day, continuously for 14 days. The other group was treated with 10 μl of 20 μM rhamnazin dissolved in PBS as eye drops, 4 times a day, continuously for 14 days. The eyes of the rats were observed and evaluated by slit lamp microscopy at 0, 1, 4, 7, 10 and 14 days after starting the eye drops. Corneal neovascularization, inflammatory response, and corneal epithelial damage were evaluated. 14 days later, the animals were euthanized and the corneas were removed and preserved in a −80 °C freezer for later histological examination and protein extraction.
Slit-lamp microscopic observation
After establishing the alkali burn rat model, the ocular surface was stained with 1% sodium fluorescein. The injury of the corneal epithelium in rats was observed under a cobalt blue light slit lamp microscope, and images were acquired and processed with image-processing software. The growth of corneal neovascularization was examined daily, and the lengths of new vessels were measured (taking the length of continuous new vessels with little bending, measured perpendicular to the corneal limbus). The neovascularization area was calculated with the formula S = C/12 × 3.1416 × [r² − (r − I)²], where S is the corneal neovascularization growth area, C is the circumference of the corneal neovascularization network, r is the corneal radius, and I is the new blood vessel length. The inflammatory index was analyzed as previously described.32
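A direct transcription of this area formula as arithmetic, with illustrative (assumed) rat values for the inputs:

```python
import math

def cnv_area(C, r, I):
    """S = C/12 * pi * [r^2 - (r - I)^2], with C the circumferential extent of
    the CNV network (the C/12 factor suggests clock hours), r the corneal
    radius and I the new-vessel length."""
    return C / 12.0 * math.pi * (r ** 2 - (r - I) ** 2)

# Illustrative (assumed) values: r = 3.5 mm, I = 1.2 mm, C = 6 clock hours
print(cnv_area(C=6, r=3.5, I=1.2))   # growth area in mm^2
```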
Histology analysis
The rat eyes were fixed in 4% paraformaldehyde, dehydrated in graded alcohols, cleared in xylene and embedded in paraffin. Serial sections 6 μm thick were cut along the anteroposterior diameter of the eyeball and placed on slides pretreated with poly-lysine. After deparaffinization, slides were stained with conventional hematoxylin and eosin, and images were acquired under light microscopy.
Western blot assay
The corneal tissue was lysed in ice-cold RIPA buffer with protease inhibitors and homogenized by ultrasound. The protein concentration was determined by the BCA method. Equal amounts of protein were subjected to SDS-PAGE electrophoresis and transferred to PVDF membranes. The membrane was blocked with 2% BSA for 1 h and incubated with the primary antibodies. After washing with TBST, horseradish peroxidase-labeled secondary antibody (1:5000; Dako, Shanghai, China) was added and incubated for 1 hour at room temperature. After washing with TBST and incubating with ECL reagent, film was used for signal development. ImageJ software was used to scan the gray value of each band, with β-actin as the reference control. Each experiment was repeated 3 times.
Immunofluorescence
The expression of STAT3 in HUVEC cells and the expression of VEGF and PEDF in the cornea under the different intervention conditions were detected according to previously described methods.33 The primary antibody used in the cell experiments was a mouse anti-human STAT3 antibody at a dilution of 1:300; sheep anti-mouse VEGF and PEDF antibodies were used at dilutions of 1:100 and 1:200, respectively. Nuclei were stained with Hoechst 33342, and the samples were mounted with anti-quenching agents. Images were acquired under a fluorescence microscope.
Statistical analysis
Images were processed with Image-Pro Plus 6 software. SPSS 19 software was used for statistical analysis. Comparisons between the groups were conducted using the analysis of variance test. The inflammatory index and the area and length of corneal neovascularization were analyzed using ANOVA followed by Bonferroni post hoc comparison. The LSD-t test was used to determine differences between two groups, and all statistical tests were two-sided. p < 0.05 was considered statistically significant.
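A minimal sketch of the group comparison described above, using SciPy's one-way ANOVA; the data arrays are hypothetical stand-ins for the measured values, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical CNV-area measurements (mm^2) standing in for the real data
pbs = np.array([12.1, 13.4, 11.8, 12.9])
rhamnazin = np.array([6.2, 5.9, 7.1, 6.5])

# One-way ANOVA across groups, as described for the group comparisons
f_stat, p_value = stats.f_oneway(pbs, rhamnazin)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant
```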
Results

Rhamnazin inhibits the growth of HUVEC cells in a dose-dependent manner

Fig. 1A shows the structure of rhamnazin. The inhibitory effect of rhamnazin on HUVEC growth was tested by CCK-8 assay in eight groups treated with different concentrations (0, 0.1, 0.5, 1, 2, 5, 10 and 20 μM). The CCK-8 assay showed that the viability of HUVECs was significantly inhibited when the concentration of rhamnazin was higher than 5 μM (Fig. 1D). With increasing rhamnazin concentration, the inhibition of HUVEC growth became more obvious. The effective concentration of rhamnazin in rats was 20 μM in our preliminary experiment, so the rhamnazin concentration used in the following animal experiments was 20 μM.
Rhamnazin inhibits the cell migration and tube formation in HUVEC cells
To examine the role of rhamnazin in HUVEC cell migration, the HUVEC cells were treated with 1% BSA, 5 μM rhamnazin or 20 μM rhamnazin. The scratch healing process was observed at three time points (0 h, 12 h and 24 h) (Fig. 1B). The wound-closure assay showed that the scratch healing rate of the 20 μM rhamnazin-treated group was significantly lower than that of the 1% BSA-treated group and the 5 μM rhamnazin-treated group at 12 h and 24 h. The scratch healing rate of the 5 μM rhamnazin group at 12 h was obviously lower than that of the 1% BSA-treated group. The scratches of the 1% BSA-treated group and the 5 μM rhamnazin-treated group were closed at 24 h, but the scratch of the 20 μM rhamnazin-treated group was still not healed. The migration rates of the 1% BSA-treated group and the 20 μM rhamnazin-treated group at 12 h were about 100% and 50%, respectively (Fig. 1C).

HUVEC cells were treated with 1% BSA, 5 μM rhamnazin or 20 μM rhamnazin, and the ability to form tubes was assessed by an in vitro angiogenesis assay.1 The results showed that the number of lumens formed by the 20 μM rhamnazin-treated HUVEC cells was significantly lower than that of the 1% BSA- and 5 μM rhamnazin-treated cells (Fig. 1E). We also measured the tube network length of HUVEC cells treated with 20 μM rhamnazin, 5 μM rhamnazin or 1% BSA (Fig. 1F). The results suggest that 20 μM rhamnazin blocked tube formation compared with 1% BSA and 5 μM rhamnazin in HUVEC cells.
Metabolic condition of animal model
In the NaOH-induced alkali burn model in Sprague Dawley rats (60 males), the 60 rats were randomly divided into two treatment groups: a PBS-treated group and a 20 μM rhamnazin-treated group (Fig. 2A), with eye drops applied 4 times per day. We measured the body weight and eyeball weight of the rats at 4, 7, 10 and 14 days (Fig. 2B and C).

The results showed that there were no obvious differences in body weight or eyeball weight between the PBS-treated group and the 20 μM rhamnazin-treated group.
Rhamnazin inhibited corneal neovascularization
After the alkali burn model was completed, we observed the corneal neovascularization of rats in the rhamnazin-treated group and the PBS-treated group. On the first day after the alkali burn, corneal neovascularization began to form at the limbus. The amount of corneal neovascularization at the center and limbus in the rhamnazin-treated group was obviously less than that in the PBS-treated group at D14 (Fig. 3A). Histologic examination showed that treatment with PBS resulted in more neovascularization in the central cornea than treatment with rhamnazin at D14.

We also calculated the CNV area of the cornea at D0, D4, D7, D10 and D14. The CNV area in the PBS-treated group was significantly higher than in the rhamnazin-treated group at each time point, and much higher at D7 and D14 (Fig. 3C). Over time, the CNV area of the PBS-treated group increased considerably, whereas that of the rhamnazin-treated group showed only a subtle increase. The inflammatory index of the PBS-treated group increased at D7 and D10, while the inflammatory index decreased significantly from D1 to D14 in the rhamnazin-treated group. The inflammatory index of the PBS group was higher than that of the rhamnazin group at each time point (Fig. 3B).
Rhamnazin regulates the expression of VEGF and PEDF
IF detection of PEDF and VEGF expression was used to assess the proangiogenic and antiangiogenic effects in the PBS group and the rhamnazin group at day 14 (Fig. 4A and B). VEGF staining in the rhamnazin group was infrequent, while the signal was clearly observed in the PBS group. PEDF staining was much stronger in the rhamnazin group than in the PBS group. We also evaluated the expression of VEGF and PEDF by western blot (Fig. 4C and D). We found that VEGF protein levels in the PBS-treated group were higher than in the rhamnazin-treated group. Rhamnazin significantly upregulated PEDF expression at D14. Although the mechanism of rhamnazin is still unclear, our study suggests that rhamnazin regulated the expression of VEGF and PEDF in the corneal alkali burn model, suggesting that rhamnazin may play a direct role in the inhibition of neovascularization.
Rhamnazin inhibits VEGFR2/STAT3 signal pathway
To investigate the mechanism by which rhamnazin inhibits neovascularization, we examined the expression levels of VEGFR2-related signaling pathways in HUVEC cells (Fig. 5A). HUVEC cells were cultured in the presence of VEGF. Western blot assay showed that the p-VEGFR2 protein content in HUVEC cells treated with 20 μM rhamnazin was obviously decreased compared with the PBS-treated group and the negative control group (Fig. 5B). To understand the signaling cascade initiated by VEGFR2, we also assessed the expression of p-STAT3, p-MAPK and p-Akt (Fig. 5A and C-F). Rhamnazin down-regulated the expression of p-STAT3 in VEGF-treated HUVEC cells. We also found that p-MAPK was robustly decreased in rhamnazin-treated HUVEC cells. The expression of p-Akt was significantly down-regulated by rhamnazin in VEGF-cultured HUVEC cells. These data suggest that down-regulation of the VEGFR2/STAT3/MAPK/Akt signaling pathway by rhamnazin plays an important role in inhibiting angiogenesis in HUVEC cells (Fig. 6).

Because the subcellular localization of STAT3 is closely related to its activity, immunofluorescence and confocal imaging were used to assess the subcellular localization of STAT3 in VEGF-treated cultured HUVEC cells (Fig. 5B). In the VEGF group, the fluorescence intensity of STAT3 in HUVEC cell nuclei was much stronger than in the VEGF + rhamnazin group. The nuclear translocation level of STAT3 in the VEGF + rhamnazin group was significantly lower than in the VEGF group. These results showed that rhamnazin inhibited angiogenesis by regulating the STAT3-dependent signaling pathway.
Discussion
Ocular alkali burn is one of the most difficult emergencies in ophthalmology, and its treatment has long been a focus and a difficulty in the field. Ocular alkali burn damages the ocular surface tissue, which can lead to epithelial necrosis, exfoliation, corneal perforation, corneal neovascularization, and so on.9,29,31 Controlling the inflammation, stabilizing the ocular surface, and inhibiting corneal neovascularization are key to the recovery of visual acuity in patients with corneal alkali burn.10,28 Neovascularization is the result of an imbalance between factors that inhibit and promote angiogenesis after corneal injury.11,30 With the further study of neovascularization, VEGF has been shown to play a key role.12 The study of the mechanism of corneal neovascularization and of drugs that inhibit angiogenesis is of great significance for the treatment of corneal neovascularization.

In this study, the corneal alkali burn model is a representative model of ocular neovascularization and inflammatory diseases. It can lead to corneal ulceration, severe keratitis, corneal neovascularization, and the formation of corneal scars; it is therefore widely used to study the mechanism and treatment of corneal inflammation and angiogenesis. In recent years, rhamnazin has been found to have a series of pharmacological properties, including antioxidant and antitumor functions, but its anti-angiogenic mechanism has not been clearly explained.4-6 In the present study, we used this animal model to investigate the effects of rhamnazin on keratitis and neovascularization, treating the corneas of alkali-burned rats with 20 μM rhamnazin. The results showed that the corneal inflammation index and the neovascularization area in the rhamnazin-treated group were smaller than those in the PBS group. The findings suggest that rhamnazin indeed inhibits corneal neovascularization, reduces corneal inflammation and promotes corneal epithelial repair. The in vitro results showed that rhamnazin can not only inhibit the proliferation of HUVEC cells, but also suppress HUVEC cell migration and tube formation. Therefore, we have shown that rhamnazin can inhibit the formation of corneal neovascularization both in vitro and in vivo.
Rhamnazin can inhibit corneal neovascularization, but its specific mechanism has not been elucidated. Studies have indicated that the anti-angiogenic mechanism of rhamnazin mainly involves the vascular endothelial growth factor receptor (VEGFR).6 VEGFR belongs to the tyrosine kinase family of proteins. There are four VEGF receptors: VEGFR-1 (Flt-1), VEGFR-2 (Flk-1/KDR), VEGFR-3 (Flt-4) and neuropilin-1.13,14 VEGFR-1 and VEGFR-2 are mainly expressed in vascular endothelial cells, whereas VEGFR-3 is mainly expressed in lymphocytes.15 After the invasion of new blood vessels, at least six different angiogenesis-related growth factors are secreted, among which VEGF is the most important angiogenic factor.16 The specific effects of VEGF on vascular endothelial cells are mediated by two receptor tyrosine kinase (RTK) regulatory families, VEGFR1 and VEGFR2.17 Sugimachi et al. found that the expression rate of VEGFR2 in tumor regions is 100%, while the expression of VEGFR1 does not differ from normal cells, further indicating that VEGFR2 is closely related to neovascularization.18 Recent studies have shown that when Notch or VEGFR3 is absent, a small amount of VEGFR2 can still drive inflammatory neovascularization, while VEGFR2 alone or combined with VEGFR3 upregulates endothelial DLL4 and promotes neovascularization.19 Fengyun Dong found that DHA can downregulate the expression of VEGFR2 to inhibit the proliferation and migration of HUVECs.20 To investigate whether rhamnazin inhibits corneal neovascularization through VEGF and its receptor VEGFR2, the expression levels of VEGF and VEGFR2 were detected by western blot. The results showed that the expression levels of VEGF and p-VEGFR2 in the cornea of the PBS-treated group were significantly higher than those in the rhamnazin-treated group, in which the expression of both proteins decreased significantly. We also demonstrated by immunofluorescence that the expression of VEGF in the cornea was lower in the rhamnazin-treated group than in the PBS group. These results confirm our hypothesis that rhamnazin inhibits corneal neovascularization by downregulating the expression of VEGF and its receptor VEGFR2.

The development of new blood vessels is mediated by a complex array of cellular and molecular factors, among which VEGF and PEDF play important roles. PEDF is an effective endogenous anti-angiogenic factor that is highly expressed in the cornea; studies have shown that exogenous expression of PEDF inhibitors in the corneal stroma leads to corneal neovascularization.21 VEGF, in contrast, plays an important role in promoting angiogenesis and the development of inflammation, and its expression in vascular endothelial cells is significantly higher in inflamed and neovascularized corneas.22 Therefore, disturbing the balance between VEGF and PEDF may be the pathological mechanism underlying the development of corneal neovascularization.23 In our study, the expression of VEGF in the alkali-burned cornea was significantly increased and the expression of PEDF was significantly decreased in the PBS control group. However, in the 20 μM rhamnazin treatment group, the expression of VEGF in the corneal tissue of the alkali-burned rats was significantly downregulated, while the expression of PEDF was significantly upregulated. Thus, rhamnazin reestablished the balance between VEGF and PEDF, thereby inhibiting the formation of new blood vessels caused by the alkali burn.
VEGFR2 can efficiently activate multiple downstream signaling components, including mitogen-activated protein kinase (MAPK), the serine/threonine kinase Akt, and signal transducer and activator of transcription 3 (STAT3), thereby promoting tumor growth and vascular endothelial cell proliferation, migration and tube formation.24-26 Previous studies have shown that inflammation is closely related to angiogenesis and lymphangiogenesis. VEGFR2 controls angiogenesis and lymphangiogenesis and mediates multiple cell signaling pathways involved in the inflammatory reaction. X. Zhang showed that indirubin can inhibit the JAK/STAT3 signaling pathway in a VEGFR2-mediated tumor vascular inflammation model by inhibiting the migration and tube formation of vascular endothelial cells and the release of inflammatory mediators.27 Therefore, we hypothesized that rhamnazin may inhibit neovascularization by regulating the downstream factors associated with VEGFR2. To confirm this hypothesis, we examined the expression levels of VEGFR2, STAT3, MAPK, and Akt in VEGF-treated HUVEC cells under different intervention conditions. The results showed that the protein expression levels of p-VEGFR2, p-STAT3, p-MAPK and p-Akt in the rhamnazin treatment group were significantly lower than those in the control group. Immunofluorescence showed reduced levels of STAT3 in the nuclei of VEGF-treated HUVEC cells in the rhamnazin group. These results confirmed that rhamnazin inhibits neovascularization through inhibition of VEGFR2 and its downstream STAT3/MAPK/Akt signaling pathway.

Overall, this study demonstrated that local administration of rhamnazin can inhibit inflammation and neovascularization after alkali burn. Its anti-inflammatory and anti-angiogenic effects are achieved by regulating the STAT3/MAPK/Akt signaling pathway via the VEGFR2 protein. Therefore, rhamnazin is a potential drug for the treatment of keratitis and neovascularization.
Conflicts of interest
None.

Fig. 6 The mechanism of rhamnazin-induced inhibition of angiogenesis in corneal epithelial cells. Rhamnazin inhibits corneal neovascularization by down-regulation of the VEGFR2/STAT3/MAPK/Akt signaling pathway.
Optimization of isolated and combined pad foundation using computer aided application of finite element approach
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of finite element results is mesh dependent, mesh selection forms a very important step in the analysis of isolated and combined pad footing foundations. SAFE is an ultimate tool used in the design of concrete floor and foundation systems and hence provides a suitable means for the user. From framing layout all the way through to detail drawing production, SAFE integrates every aspect of engineering design in one easy and intuitive environment. SAFE provides unmatched benefits to the engineer with its truly unique combination of power, comprehensive capabilities, and ease of use. In the context of this research, we have plotted graphs showing the relationship between the nodes and displacement, together with the stress patterns generated by the software. It is understood from the graphs that using multiple elements in the meshing process brings the footing to equilibrium. The research also presents the deformed shape diagram, which shows the deformation of the footing due to the imposed load (stress), as well as the bending moment diagrams of the footings. The basic structure and analysis of the single and double pad footing foundations have been designed using finite element analysis (FEA) with the failure planes being considered. From the results obtained, it is concluded that FEA is an ideal design method that breaks foundation design into basic elements and nodes and shows the action of the loading on the footings.
Introduction
The foundation of a structure is the element that connects it to the ground and transfers loads from the structure to the ground; it is classified as either shallow or deep. Accordingly, foundations have been given greater attention in recent years (Kumar et al., 2015). Geotechnical engineering is the application of soil mechanics and rock mechanics to the design of the foundation elements of a structure. The finite element method is such a widely used analysis-and-design technique that it is essential that undergraduate engineering students have a basic knowledge of its theory and applications (Hutton, 2015). It is very important in geotechnical engineering in the following ways: failure analysis (what will be the failure plane in the soil mass when the foundation is loaded? how far will it extend? how far would it affect?), analysis of rock fractures, analysis of the forces in retaining walls and the settlement of foundations, assessing whether piling would affect adjacent existing buildings, and failure that can happen due to pore pressure. Shallow (pad) foundations are specifically used to support individual point loads, such as those transmitted directly from the superstructure to the receiving soil strata; the construction of this type of foundation is governed by certain facts, namely the soil type and the nature of the load, as stated by Kumar et al. (2015). The foundation is the most important part of the structure, and for that reason it must be given serious attention in design and construction. Finite element analysis is a very accurate means, compared with other methods, of analyzing a footing or foundation. Finite element analyses have also proved very useful for planning instrumentation studies by showing where objects can be located to best advantage through meshing, which is the primary aim of the finite element method. This type of foundation is often used when the structural load will not cause excessive settlement of the underlying soil layers. In general, shallow foundations are more economical to construct than deep foundations. Pad foundations are used to support an individual point load such as that due to a structural column. According to Kharagpur (2017), pad foundations are shallow foundations used to support an individual point load such as that transferred from the superstructure through a structural column. They are spread footings, single footings or double footings, which may be circular, square or rectangular. They usually consist of a block or slab of uniform thickness, but they may be stepped if they are required to spread the load from a heavy column. According to Farrokhzad et al. (2011), a node is a specific point in a finite element at which the value of the field variable is to be explicitly calculated. Exterior nodes are located on the boundaries of the finite element and may be used to connect an element to adjacent finite elements. Nodes that do not lie on element boundaries are interior nodes and cannot be connected to any other element. Soil is considered by the engineer as a complex material produced by the weathering of solid rock, and it is the most important material used in the construction of civil engineering structures (Kouzer and Kumar, 2010). Among all parameters, the bearing capacity of the soil, i.e. its ability to support the load coming over its unit area, is very important. The principal factors that influence ultimate bearing capacity are the type of soil, the width of the foundation, the soil weight in the shear zone, and the surcharge.
Structural rigidity and the contact stress distribution do not greatly influence bearing capacity. Bearing capacity analysis assumes a uniform contact pressure between the foundation and the underlying soil. With other factors unchanged, the type of soil failure, the depth of the foundation and the position of the water table also govern the bearing capacity of the soil (Madan et al., 1989). Similarly, the effect of the depth of the footing on the bearing capacity of the soil has been studied: in general, with other factors constant, the bearing capacity increases as the depth or width of the foundation increases. In the case of local shear failure, the bearing capacity of the strip footing is found to be the lowest in comparison with square, circular and rectangular footings. The aim of the study was to model isolated and combined pad footings using a computer-aided application of the finite element approach.
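To make the dependence on these factors concrete, the sketch below evaluates a textbook ultimate bearing capacity expression; the bearing-capacity factor formulas and the input values are standard illustrative assumptions, not taken from this study.

```python
import math

def bearing_capacity_qu(c, gamma, D, B, phi_deg):
    """q_u = c*Nc + gamma*D*Nq + 0.5*gamma*B*Ngamma for a strip footing.
    The Nc/Nq/Ngamma expressions are common textbook (Prandtl/Vesic) forms."""
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1) / math.tan(phi) if phi_deg > 0 else 5.14
    Ngamma = 2 * (Nq + 1) * math.tan(phi)
    return c * Nc + gamma * D * Nq + 0.5 * gamma * B * Ngamma

# Undrained clay as assumed in this study: c = 100 kPa, phi = 0; the unit
# weight, founding depth and footing width below are illustrative values.
print(bearing_capacity_qu(c=100, gamma=18, D=1.0, B=2.4, phi_deg=0))  # kPa
```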
Study Area
This work is based on the optimization of single and combined footings (pad foundations) using finite element analysis. The general techniques and terminology of finite element analysis are introduced with reference to a volume of some material(s) having known physical properties; the volume represents the domain of a boundary value problem to be solved. Note that this implies that an exact mathematical solution is sought, that is, a solution in the form of a closed-form algebraic expression of the independent variables. In practical problems the domain may be geometrically complex, as is, often, the governing equation, and the likelihood of obtaining an exact closed-form solution is very low. Therefore, approximate solutions based on numerical techniques and digital computation are most often obtained in engineering analyses of complex problems. Finite element analysis is a powerful technique for obtaining such approximate solutions with good accuracy. The foundation of a structure requires a reliable soil to enable it to withstand the proposed structure, and the necessary measures are taken to ensure that a suitable soil is chosen in order to achieve a suitable design.
Foundation Modelling
The footings were assumed to be founded on clay with an undrained shear strength of 100 kPa. This value is typical of residual clay, which has vane shear strengths in the range of 70 to 120 kPa. For earthquake loading the undrained shear strength will be greater because of the rate of loading: we observed an increase of about 40% in the undrained shear strength of this soil when tested at rates of loading comparable to those during a seismic event. Thus the value of 100 kPa used herein for earthquake loading was equivalent to the lower end of the range of values found in a normal site investigation. The factors considered while modeling this foundation are listed below: the energy dissipation mechanism and the factor of safety.
Energy dissipation mechanism
The dynamic response of a structure depends on its mechanical properties and the characteristics of the induced excitation. Mechanical properties that are effective in mitigating the structure's response to certain inputs might have an undesirable effect under other inputs. Ground motions vary significantly from one another in amplitude, frequency content and duration. These characteristics are influenced by the source mechanism and travel path and modified by local geological and soil conditions. The ability to dissipate the induced energy is crucial to the earthquake resistance of a structure. Various energy dissipation mechanisms have been proposed to enhance structural response (Oyenuga, 2001). These energy dissipation mechanisms can be of various types, such as viscous, rigid-plastic, elastoplastic, viscoplastic, or combinations of them.
Factor of safety
Where earthquake loads are included, the minimum safety factor for the foundation shall be 1.1. The 2012 IBC Code and Commentary, volume 2, states the following: "the safety factor for stability of foundations predates the development of load combinations."
Mesh Generation
Each element in the finite element model is addressed by its number, as is each node. The inter-connectivity of the elements is determined by the common nodes shared by the elements. In a model with few elements and nodes, the user can manually divide the domain, number each element and node, and keep track of the element connectivity; this process is called mesh generation. However, in models with many nodes and elements, the effort required to divide the domain into elements and attend to connectivity is great, and it becomes difficult to accomplish this task without committing errors. There are, however, several finite element preprocessors that do this job automatically once the geometry is defined (Bowles, 1997), so users can devote more time to interpreting results. Shephard has reviewed the current trends in mesh generation. Although there are several ways to generate meshes, these methods fall into two broad categories:
Mapping techniques
This type of mesh generation is best suited when the geometry is simple, as in the case of a rectangle or a cuboid. Typically the user needs to choose the number of elements on each of the edges that define the geometry and the element concentration along the edges. The software then generates the mesh simply by joining nodes on the opposite edges. The software considered here for generating the mesh is SAFE.
Free mesh generation
This method of generation is best suited for models with complicated geometry; SUPERTAB has this capability. The model is broken down into sub-areas and sub-volumes. On each of the curves of every sub-area and sub-volume, the number of elements and their concentrations are selected. The software then generates a mesh that is consistent with the selected values and satisfies the requirements on the aspect ratios and the distortion factors of the elements. The mesh generation method considered here is the mapping technique, where we chose a rectangular footing base for the double footing, with dimensions 2000 mm by 4050 mm, and a rectangular footing base for the single footing, with dimensions 2400 mm by 2800 mm. The maximum mesh size for both footings is 200 mm, and the mesh was generated with the aid of the SAFE software, as will be discussed in chapter four of this work.
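As a rough analogue of this mapped meshing step, the sketch below builds a structured quadrilateral mesh (node coordinates plus element connectivity) for the single footing with the 200 mm maximum element size; the node-ordering convention is an assumption for illustration, not SAFE's internal scheme.

```python
import numpy as np

def map_mesh(Lx, Ly, h_max):
    """Mapped (structured) quad mesh of a rectangular footing: returns node
    coordinates and element connectivity for a maximum element size h_max."""
    nx, ny = int(np.ceil(Lx / h_max)), int(np.ceil(Ly / h_max))
    xs, ys = np.linspace(0, Lx, nx + 1), np.linspace(0, Ly, ny + 1)
    nodes = np.array([(x, y) for y in ys for x in xs])
    elems = np.array([(j * (nx + 1) + i, j * (nx + 1) + i + 1,
                       (j + 1) * (nx + 1) + i + 1, (j + 1) * (nx + 1) + i)
                      for j in range(ny) for i in range(nx)])
    return nodes, elems

# Single footing, 2400 mm x 2800 mm, 200 mm maximum mesh size
nodes, elems = map_mesh(2400, 2800, 200)
print(len(nodes), "nodes,", len(elems), "elements")   # 195 nodes, 168 elements
```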
Data Analysis
The objective is to develop a practical and efficient procedure for single and double footings (pad foundations) with a computer-aided application using the finite element approach (David, 2004). The thesis is that, for many problems, minimizing the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides the optimal grid configuration. The research work considers regular-shaped concrete footings with the following overall dimensions: for the single footing, a length of 2800 mm and a width of 2400 mm with a 450 mm column centered; for the double footing, a length of 4050 mm and a width of 2000 mm with two 450 mm columns spaced at 2500 mm. In Step 1, the dimensions and basic grid are defined, which serve as a guide for developing the model.
Step 2: Define the material
In this Step, material and section properties for the footings (area objects) and columns (line objects) are defined.
Click the Define menu > Materials command to access the Materials form.
Highlight 4000Psi in the Materials area, and click the Modify/Show Material button to display the Material Property Data form. That form lists the properties associated with 4000psi concrete; this is the concrete property that will be used in our model.
Click the OK button to accept this material as defined.
In the Materials area, highlight A615Gr60. Click the Modify/Show Material button to display the Material Property Data form. This form lists the properties associated with Grade 60 reinforcing; this is the rebar property that will be used in our model.
Click the OK button to accept this material as defined.
Click the OK button on the Materials form to accept all of the defined materials.
Click the File menu > Save command to save your model. Click the OK button to accept the Column Property definitions
Step 3: Define static load patterns
In this Step, the dead and live static load patterns are defined. That is, we will name the various types of loads and specify the self-weight multipliers. The loads will be assigned to objects, and the values for the loads specified (uniform dead load of 30 kN/m² and live load of 50 kN/m²), in Step 8.
Click the Define menu > Load Patterns command to access the Load Patterns form. Note that load patterns DEAD and LIVE are defined by default. Recall that the project will be analyzed for the dead load plus the self-weight of the structure. Thus, the Self Weight Multiplier should be set equal to 1 (this will include 1.0 times the self-weight of all members) for the DEAD load. Only the DEAD load pattern should have a non-zero Self Weight Multiplier.
Click the OK button to accept the defined static load patterns
Step 4: Define load cases
In this Step, the Load Cases are defined. This is where the type of analysis is specified.
Step 5: Draw Object
In this Step, the columns will be drawn with the active window set as before (i.e., the Plan View window active and the snap to points and grid intersections enabled); use the following Action Items to draw the columns. Locate the mouse cursor at a distance of 0.9 m from the left inside the footing and click; for the double footing, do the same for the two columns at a distance of 2.5 m apart. Click the Select menu > Select > Pointer/Window command or press the Esc key on the keyboard to exit the Draw Columns command. Locate the mouse cursor just above and to the left of grid intersection C6, hold down the left mouse button, drag diagonally to just below and to the right of D5, and release the mouse button. The status bar in the lower left-hand corner should show "2 Points, 1 Lines, 1 Areas, 4 Edges selected" for the single footing, and "4 Points, 2 Lines, 2 Areas, 8 Edges Selected" for the double footing. If the selection is not correct, simply click the Select menu > Clear Selection command and try again. Click the Edit menu > Delete command or press the Delete key on the keyboard to remove the columns enclosed in the window. Click the File menu > Save command to save your model. Click the View menu > Set Default 3D View command to display the model in 3D; note how the columns extend below the slab. Click the View menu > Set Plan View command to return to the Plan View before continuing the project.
Step 6: Add Design Strip
In this step, design strips will be added to the model. Design strips determine how reinforcing will be calculated and positioned in the slab. Forces are integrated across the design strips and used to calculate the required reinforcing for the selected design code (Satis and Santhian 2010). Typically design strips are positioned in two principal directions: Layer A and Layer B. Similar to the previous sections, ensure that the Plan View is active and the snap to points and grid intersections features are enabled. Add design strips to the model as follows: Left click at the bottom ends of the selected design strips; the status bar in the lower left-hand corner should now show "2 Points, 1 Areas, 4 Edges, 2 Design Strips selected." Click the Edit menu > Align Points/Lines/Edges command to display the Align Points/Lines/Edges form. Select the Trim Line/Edge/Tendon/Strip Objects option. Click the OK button to leave the Align Points/Lines/Edges form. The Y direction design strips to the left of grid line D should now be trimmed to the edge of the slab. The trimming of the design strips was done for display purposes only; the program will automatically ignore the portion of a design strip that extends beyond a slab edge.
Click the File menu > Save command to save your model.
Step 7: Display option
In this Step, the set display options will be used to alter the objects displayed.
Click the View menu > Set Display Options command. When the Set Display Options form displays, uncheck the Design Strip Layer A and Design Strip Layer B check boxes in the Items Present in View area; this action will turn off the display of the design strips. Click the OK button to accept the changes, and the model now appears without the design strips displayed.
Step 8: Assign load
In this Step, the dead and live loads will be assigned to the slab. Ensure that the Plan View is still active and that the program is in the select mode (Draw menu > Select > Pointer/Window command). In the Uniform Loads area, type 30 in the Uniform Load edit box. Note: additional load patterns may be defined by clicking on the "…" button next to the load pattern name. A "…" button returns you to the form used to define the item in the adjacent drop-down list or edit box, which in this case is the Load Patterns form. Click the OK button to accept the dead load assignment. SAFE will display the loads on the model. Use the Assign menu > Clear Display of Assigns command to remove the assignments from the display, if desired. Click anywhere on the main slab to reselect the slab, or click the Select menu > Get Previous Selection command to select the slab. Click the Assign menu > Load Data > Surface Loads command to again access the Surface Loads form. Select LIVE from the Load Pattern Name drop-down list. Type 50 in the Uniform Load edit box in the Uniform Loads area. Click the OK button to accept the live load assignment. Again, use the Assign menu > Clear Display of Assigns command to remove the assignments from the display. To review the assignments to the slab, right click on the slab anywhere that is not a beam, wall, column, drop panel, or opening to access the Slab-Type Area Object Information form. Select the Loads tab and note that the DEAD Load Pattern has a Load Value of 30 kN/m² and that the LIVE Load Pattern has a Load Value of 50 kN/m². Click the OK button to close the Slab-Type Area Object Information form.
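Because the design code chosen in Step 1 is BS 8110-1997, these service loads will be combined at the ultimate limit state using the code's standard partial factors, 1.4 for dead load and 1.6 for live load. A quick sketch of that arithmetic, assuming a hypothetical 5 kN/m² slab self-weight on top of the assigned 30 kN/m²:

```python
# Ultimate design load per BS 8110-1997: n = 1.4*Gk + 1.6*Qk
gk = 30.0 + 5.0   # assigned dead load + assumed slab self-weight (kN/m^2)
qk = 50.0         # assigned live load (kN/m^2)

n = 1.4 * gk + 1.6 * qk
print(f"Ultimate design load n = {n} kN/m^2")  # 1.4*35 + 1.6*50 = 129.0
```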
Click the File menu > Save command to save your model.
Step 9: Run Analysis and Design
In this Step, the analysis and design will be run. Click the Run menu > Run Analysis & Design command to start the analysis. The program will create the analysis model from your object-based SAFE model and will display information in the status bar in the lower left-hand corner as the analysis and design proceed. Additional information about the run may be accessed at a later time using the File menu > Show Input/Output Text Files command and selecting the filename with a .LOG extension. When the analysis and design are finished, the program automatically displays a deformed shape view of the model, and the model is locked. The model is locked when the Options menu > Lock/Unlock Model icon appears depressed. Locking the model prevents any changes to the model that would invalidate the analysis results.
Step 10: Review Analysis Results
In this Step, the analysis will be reviewed using graphical displays of the results.
Step 11: Design Display
In this Step, design results for the slab and beams will be displayed. Note that the design was run along with the analysis in Step 9. Design results are for the BS 8110-1997 code, which was selected in Step 1. Design preferences may be reviewed or changed using the Design menu > Design Preferences command (some design preferences are also set on the section property data forms); be sure to re-run the analysis and design (Step 9) if changes to the preferences are made.
Click the Display menu > Show Slab Design command to access the Slab Design form. In the Choose Display Type area, select Finite Element Based from the Design Basis drop-down list. This option displays the required reinforcing calculated on an element-by-element basis as intensity contours; integration across the defined design strips is not performed. In the Reinforcing Direction and Location area, select the Direction 2 - Bottom Rebar option. Direction 2 refers to the object local axis 2 direction. In the Show Rebar above Specified Value area, select the None option. Click the OK button to leave the Slab Design form and display the slab design results for the local axis 2 direction. Again, positioning the cursor anywhere on the slab will result in the display of the reinforcing values at the cursor and in the lower left-hand corner of the SAFE window.
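As a rough hand check on the displayed reinforcing intensities, the sketch below implements the BS 8110-1997 singly reinforced rectangular-section equations (K = M/(f_cu b d²), lever arm z, then A_s). The moment, section depth, and material strengths are assumed example values, not results taken from this model.

```python
import math

def bs8110_rebar_area(m_knm, b, d, fcu=30.0, fy=460.0):
    """Required steel area (mm^2) for a singly reinforced section to
    BS 8110-1997. m_knm in kN.m; b, d in mm; fcu, fy in N/mm^2."""
    m = m_knm * 1e6                      # kN.m -> N.mm
    k = m / (fcu * b * d * d)
    if k > 0.156:
        raise ValueError("K > 0.156: compression steel required")
    z = min(d * (0.5 + math.sqrt(0.25 - k / 0.9)), 0.95 * d)
    return m / (0.95 * fy * z)

# Example: 103 kN.m on a 1000 mm wide strip with d = 160 mm
print(f"As = {bs8110_rebar_area(103.0, 1000.0, 160.0):.0f} mm^2/m")  # ~1800
```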
Step 12: Run Detailing
In this Step, detailing will be run and displayed. Detailing may be run only after analysis and design are complete.
Click the Detailing menu > Detailing Preferences command to display the Detailing Preferences form. Use this form to set the regional standards, to control how dimensioning is displayed, to manage reinforcing bar notation, and to select the units for material quantity takeoffs. Review the settings on this form (we will accept the default selections), and then click the OK button to close the form. Click the Detailing menu > Footings Preference command to display the Footings Preference form. Click the General and Display tab. On this tab, review or alter the rebar curtailment, detailing, and callout options, as well as set how sections should be cut. We will accept the default settings. Click the Rebar Selection tab and review or change the rebar selection rules, preferred sizes, minimums, and reinforcing around openings. We will accept the default settings. Click the OK button to accept the selections and close the form. Click the Detailing menu > Drawing Sheet Setup command to display the Drawing Sheet Setup form. The sheet size, scales, title block, and text sizes can be reviewed and changed using this form. We will accept the default settings. Click the OK button to close the form.
Step 13: Create Report
In this Step, a report describing model input and output results will be created.
Click the File menu > Report Setup command to display the Report Setup Data form. In the Report Output Type area, be sure that the RTF File option is selected. In the Report Items area, uncheck the Include Hyperlinked Contents checkbox. Click the OK button to leave the Report Setup Data form. Click the File menu > Create Report command to display the Microsoft Word Rich Text File Report form. Type footings (single/double) in the File name edit box and click the Save button. A report, with a cover page bearing the bold title SAFE, will come up; it should be displayed in your word processor and will be saved to your hard disk (SAFE, 2016).
Refinement
The user needs to select the number of nodes and elements in the model. The selection may be the one that leads to the best geometric description of the domain. For example, the footing surfaces of the single and double footings could be modeled by a series of interconnected concrete cubes; the larger the number of elements, the better the model. The selection may also be based upon intuition, past experience, and engineering judgment. The mesh obtained may be adequate in some cases. In other cases, especially when singularities are present, the mesh may not be adequate to obtain results to the desired accuracy. In such cases, the mesh needs to be refined.
There are three ways of refining a finite element mesh. The H-method: this method increases the number of elements, and hence decreases the element size, while keeping the polynomial order of the shape function constant. The P-method: this method increases the polynomial order of the interpolation function while keeping the number of elements in the model constant.
The R-method: This method redistributes the nodes while keeping the element number and the polynomial order of the interpolation function constant.
Our concern here is the H-method, because the shape of the object has to remain constant: the number of elements is increased, thereby reducing the element size so that the mesh best fits the domain, as the sketch below illustrates.
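A minimal sketch of the H-method on a toy one-dimensional problem: the (linear) shape functions stay fixed while the element count doubles each pass, and refinement stops when the answer changes by less than a tolerance. The integrand and tolerance are illustrative assumptions, not the footing model itself.

```python
import math

def solve(n_elements):
    """Stand-in for an FE solve: trapezoidal (linear-element) integral
    of a sample load curve over [0, 1] using n_elements elements."""
    h = 1.0 / n_elements
    xs = [i * h for i in range(n_elements + 1)]
    f = lambda x: math.sin(math.pi * x)          # illustrative loading
    return sum((f(a) + f(b)) * h / 2 for a, b in zip(xs, xs[1:]))

n, prev, tol = 2, None, 1e-6
while True:
    result = solve(n)
    if prev is not None and abs(result - prev) < tol:
        break                                    # converged
    prev, n = result, n * 2                      # H-refinement: halve h
print(f"Converged with {n} elements: {result:.6f}")
```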
Overview of Safe
SAFE is an ultimate tool used in the design of concrete floor and foundation systems, and hence provides a suitable means for the user. From framing layout all the way through to detail drawing production, SAFE integrates every aspect of the engineering design process in one easy and intuitive environment. SAFE provides unmatched benefits to the engineer with its truly unique combination of power, comprehensive capabilities, and ease of use.
Laying out models is quick and efficient with the sophisticated drawing tools, or use one of the import options to bring in data from CAD, spreadsheet, or database programs. Slabs and foundations can be of any shape, and can include edges shaped with circular and spline curves.
Post-tensioning may be included in both slabs and beams to balance a percentage of the self-weight. Suspended slabs can include flat, two-way, waffle, and ribbed framing systems. Models can have columns, braces, walls, and ramps connected from the floors above and below. Walls can be modeled as either straight or curved. Mats and foundations can include nonlinear uplift from the soil springs, and a nonlinear cracked analysis is available for slabs. Generating pattern surface loads is easily done by SAFE with an automated option. Design strips can be generated by SAFE or drawn in a completely arbitrary manner by the user, with complete control provided for locating and sizing the calculated reinforcement. Finite element design without strips is also available and useful for slabs with complex geometries.
Comprehensive and customizable reports are available for all analysis and design results. Detailed plans, sections, elevations, schedules, and tables may be generated, viewed, and printed from within SAFE or exported to CAD packages.
SAFE provides an immensely capable yet easy-to-use program for structural designers, providing the only tool necessary for the modeling, analysis, design, and detailing of concrete slab systems and foundations (SAFE, 2016).
Table 1 and its accompanying graph illustrate the displacement-node relationship: the displacement reaches its maximum limit at a specified frequency with the node, and at higher node numbers the displacement increases with the node count; i.e., for the footing to be in equilibrium it should have multiple nodes to ensure a reliable stress distribution on the footing. Table 2 shows the results for the double footing, with the nodal assembly and the displacement at each node as computed in SAFE 2016. Fig. 6 is the graphical representation of Table 2; here the displacement moves at uniform velocity and reaches its destination at the same frequency as the node. From Fig. 5 it is noted that when the spacing between footings is changed, the maximum moment is always at the footing center and increases with increasing footing spacing. The reaction of the soil pressure and the applied force on the footing yield equilibrium and stability in the isolated footing.
Conclusion
Based on the finite-element analysis of foundations resting on footings embedded in homogeneous linear elastic soil, the following conclusions could be drawn. The displacement reached its maximum value at the same frequency with a higher node. The results indicate that, for the footing of the foundation, the considered meshing spacing ratio (200 mm) does not reveal the optimum spacing with regard to displacement. When the spacing between footings is changed, the maximum normalized moment factor is always at the footing center and increases with increasing footing spacing; Im decreases by about 67% for the footing spacing of 1.675 m and increases by about 18% for the 2.5 m spacing. This is because the span supported by the footing increases with increasing spacing, leading to greater moments. The increase with footing length can be attributed to the increase in the unsupported length within the footing, which leads to an increase in the moment.
The dimensionless displacement factor decreases markedly as the meshing increases, reflecting the increase in displacement with increasing footing spacing.
Recommendations
Based on the analysis of the footing modeling, it is deduced that, for a footing to be suitable and appropriate, it is necessary to have a sufficiently fine mesh in order to achieve a good analysis. For combined footings, the spacing should not exceed 2.5 m in order to achieve a good stress analysis for the design.
The size of the footing should vary with the magnitude of load and column size that the footing supports.
An Updated Review: Opuntia ficus indica (OFI) Chemistry and Its Diverse Applications
The beneficial nutrients and biologically active ingredients extracted from plants have received great attention in the prevention and treatment of several conditions, including hypercholesterolemia, cancer, diabetes, cardiovascular disorders, hypoglycemia, hyperlipidemia, edema, joint pain, weight-control problems, eye vision problems, and asthma, as well as for neuroprotective effects. Highly active ingredients predominantly exist in the fruit and cladodes, known as phytochemicals (rich contents of minerals, betalains, carbohydrates, vitamins, antioxidants, polyphenols, and taurine), which are renowned for their beneficial properties in relation to human health. Polyphenols are widely present in plants and have demonstrated pharmacological ability through their antimicrobial, anti-inflammatory, anti-bacterial, and antioxidant capacity, and the multi-role action of Opuntia ficus indica makes it suitable for current and future usage in cosmetics for moisturizing, skin improvement, and wound care; as a healthful food providing essential amino acids and macro and micro elements for body growth; and in building materials as an eco-friendly and sustainable material, as a bio-composite, and as an insulator. However, a more comprehensive understanding and extensive research on the diverse array of phytochemical properties of cactus pear are needed. This review therefore aims to gather and discuss the existing literature on the chemical composition and potential applications of cactus pear extracts, as well as highlight promising directions for future research on this valuable plant.
Introduction
The natural extracts obtained from seeds and plants present in natural surroundings, and the discovery of novel advantages linked to their consumption, have supported, promoted, and organized studies in the field of natural extraction and its uses. Therefore, extracts from natural plants and their components from the Opuntia family have been explored [1]. Opuntia ficus indica (OFI) is a major source of fruit and is found in semiarid and arid regions of several countries in the world, including Italy, Spain, Mexico, South Africa, the Middle East, Australia, and North, Central, and South America. For industrial purposes, it is also cultivated on a suitable surface [2]. The cactus fruit produced by OFI is called nopal fruit or cactus fig. It has an oval shape, and its length is generally up to 3 cm (Figure 1).
Opuntia is a xerophyte plant (meaning it is able to survive in conditions poor in liquid water) that contains 200 to 300 species. The three most important types are produced in Italy and Spain. In South African kinds, cladodes with a variety of shapes exist, while in Chile a popular variety of fruit is green in color, even when it is ripe [3,4]. The color shades of OFI fruits are generally red, orange, magenta, and lime green. The difference in color is due to the inequality in the constituent pigments present [5]. The plant is able to survive in harsh environments due to its succulent leaves, which play an important role in thermal regulation, drought resistance, and water storage ability. The plant is appealing as food because of its productivity in converting water into edible energy, and the succulent pads of OFI species act as a food source. The keywords Opuntia ficus indica were applied to analyze the literature in two familiar databases, which show a rising number of scientific articles on this subject in the most recent decade, as reported in Figure 2. The trend in the number of scientific articles related to OFI highlights the large and increasing interest in this topic.
This review covers the developments in the area of the chemical and pharmacological properties of relevant compounds separated from the aerial components (cladodes, fruits, and flowers) of OFI. Furthermore, the main current and future uses in human foods, medicinal applications, disease prevention and rehabilitation, cosmetics, the bioremediation of wastewaters, building materials, and clean-energy production will be highlighted.
Nutritional and Phytochemicals
Though morphological standards do not match, in the literature the term cactus leaves is commonly used for the flattened stem segments of the OFI plant (see Figure 1). Stems are generally made of the colorless medullar parenchyma, with the pale green chlorenchyma enclosing the photosynthetically efficient agent. The latter is enclosed together with the spines (altered leaves) and the trichomes, or numerous cellular hairs, both producing the named areole, which is a typical feature in forming the Cactaceae family. Opuntioideae is a subfamily depicted with harsh, short thorns and spines, named glochids. The areole is the flower source [4]. Glochids are made of pure crystalline cellulose [14]. Generally, spines are composed of 96.00% polysaccharides, which are further divided into 49.70% cellulose and 50.30% arabinan; the rest are fats, ash, and crude waxes, and the residual is lignin. The length of a cellulose microfibril is 0.04 mm, and the diameter is 6-10 µm; it is loosely embedded and lies parallel in an arabinan matrix. The latter is found in the solid gel, which is partly strongly woven with cellulose [15]. This polymer was a 50:50 combination of natural arabinan cellulose composite and swollen gel, without the natural components of hemicelluloses [16]. The spine's length is 3.0 cm, and it represents 8.40% of the overall cladode mass. Spines provide protection against phytophages, light reflection, and stem shielding and, therefore, decrease water loss [4].
The composition of cladodes depends on multiple factors, such as edaphic factors (physical and chemical characteristics of soil, including pH, salinity, and structure) in the farming area, the weather conditions during the growth season, and the availability of nutrients during growth [17][18][19]. The different varieties and breeds have different nutritional contents and vary across different regions. There is no universally accepted standard for determining their specific nutritive values. The composition of de-barbed cladodes has been studied by Mohamed E. Malainine [20]. A sample of 100.00 g of dehydrated matter has 19.60 g of ash, 3.60 g of lignin, 7.20 g of lipids and waxes, 21.60 g of cellulose, and 48.00 g of additional polysaccharides, although unrefined proteins were not evaluated. Researchers [17,[20][21][22] stated that 100.00 g of dry cladodes includes 64.00-71.00 g of carbohydrates, 19.00-23.50 g of ash, 18.00 g of fibers, 1.00-4.00 g of lipids, and 4.00-10.00 g of proteins (complex molecules), including 1.00-2.00 g of digestive proteins. The quantity of active compounds in 100.00 g of fresh cladode is 3.00-7.00 g of carbohydrates, 1.00-2.00 g of minerals, 0.20 g of lipids, 0.50-1.00 g of proteins, and 1.00 g of fibrous substances [17,23]. Fresh cladodes showed greater protein and H2O contents. Remarkably, 112.00 kg/ha of phosphate supplementation (e.g., Diammonium Phosphate (DAP), Monoammonium Phosphate (MAP), and NPKs) enhanced the mineral phosphate content of cladodes, which plays a key role in the development and metabolism of the plant [24]. The fibrous structure of a plant during growth is formed in the cortex and decayed in the core. Overall, fibers and proteins decrease with time [17,24,25]. The large fiber and calcium contents are valuable; moreover, cladodes are determined to be more beneficial than lettuce due to their natural composition pattern [23,24]. Furthermore, the 88.00-95.00% water content makes cladodes a low-calorie diet, with 27 kcal/100 g [26]. The corresponding compounds given in the literature are summarized in Table 1. OFI contains several kinds of carbohydrates. One of the sugars found in OFI is a form of fructose named inulin, a soluble fiber that exists in many plant-based foods. Inulin works as a prebiotic, improves Ca absorption ability, has cholesterol-lowering effects and a low glycemic index, boosts heart health, decreases general calorie intake, and is effective for weight control. The important carbohydrates stated in OFI are galactose, galacturonic acid, and glucose. The high sugar contents observed are due to the presence of glucose (9.30 to 12.00 g/100 g) and galacturonic acid (6.5 to 8.8 g/100 g); arabinose (1.5 to 2 g/100 g) and D-xylose (1.9 to 2.01 g/100 g) were detected at lower levels, while rhamnose (0.35 to 0.79 g/100 g) was found in trace amounts. The monomeric sugar content analysis of a dry cladode sample obtained from south Italy gives the same report, confirming the large content of glucose (15.3 g/100 g), hexuronic acid (9.6 g/100 g), and galactose (3.3 g/100 g) [27]. These special properties make cactus pear sweet, a very good natural food, and an additive in different food materials.
Proteins
Amino acids perform very important functions in human fitness and biology. They create proteins, which are essential for organ and tissue development, repair, and preservation. They are involved in metabolic activities (making energy, hormones, and neurotransmitters), immune functions, wound healing, muscle development and repair, and the adjustment of appetite, mood, and sleep [28][29][30]. The majority of the amino acids observed in the cactus cladode are glutamine, and the rest are lysine, valine, leucine, arginine, isoleucine, and phenylalanine. In seeds, glutamic acid is found as the major amino acid, varying in percentage from 15.73% to 20.27%, followed by arginine (4.81% to 14.62%) [28,29]. By contrast, the main amino acids in cactus fruits, as a share of total amino acid contents, are proline (46%) and taurine (15.78%), while ornithine is the only amino acid that is present in trace amounts in cladode, seed, and fruit. Therefore, pulp, fruit, and seeds can be counted as extremely great sources of amino acids [28,[30][31][32] (see Table 2). Humans need to consume a balanced diet to obtain key amino acids, especially lysine, isoleucine, valine, and leucine, which are essential to the human body.
Fats
Fatty acids are essential for human health and are present in a lot of foodstuffs (plant-based and animal goods). Fatty acids help maintain good eye health, reduce age-related vision difficulties, lower joint pain, enhance joint flexibility, and are beneficial for brain function and growth. A general representation of fatty acids is reported in Figure 3. Studies on the extracted lipids from the seed, pulp, and skin of OFI [33] reported that the maximum quantity of palmitic acid (C16:0) was found in cladode skin (20.76 g/100 g). A large quantity of oleic acid (C18:1) exists in pulp (23.26 g/100 g); the maximum amount of linoleic acid (C18:2) was found in pulp (48.86 g/100 g), while a quantity of 11.44 g/100 g of linolenic acid (C18:3) was found in skin (Table 3). Trace amounts of polyunsaturated fatty acids, eicosenoic acids, and eicosatetraenoic acids were observed in the skin, seed, and pulp, respectively. The unsaturated acids (Z)-octadec-9-enoic acid (oleic, C18:1) and cis,cis-9,12-octadecadienoic acid (linoleic, C18:2) represent 90% of all fatty acids calculated [31,33].
Vitamins
Vitamins play a key role in human growth, immune system protection, and metabolic stability. The basic vitamins necessary for human health are Vitamin A (which stabilizes vision and promotes healthy skin), Vitamin C (which is an antioxidant and helps the immune system), Vitamin D (which enhances calcium absorption and bone strength), Vitamin E (which protects cells from destruction), and the B-complex vitamins, which maintain good nerve function and energy consumption. OFI contains a significant quantity of vitamins (see Table 4). The concentration of vitamins in different portions of OFI varies and depends on the area of harvest [27]. The fruit, specifically its pulp, is enriched in ascorbic acid at levels up to 478.82 mg/100 g (Table 5); however, the skin from the fruits has a low concentration of σ-tocopherol (26 mg/100 g) [34,35]. A rich quantity of α-tocopherol (1760 mg/100 g) is also extracted from the fruit's skin (Table 5). The amount of ascorbic acid in cactus pear is 7 to 22 mg/100 g. Ascorbic acid, α-tocopherol, and other tocopherols are present in all parts of the plant [35][36][37]. Folic acid, thiamine, pyridoxine, riboflavin, niacin, and lycopene are present in trace amounts only in the fruit pulp, while in cladodes, thiamine and niacin are present in trace amounts [36]. Since vitamins are significant molecules for humans and adequate consumption is necessary for the correct implementation of many physiological activities, OFI represents a good source of them.
Inorganic Minerals
Inorganic minerals play an essential role in human physical health and physiological activities and keep many bodily activities within limits. The basic inorganic minerals and nutrients are needed on a small scale to sustain a good physical condition. These minerals are taken from numerous dietary nutritional sources. Among the microelements, Zn is good for oxidative and antibacterial action, Co for red blood cell formation, and Fe for genetics, cytokine secretion, proteins, and pharmacological actions [38]. Among the macroelements, Mg is good for nerve regulation and muscles, Ca for bone strength, and Na for fluid balance. A significant number of inorganic minerals are present in OFI cladodes, seeds, and fruit. Table 5 shows the composition of the mineral content of OFI. Calcium and potassium are the main minerals, amounting, in the total ash content in dry matter, to 316.5 mg/100 g and 108.8 mg/100 g, respectively [39,40], while in seeds, potassium is 304.51 mg/100 g and calcium is 480.93 mg/100 g. Copper is the mineral with the lowest amount in total dry matter: 0.01 mg/100 g in cladodes, 0.21 mg/100 g in fruit, 9.47 mg/100 g in the peel, and 2.1 mg/100 g in seeds. Chromium and nickel are present in seeds only [41,42]. The considered standard values should not be taken as precise figures, because the minerals and their concentrations differ with classes, cultivation spots, and the biological condition of the cladode tissue and seeds.
Polyphenolic Compounds
Polyphenolic compounds belong to a class of organic compounds commonly found in the plant kingdom. This class is divided into four major groups, which are stilbenes, flavonoids, lignans, and suberin-acids. These compounds control cell destruction initiated by oxidative stress and free radicals, enhance blood flow, minimize the threat of heart disease, decrease neurodegenerative ailments, and improve cognitive function. The common assembly of OFI polyphenols is represented in Figure 4.
Polyphenolic compounds belong to a class of organic compounds commonly found in the plant kingdom. This class is divided into four major groups, which are stilbenes flavonoids, lignans, and suberin-acids. These compounds control cell destruction initiated by oxidative stress and free radicals, enhance blood flow, minimize the threat of hear disease, decrease neurodegenerative ailments, and improve cognitive function. The com mon assembly of OFI polyphenols is represented in Figure 4. As the name suggests, polyphenolic compounds are characterized by the presence o various phenolic groups, which may be linked with low-or high-molecular-weigh groups of chemicals to form their structures [44]. These compounds are by-products o plant metabolism [45]. The growing interest in polyphenolic compounds is due to their antioxidant ability [46] and health benefits [47]. Polyphonic compounds are present in al components of the cactus plant, such as a variety of polyphenolic acids and flavonoids As the name suggests, polyphenolic compounds are characterized by the presence of various phenolic groups, which may be linked with low-or high-molecular-weight groups of chemicals to form their structures [44]. These compounds are by-products of plant metabolism [45]. The growing interest in polyphenolic compounds is due to their antioxidant ability [46] and health benefits [47]. Polyphonic compounds are present in all components of the cactus plant, such as a variety of polyphenolic acids and flavonoids ( Table 6). The flower contains gallic acid as the main compound of dry matter (4900 mg/100 g), along with 6-isorhamnetin-3-O-robinobioside (C 28 H 32 O 16 ) at a concentration of 4269.00 mg/100 g [48][49][50]. The remaining polyphenolic compounds have appeared on a small scale ( Table 4). The pulp of OFI fruit is a rich source of phenolic content, with 218.80 mg/100 g present in the pulp [51]. Additionally, the isorhamnetin glycoside is present in a significant amount, 50.60 mg/100 g, in comparison to other flavonoids [52][53][54][55]. The fruit seeds are also high in polyphenolic compounds, including tannins and feruloyl derivatives [56] (Table 4). Notably, the fruit crust contains a remarkable (45.7 g/100 g) phenolic content; among the various phenolic compounds present, many have been found to have bioactive properties, particularly the derivatives of flavonoid quercetin and kaempferol (both are beneficial for heart diseases, Alzheimer diseases, anti-cancer effects, arthritis, and diabetes and improve memory function); the contents are 0.22 mg/100 g and 4.32 mg/100 g, respectively [7,53,57]. The highest significant source of flavonoids and polyphenolic compounds is the cactus flower. In particular, some types of cacti that have cladodes yield a diverse range of phenolic compounds. The Phaeacantha cactus contains a rich quantity of rare compounds of flavonoids such as narcissin (137.10 mg/0.1 kg) and nicotiflorin (146.50 mg/0.1 kg) (see Table 6). In addition, it contains a significant amount of isoquercetin (39.70 mg/100 g) and ferulic acid (34.8 mg/100 g) [55,58,59]. The variations in the polyphenolic contents of cacti can be explained by the nature of the soil, climate, cladode age, and environment. It is valuable to take a diet with wealthy polyphenolic compounds to obtain a full array of advantages.
Betalains
The presence of betalains is restricted to a limited number of plant species; beets and cacti are the primary sources of this type of pigment. They are recognized for their cheerful and bright hues. Recent analyses have shown that OFI betalains have various interesting properties that are valuable for human health. Indeed, potentially, betalains are used as antimicrobials, they have cardiovascular benefits (they improve cholesterol levels and reduce blood pressure), they have anti-cancer benefits (especially against colon cancer cells), they act as antioxidants (they protect cells from damage), and they have been found to improve insulin sensitivity in the human body [63]. The core structure of betalain is made of nitrogen and betalamic acid, as reported in Figure 5. The derivatives of amino acids and imino compounds react with betalamic acid to make betaxanthin and betacyanin pigments, which are yellow and violet, respectively. Amazingly, a large number of betalains are present in the skin and pulp of OFI. The color variation of the fruit depends on the concentration of betaxanthins, betacyanins, and their derivatives. Furthermore, indicaxanthin, betanin, betanidin, neobetanin, and isobetanin are also produced by OFI fruit pulp [64,65]. Betanin and indicaxanthin are also identified in the skin [66]. The presence of Vulgaxanthin IV, (S)-serine-betaxanthin, Vulgaxanthin II, Vulgaxanthin I, Miraxanthin II, Portulacaxanthin III, Portulacaxanthin I, muscaaurin, (S)-valine-betaxanthin, (S)-isoleucine- and (S)-phenylalanine-betaxanthin, and gomphrenin I has also been reported [66][67][68][69].
At low concentrations, betacyanin-derived pigments, including betanin, phyllocactin, and betanidin, exhibited antioxidant activity, manifesting as yellow and red colors. The presence of catechol groups in betanidin structures, involved in free radical nitrogen scavenging activity, confers significant antioxidant capacity. The natural pigment betacyanins are useful for maintaining physiological ability under oxidative stress [70]. Additionally, indicaxanthin was presented as less active with respect to betanin in free radical scavenging reactions [71].
Sterols
Sterols are a type of molecule present in plant-based foods, such as nuts and oils, and are also found in animals and dairy products. They are effective for human health, as they help to generate and stabilize hormones in the body, boost the body's defense mechanisms, lower inflammation, and have anticancer abilities. From OFI fruits, seeds, and skin, the sterol extracted in the most prominent concentrations is β-sitosterol, which varies between 6.75 and 21.10 g/1000 g [72] (see Table 7). Further, a campesterol quantity of 1.66-8.76 g/1000 g is claimed to be found in the skin, seed, and pulp. Additionally, Glycine max (19.00 to 23.00 g/1000 g) and argan oil (4.00 g/1000 g) have similar campesterol compositions to the seed, pulp, and skin. In addition, Δ7-avenasterol, stigmasterol, and lanosterol exist on a small scale, while ergosterol exists in trace content in the peel. Sterols in flowers and cladodes are still unidentified. Schottenol and spinasterol also exist in argan oil [73].
Applications
Opuntia ficus indica is highly valued worldwide due to its wide range of applications. It is utilized as an effective tool in medicine, in cosmetic ingredients, in human nutrition as food, in livestock feed as forage, in wastewater treatment, in fuel production, and in sustainable and eco-friendly building materials. Its remarkable versatility and adaptability have made it invaluable in various industries and cultural practices around the world.
Uses of Opuntia in the Bioremediation of Wastewaters
Due to the increase in global population and pollution problems, providing safe drinking water is becoming an increasingly important issue. For this reason, many methods have been studied for removing contaminants from wastewater, including irradiation, biosorption, deionization, flotation, coagulation, microfiltration, membrane filtration, oxidation, ion exchange, ozonation, and electrochemical treatment. The elimination of organic and inorganic pollutants from wastewater by biosorption using OFI has advantages due to its low cost, its great ability for pollutant binding, its quick elimination of pollutants, and the easy accessibility of the material [75]. The seeds of OFI and Moringa oleifera were used as bio-coagulants in comparison to alum, removing 100% of water turbidity at pH 7.5. The OFI coagulation-flocculation active molecules, especially polysaccharides, act as interparticle binders in polluted water treatment [76]. Kinetic and thermodynamic studies and equilibrium models are utilized to determine the interactions between pollutants and biosorbents. OFI pads, both in their raw and chemically and physically treated forms, are useful for removing chemical oxygen demand (COD), dyes, pesticides, turbidity, negative ions, and metal species from wastewater [75][76][77]. Except for the skin, various parts of OFI show coagulation action with Moringa oleifera, which is useful for the elimination of turbidity in synthetic clay solutions, as reported by Miller [77]. The turbidity is reduced by 92-99 percent with the natural coagulant. The presence of galacturonic acid and other active molecules in OFI makes it efficient for treating wastewater turbidity. Galactose, arabinose, and rhamnose combined with galacturonic acid account for fifty percent of all coagulation processes, indicating that other components of OFI also contribute to the coagulation process [74,77]. The biocoagulation-flocculation process was utilized to remove heavy metals from contaminated natural samples collected from the Mukuvisi River in Zimbabwe under standard conditions with high OFI powder activity. The optimal conditions for the process were 35 °C, pH 5, and a 180 min contact time. Under standard conditions, even in the presence of other ions, Pb(II) is easily eliminated with OFI powder [78].
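The equilibrium models mentioned above are typically Langmuir- or Freundlich-type isotherms fitted to batch sorption data. As a hedged illustration of the Langmuir form q = q_max K_L C / (1 + K_L C), the sketch below uses invented capacity and affinity constants, not fitted OFI parameters.

```python
# Langmuir isotherm sketch for a biosorbent such as OFI powder.
# q_max (mg/g) and k_l (L/mg) are illustrative placeholders.

def langmuir(c_eq, q_max=40.0, k_l=0.05):
    """Sorbed amount q (mg/g) at equilibrium concentration c_eq (mg/L)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

for c in (1, 10, 50, 200):
    print(f"C = {c:4d} mg/L -> q = {langmuir(c):6.2f} mg/g")
```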
Usage as Forage
OFI is a highly drought-resistant species with a deep-rooted mechanism, well suited to arid conditions. The OFI fruits and pads have high levels of protein, water, soluble sugar, and nutrients, making them ideal for use as forage crops. Cactus cladodes are used as fodder for goats, sheep, and cows in different regions of Africa, Asia, America, and Europe. They provide a significant quantity of the recommended minerals, proteins, water, and nutrition for animals. Since their laxative effect is attributed to the high content of oxalic acid, a blend with straw is suggested. Moreover, the low tannin and phenolic contents of cactus stems aid in digestion, improve protein and fat intake, and enhance meat production [79,80] and milk yield [81]. Cactus cladodes combined with sugarcane bagasse are utilized as a significant dairy supplement in semi-arid regions. This alternative to traditional lipid components enhances milk production, improves the milk fat content, and shifts the composition of milk fat towards a more favorable fatty acid profile, promoting a healthier outcome [82]. In this context, dry matter digestibility is an important factor: livestock are more likely to acquire the necessary nutrients for growth when the digestibility of dry matter as forage increases during drought [83].
Fuel Fabrication
As an alternative combustible material, the production of biomethane (biogas), electrical energy, and heat energy from cactus pear using the anaerobic digestion method is possible. G.I.S. (Geographic Information System) Dufour 2.0 software shows that ca. 600,000 ha produced 612,115 × 10³ m³ of biogas, resulting in the production of 342,784 × 10³ m³ of bio-CH4, 67,038,000 kWh of electrical energy, and 70,390,000 kWh of heat energy. Further, the obtained digestate can also be treated as a bio-fertilizer for natural and regular farming [84]. For green energy, a bioelectrochemical cell is frequently employed in extremely water-saturated environments. A novel plug-in integrated porcelain-based fuel cell was assembled, resulting in a typical energy density of 103.60 mW/m³ in a device applying O. albicarpa, with Opuntia (10.6 mW/m³) > O. robusta (7.5 mW/m³) > O. joconostle (0.46 mW/m³), accompanied by a resistance of 10³ Ω. Electricity of 285.12 J was attained in 4 weeks from O. albicarpa [85].
Pharmacological Ability
In recent years, OFI has been considered a source of active pharmacological compounds. Findings have indicated that OFI contains betalains, which are molecules with great antioxidant abilities that protect from oxidative stress and lessen inflammation. OFI seeds contain glucuronoxylans, which act as natural hypoglycemic biological agents and have shown antidiabetic effects. The natural biological agents obtained from Opuntia through a natural and easy procedure also show incredible power against chronic diseases with no side effects [86]. Generally, the mixture of OFI fruit and cladode extract is used to control the hypoglycemic impact in pre-diabetic overweight humans. The extract, when administered before the dextrose tolerance examination, reduces blood glucose levels [87]. Several chemotherapeutic medicines have been derived from more than 3000 plants and artificial derivatives for cancer therapy, including carotenoids, terpenoids, and alkaloids as the most important components in cancer treatment. According to international reports, the fruit, stem, and cladodes of Opuntia species show noteworthy anticancer activity [88]. Giglio and co-workers stated in their work that the extract obtained from OFI decreases atherogenicity, improves metabolic parameters, and decreases atherogenic lipoproteins in subjects with metabolic risk factors [89].
Antioxidant Capacity
The total polyphenolic compounds obtained from the aqueous extract of dehydrated flowers and peels of OFI have shown activity in controlling radical scavenging against hydroxyl anions and superoxide. When the antioxidant activity of dehydrated flower and peel extracts was observed, the 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging action was lower in the peel (5.17 g GAE/kg DW, gallic acid equivalent per kg of dry weight) than in the flower (30.40 g GAE/kg DW). These results showed that the antioxidant action of OFI peels was about six times lower than that of the flower [90]. The antioxidant action of OFI seed oil was determined after extraction in three different solvents, n-hexane (C6H14), ethanol (C2H5OH), and ethyl acetate, exhibiting noteworthy scavenging action towards free radicals (DPPH). The oil extract in ethyl acetate has the maximum antioxidant action of 274 µmole TE/20 mg, followed by ethanol at 247.00 µmole TE/20 mg, with n-hexane having the lowest values. The antioxidant action of the oil is greatly influenced by the extraction solvent [91]. In addition, several typical models showed that betalain dyes have notable antioxidant action. The UV-Visible spectroscopy technique was used to analyze the antioxidant action of different samples simply and quickly by examining the color change of the DPPH radical scavenging assay from a dark purple hue to a bright yellow. The highest contents of polyphenolic compounds, found in the purple peel of Opuntia spp., exhibited the maximum scavenging activity compared to the yellow and other varieties. Furthermore, the antioxidant activity increased as the quantity of polyphenolic compounds increased [92]. According to Oliveira [93], the use of Opuntia polyphenolic compounds causes a decrease in rat liver damage. Moreover, the oven-dried peel is used in baked goods as an additive due to its antioxidant ability [94].
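The "about six times lower" comparison follows directly from the two DPPH values quoted above; a one-line check:

```python
flower, peel = 30.40, 5.17   # DPPH activity, g GAE per kg dry weight
print(f"flower/peel = {flower / peel:.1f}")  # ~5.9, i.e. about six-fold
```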
Anti-Inflammatory Capacity
OFI has a long history of traditional use for many illnesses. In the last decade, its potential anti-inflammatory effects have been the focus of several works of research. Inflammation is a natural reaction of the body to infection or injury, but many diseases (such as cancer, diabetes, and heart problems) develop due to chronic inflammation. Studies have indicated that OFI contains anti-inflammatory compounds that act on the pro-inflammatory cytokines, which play a significant role in the inflammatory reaction. The anti-inflammatory ability of Opuntia has been presented in a Moroccan study [95]. Anti-inflammatory activity has also been assessed in a recent experiment on chemical injury induced in Swiss rats. The seed oil used in this study was extracted from 20 g of powder by a Soxhlet apparatus and concentrated in a rotary vacuum evaporator at 40-60 °C under low pressure. A 200 mg/kg dose was given daily for 14 days. This study's results prove the effectiveness of OFI seed oil as an anti-inflammatory agent, and this plant is also used in traditional medicines as an anti-edematous agent [96]. Extracted seed oil from OFI and Punica granatum was used to experimentally evaluate the anti-inflammatory effect against carrageenan- and trauma-induced inflammation in female rats with paw edema. The study shows that the seed oils of both plants had significant anti-inflammatory activity relative to the drugs used in the reference models and had no effect on the general behavior of the tested female animals [93,95]. The strongest anti-inflammatory and antioxidant activities are presented by OFI betalain-pure extract rather than betalains obtained from other sources, as shown by in vitro, cell-based, and cell-free assays. Both betalains and OFI betalain-pure extract reduced the release of reactive oxygen species (R.O.S.) and important inflammatory markers (IL-6, IL-8, and NO), and they were more effective in reducing intestinal inflammation than the reference drugs dexamethasone and trolox [97].
Antibacterial Effect
Antibacterial activity typically relies on factors such as the nature of the biological sample and the extraction method employed. Notably, oven-dried fruits exhibited superior activity compared to other samples [98]. In the extract analysis, the findings indicated that OFI seed oil isolated with ethanol and acidic methanol has valuable antibacterial activity against many bacterial strains, with dissimilar inhibition zone distances. It covers a broad range of antibacterial effects against bacteria such as Klebsiella pneumoniae (20.00-34.00 mm), MRSA (35.1 mm), and L. monocytogenes (24 mm) [99]. The biological activity of Opuntia dillenii seed oils and the OFI antibacterial activity were examined on one Gram-positive and two Gram-negative bacteria, respectively. The statistics of this analysis demonstrated that the seed oil has no antibacterial action against Pseudomonas aeruginosa, while it exhibited antibacterial properties against Staphylococcus aureus and Escherichia coli [100]. The antimicrobial assays conducted in this study focused on specific bacterial strains, including Gram-negative bacteria such as Agrobacterium tumefaciens and Escherichia coli as well as Gram-positive bacteria like Micrococcus aureus, on 96-well microplates by the broth microdilution method for 24 h at 37 °C, inhibiting growth at minimum extract concentrations [101].
Role of Opuntia Regarding Bodyweight and Bone Health
Alloxan, a diabetogenic compound, is commonly used in studies of body weight loss. Through tests conducted on a sample of laboratory mice, it has been demonstrated that when alloxan is used in combination with OFI seed oil, more encouraging results are achieved as compared to those achieved with alloxan alone [102]. Seed oil at a concentration of 0.025% per kilogram in a high-fat diet for four consecutive weeks resulted in a significant weight gain compared to a basic diet [103]. Moreover, the hepatoprotective effects of Opuntia dillenii seed oil (SOD) on CCl4-provoked injury in rat livers have been studied. The rats were treated with SOD at 2 mL/kg regularly for two weeks; weight gain, the plasmatic glucose level, and liver injury decreased significantly, showing that SOD has a protective effect against CCl4-induced injury [104]. In male rats at the growing stage, it was found that the final growth stage of OFI cladodes played a role in bone development. Additionally, during the initial and final periods of cladode maturity, soluble fibers demonstrated good bone development properties, including the Ca content, micro-architecture, and fracture resistance, as evaluated in a study where rats were given insoluble cladode fibers [102].
Cosmetic Applications
As already mentioned above, OFI has a rich variety of useful compounds, including antioxidants, essential fatty acids, and polyphenols. It provides help in skin improvement, moisturizing, and wound care. The OFI (1%) extract in an oil-in-water nano-emulsion can increase the water content of the stratum corneum for five hours, demonstrating significant improvement over the vehicle formulation. The excellent cleansing and moistening ability of the current formulation, due to the presence of carbohydrates in OFI, has stability and a soothing effect with potential in cosmetics [105]. High-power microwave treatment was applied to obtain an O. humifusa extract (MA-OHE) with good viscosity, a high antioxidant capacity, and reduced consequences of particulate matter. MA-OHE is therefore a prospective component in cosmetics for preventing or avoiding diseases [106].
Application of Opuntia in Building Materials
The application of OFI in construction materials has gained significant attention because of its potential as a sustainable and eco-friendly alternative. OFI is easily cultivated in many areas, making it a readily available resource for construction material applications. Findings have shown that Opuntia-dried pads, spines, and fibrous materials are used in roofs, furniture, natural adhesives in traditional construction, joining adobe bricks, making household objects, and in many other building raw materials, including wall panels, bio-composites, and insulation. The cladodes and stems are utilized to make insulating material with a high thermal ability and fire resistance. Further, plant fibers and bio-degradable materials are combined to make bio-composites with good mechanical properties. An admixture of resin with bio-silica grains (0.5% vol.) was poured into a fixed, wax-coated rubber mold; then, a layer of short OFI fiber (30% by volume) was carefully added to obtain a composite material cured at 25 °C for 24 h and, subsequently, at 120 °C for 48 h. The results showed an enhancement in toughness, improved wear resistance, high tensile strength, and a significant increase in energy storage capacity (4.34 GPa and 34,371 fatigue life counts with a 0.71 loss factor) [107]. In another work, OFI cladode fiber (10% wt.) treated with alkali was applied in a high-density polyethylene (HDPE) matrix and demonstrated a rise in rigidity and the modulus of elasticity, while the addition of a maleic-anhydride-grafted C8H8-(C2H4-C4H8)-C8H8 block copolymer also enhanced the plasticity, ductility, and heat properties. The green OFI/HDPE composite is applied as an alternative material in construction products [108]. For mortar preparation, OFI mucilage (1:1 cladode/H2O ratio), Mexican standard cement (1:3 ratio with silica sand), and liquid (H2O + mucilage, 650 mL) are used with different concentrations of OFI mucilage (1.5 to 95% of the total volume). The results show that OFI mucilage decreases the porosity of cement-based mortar, enhances durability, and increases compressive strength and electrical resistivity. Additionally, it improves mechanical strength and increases lifespan [109,110]. OFI mucilage, cooked OFI mucilage and exudate OFI mucilage (4%, 8%, 15%, 30% w/m concentration), and OFI dehydrated powder (1%, 2%, 4% cement-by-sand mass replacement) were used to evaluate the durability of concrete at 30, 90, 180, and 400 days. The OFI dehydrated powder improved chloride transport and decreased the RCP (rapid chloride permeability) index by 10%. The exudate OFI mucilage improved the RCP index by 30% and the durability index by 20%, while for mixture control, cooked OFI mucilage showed excellent results. OFI derivatives work like biopolymers (clogging sponges) in the matrix pores of cement, stopping the transport of H2O and chloride in concrete [111]. The cactus extract solution contains polysaccharides with a gluey character, which were used in mortar to improve sustainability, water absorption resistance, and plasticity and to significantly reduce water absorption in the concrete. The 100% cactus solution was shown to be more effective in mortar and concrete than the 50% cactus solution.
The study demonstrates that natural biopolymers, such as sugarcane bagasse [112], straw, wood, stalks with lime, bamboo, and animal dung [113], were already utilized in ancient construction as reinforcing agents to enhance the strength and durability of structures such as adobe buildings, mud bricks, and wall plasters [114].
In Table 8, we have compiled a comprehensive summary of the diverse applications of the various parts of Opuntia ficus indica (cladodes, fruits, flowers, seeds, skins, and mucilage), covering, among others, anti-inflammatory and antiulcerogenic activities, interactions with drugs and intestinal homeostasis, construction materials, and support of vision and nerve function [9,14,28,35,41,55].
Conclusions
In conclusion, the accumulated evidence substantiates that the nopal cactus contains abundant macro- and micromolecules and a rich set of bioactive compounds, particularly polyunsaturated fatty acids, polysaccharides, phytosterols, vitamins, tocopherols, and polyphenolic compounds. Cladodes and fruit peel contain more minerals and bioactive species than the seeds, and these constituents are beneficial for human health and for the management of numerous diseases (cancer, diabetes, skin disorders, and cardiovascular diseases). Fruit peels have served as organic dyestuffs, natural antioxidants, therapeutic agents, and additives. In agriculture, OFI serves as forage and as an important crop in harsh environmental regions. The food industry has adopted OFI as a beneficial ingredient in functional diets; its health-promoting abilities and high nutritional content make it a favorable ingredient in food product development. Still, further analysis is needed to understand the plant's chemistry and the role of environmental conditions in shaping the chemical composition of OFI. The construction industry has taken up OFI as a sustainable and eco-friendly bio-based polymer for building materials such as roofs, adobe bricks, wall panels, insulation, and natural adhesives in traditional and modern construction, enhancing thermal and mechanical properties. Moreover, deeper examination of the diverse series of bioactive compounds obtained from cactus pear and of their uses in innovative industries and functional foods is still required.
The FRB-SGR Connection
The discovery that the Galactic SGR 1935$+$2154 emitted FRB 200428 simultaneously with a gamma-ray flare demonstrated the common source and association of these phenomena. If FRB radio emission is the result of coherent curvature radiation, the net charge of the radiating "bunches" or waves may be estimated. A statistical argument indicates that the radiating bunches must have a Lorentz factor $\gtrapprox 10$. The observed radiation frequencies indicate that their phase velocity (pattern speed) corresponds to Lorentz factors $\gtrapprox 100$. Coulomb repulsion implies that the electrons making up these bunches may have yet larger Lorentz factors, limited by their incoherent curvature radiation. These electrons also Compton scatter in the soft gamma-ray field of the SGR. In FRB 200428 the power radiated coherently at radio frequencies exceeded that of Compton scattering, but in more luminous SGR outbursts Compton scattering dominates, precluding the acceleration of energetic electrons. This explains the absence of a FRB associated with the giant 27 December 2004 outburst of SGR 1806$-$20. SGR with luminosity $\gtrsim 10^{42}$ ergs/s do not emit FRB, while those of lesser luminosity can do so.
INTRODUCTION
Soft Gamma Repeaters (SGR) have long been candidates for the sources of Fast Radio Bursts (FRB). SGR are believed to originate in young neutron stars with extremely high magnetic fields and to be powered by dissipation of their magnetostatic energy (Katz 1982; Thompson & Duncan 1992, 1995), offering an ample source of energy. The energies ∼ 10^40 ergs of even "cosmological" FRB are a tiny fraction of the ∼ 10^47 ergs of magnetostatic energy of a neutron star with a 10^15 gauss field, a value inferred from the spindown rates of some SGR, measured in their quiescent Anomalous X-ray Pulsar (AXP) phases.
SGR also have short characteristic time scales. The most intense parts of their outbursts typically last ∼ 0.1 s, but upper bounds on their rise times are < 1 ms. Although the temporal structure of SGR has not been measured on the scale of the fastest temporal structure of FRB (∼ 10 µs), the fact that both display extremely short time scales, shorter than any other astronomical time scale except those of pulsar pulses, suggests an association. This hypothesis has been advanced by many authors (Connor, Sievers & Pen 2016; Cordes & Wasserman 2016; Dai et al. 2016; Katz 2016; Zhang 2017; Wang et al. 2018; Wadiasingh & Timokhin 2019); see Katz (2018a) for a review.
The gamma-ray emission of the giant outburst of SGR 1806−20 is several orders of magnitude more energetic than that of FRB 200428, while the radio-to-gamma-ray fluence ratio of FRB 200428 is more than five orders of magnitude greater than that of SGR 1806−20. A number of theoretical interpretations have been suggested (Lu, Kumar & Zhang 2020; Lyutikov & Popov 2020; Margalit et al. 2020; Wang, Xu & Chen 2020).
A past argument (Katz 2020) against a neutron star origin of FRB was the absence of periodicity in repeating FRB, particularly in the well-studied FRB 121102 (Zhang et al. 2018). SGR 1935+2154 has a period of 3.245 s (Israel et al. 2016), which would be expected to modulate the observable activity of FRB 200428, whatever its mechanism of emission, unless its magnetic field be a dipole aligned with the spin axis.
THE HOST
The characteristic spindown age of SGR 1935+2154 was measured over about 120 days in 2014 to be 3600 y (Israel et al. 2016), several times shorter than the estimated age of SNR G57.2+0.8 (Kothes et al. 2018;Zhou et al. 2020) in which it is embedded. These values of the SNR age were inferred from estimates of its distance; the smaller distance estimates of Mereghetti et al. (2020) and Zhou et al. (2020) would lead to much lower values of the SNR's age and might resolve the disagreement.
Alternatively, the neutron star might now be in a period (at least several years long because the spindown was measured six years before the FRB) of unusual activity and unusually rapid spindown. Yet other alternatives include misidentification of the SGR with the SNR or the emergence of strong magnetic fields long after the neutron star's birth.
CURVATURE RADIATION
FRB emission by a strongly magnetized neutron star has been explained as coherent curvature radiation (Kumar, Lu & Bhattacharya 2017). Its spectrum is the product of the spectrum of radiation emitted by accelerated point charges and the spectrum of the spatial structure of the coherent charge density distribution (Katz 2018b). The spectrum emitted by an accelerated point charge is very smooth and broad, so the observed spectral structure must be attributed to the distribution of charge density. The frequency and spectrum of the emitted radiation are determined by the phase velocity (pattern speed) of the deviations from charge neutrality that radiate. This must be distinguished from the velocities of the individual charges that also radiate incoherently. Describing the phase velocity of the plasma wave that bunches the charge density by its corresponding Lorentz factor γ_w, its minimum value γ_min for observed curvature radiation of angular frequency ω is

γ_min ≈ (3ωR/c)^{1/3},   (1)

where R is the radius of curvature of the guiding magnetic field line.
We have no direct evidence that the observed radiation is near this peak of the spectral envelope of curvature radiation (the actual dynamic spectra of FRB are determined by the spatial structure of their charge distribution; Katz 2018b), but selection effects favor the detection of the brightest radiation and make that plausible. This is the same argument that justifies the assumption of particle-field equipartition in incoherent synchrotron sources: the most efficient radiators are the most detectable. Taking R ∼ 10^6 cm, the neutron star radius, because the available energy density decreases rapidly with increasing distance from the neutron star, leads to an estimate γ_min ≈ 100, only weakly dependent on the uncertain parameters.
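As a quick numerical check (a sketch in Python using only the quantities quoted above), evaluating Eq. (1) for 1400 MHz radiation and R ∼ 10^6 cm indeed gives γ_min ≈ 100:

```python
import math

c = 3.0e10          # speed of light (cm/s)
R = 1.0e6           # radius of curvature ~ neutron star radius (cm)
nu = 1.4e9          # L-band observing frequency (Hz)
omega = 2.0 * math.pi * nu   # angular frequency (rad/s)

# Eq. (1): minimum Lorentz factor of the radiating wave (pattern speed)
gamma_min = (3.0 * omega * R / c) ** (1.0 / 3.0)
print(f"gamma_min ~ {gamma_min:.0f}")   # ~96, i.e. gamma_min ~ 100
```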
The observed, comparatively narrow but varying, spectral bands of FRB radiation imply that there are comparatively few charge "bunches" radiating at any one time. If there were ω/∆ω ∼ 10 such bunches, where ∆ω is the width of an individual band, each would likely have a different peak frequency of radiation corresponding to a peak in the Fourier transform of the spatial distribution of charge. The total spectrum of radiation, a sum over many such peaks, would be smooth and broad, rather than being confined to a few narrower bands as observed.
Radiating Charges
We model this distribution of charge density as a single charge Q, the amplitude of the peak of that Fourier transform; an actual point charge Q would radiate a very broad and smooth spectrum, which is not seen. The frequency-integrated power received per unit solid angle is (Rybicki & Lightman 1979)

dP/dΩ = (Q^2 a_⊥^2 / 4πc^3) (1 − β cos θ)^{-4} [1 − sin^2 θ cos^2 φ / (γ_w^2 (1 − β cos θ)^2)],   (2)

where a_⊥ ≈ c^2/R is the magnitude of the acceleration perpendicular to the velocity (and magnetic field line), θ is the angle between the direction of observation and the velocity vector, and φ is an azimuthal angle. For θ ≲ 1/γ_w the factor (1 − β cos θ)^{-4} ≈ 16γ_w^8/(1 + γ_w^2 θ^2)^4 supplies a factor γ_w^8. The half-width at half power of the radiation pattern is θ_{1/2} ≈ 0.35/γ_w. For γ_w θ ≫ 1 the final factor varies ∝ (γ_w θ)^{-8}, cancelling the factor of γ_w^8 and leading to a result independent of γ_w but ∝ θ^{-8}. Taking γ_w θ ≲ 1 and Eq. 1 for γ_w = γ_min gives the bunch charge Q in Gaussian cgs units for L-band radiation. If γ_w ≫ γ_min then the spectral peak and most of the radiated power are at frequencies above the observed L-band. As a result of integrating up to ω_max ∼ cγ_w^3/(3R), the inferred spectrally integrated dP/dΩ is multiplied by (γ_w/γ_min)^4 and Eq. 2 is replaced by

dP/dΩ = (γ_w/γ_min)^4 (dP/dΩ)|_obs,   (3)

where (dP/dΩ)|_obs is the measured power density at the observational frequency, henceforth 1400 MHz, corresponding (Eq. 1) to γ_min.
For FRB 200428 (Bochenek et al. 2020; CHIME/FRB Collaboration 2020), taking a bandwidth of 400 MHz, a distance of 6 kpc and emission lasting 3 ms, and for a nominal "cosmological" FRB with a flux density of 1 Jy at z = 1,

(dP/dΩ)|_obs ∼ 1 × 10^36 erg/sterad-s (FRB 200428),
(dP/dΩ)|_obs ∼ 2 × 10^42 erg/sterad-s (z = 1),

and the corresponding net bunch charge Q follows from Eq. 2 (Eq. 7), where γ_2 ≡ γ_w/γ_min ≈ γ_w/100 ≥ 1. These are only the charges whose (collimated) radiation is directly observed. There may be additional charges (much larger in total absolute magnitude) radiating in other directions, either simultaneously with the observed FRB, or at other times, if the FRB is part of a wandering or intermittent beam (Katz 2017).
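The quoted power densities can be checked with a few lines of Python; the STARE2 fluence of ∼1.5 MJy ms for FRB 200428 and a luminosity distance of ∼6.8 Gpc at z = 1 are our assumed inputs, not values stated in this section:

```python
JY = 1.0e-23          # 1 Jansky in erg/cm^2/s/Hz
KPC = 3.086e21        # cm
BW = 400.0e6          # assumed bandwidth (Hz)

def dP_dOmega(flux_jy, distance_cm, bandwidth_hz=BW):
    """Isotropic-equivalent received power per steradian (erg/sterad-s)."""
    return flux_jy * JY * bandwidth_hz * distance_cm**2

# FRB 200428: ~1.5 MJy ms fluence over ~3 ms -> ~0.5 MJy mean flux density
print(f"FRB 200428: {dP_dOmega(0.5e6, 6.0e3 * KPC):.1e}")   # ~1e36
# Nominal cosmological FRB: 1 Jy at z = 1 (luminosity distance ~6.8 Gpc)
print(f"z = 1     : {dP_dOmega(1.0, 6.8e6 * KPC):.1e}")      # ~2e42
```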
Empirical Lower Limit on the Lorentz Factor
The upper limits set by Lin et al. (2020) on FRB emission during other soft gamma-ray flares of SGR 1935+2154, of 10^{-8} of FRB 200428, statistically constrain the Lorentz factor γ_w of the emitting charges (or their wave or pattern speed) if the emission is produced by acceleration perpendicular to the velocity. This bound applies to synchrotron radiation as well as to curvature radiation.
For a relativistic particle of Lorentz factor γ, emission at angles θ ≫ 1/γ is O((γθ)^{-8}) times that at θ ≲ 1/γ (Eq. 2). Brightness selection effects make it likely that FRB 200428 was observed at an angle θ ≲ θ_{1/2} ≈ 0.35/γ. If other observed soft gamma-ray bursts of SGR 1935+2154 produced radio bursts similar to FRB 200428 but beamed in directions statistically uniformly but randomly distributed, then of N such bursts the closest to the observer was likely at an angle θ ∼ √(4/N). Then

γ_w ≳ 0.35 √(N/4) (F_max/F_min)^{1/8} ≈ 10,   (8)

where N = 29 is the number of FRB outbursts observed by Lin et al. (2020) and F_max/F_min ∼ 10^8 is the ratio of the brightest FRB observed (FRB 200428) to the upper limits set on all the other SGR outbursts. The effective (half-width at half-power) beam width is θ_{1/2} ≈ 0.35/γ_w ≲ 2°. Continuing observation, increasing N, will either increase the lower bound of Eq. 8 or find a distribution of observed FRB strengths from which their angular radiation pattern may be inferred. This method cannot be applied to the numerous observed bursts of FRB 121102 because no corresponding gamma-ray activity is detected. Because of limits on the sensitivity of X- and gamma-ray detectors, it is likely to be feasible only for Galactic FRB.
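A sketch evaluating the bound of Eq. (8) as reconstructed above (the √(4/N) nearest-direction estimate is our reading of the geometry):

```python
import math

N = 29            # soft gamma-ray flares with FRB upper limits (Lin et al. 2020)
F_ratio = 1.0e8   # F_max/F_min: FRB 200428 vs. the upper limits on other flares

theta_near = math.sqrt(4.0 / N)                        # likely angle of nearest beam (rad)
gamma_w = (0.35 / theta_near) * F_ratio ** (1.0 / 8.0) # Eq. (8)
theta_half_deg = math.degrees(0.35 / gamma_w)          # half-power beam width

print(f"gamma_w >~ {gamma_w:.1f}")               # ~9.4, i.e. gamma_w >~ 10
print(f"theta_1/2 <~ {theta_half_deg:.1f} deg")  # ~2 degrees
```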
Particle Energies
The requirement that the electrostatic repulsion of the charge bunches not disrupt them sets a lower bound on the particle energy E_e and Lorentz factor γ_part; an electron must have sufficient kinetic energy to overcome repulsion by the net bunch charge Q. Coherent emission requires that the charge bunch extend over a length ≲ λ̄ = c/ω = λ/2π in its direction of motion and radiation, in order that fields from its leading and trailing edges, arriving at times separated by ≲ λ̄/c, add coherently. The minimum electron energy is

E_e ≈ Qe/ℓ,   (9)

where ℓ is approximately the largest dimension of the charge cloud. If the cloud is roughly spherical, ℓ ∼ λ̄ (about 3 cm for L-band radiation) (Eq. 10). If the charge density is spread over a width ℓ ∼ R/γ_min ∼ 10^4 cm transverse to its direction of motion and radiation (a very oblate shape), the maximum permitted by the condition that the fields add coherently, the required energy is correspondingly smaller (Eq. 11). The fact that FRB spectral structure typically consists of bands of width ∆ω ∼ 0.1ω indicates that the radiating waves have a minimum of ∼ 10 periodically spaced charge peaks. Individual regions of unbalanced charge may have charges an order of magnitude less than indicated by Eq. 7, with a corresponding reduction in E_e. These regions radiate coherently, so the effective Q is reduced in Eq. 9 but not in Eqs. 2 and 5. This and the uncertain factor γ_2 may make it possible to reconcile the values of Eq. 11 with the maximum electron energy ∼ 0.2 TeV, above which curvature radiation is energetic enough to make pairs in the large magnetic field.
Accelerating the Electrons
Can electrons be accelerated to the energies indicated in Eqs. 10 and 11? We calculate the required electric fields E by equating the power radiated by an electron in curvature radiation to the power delivered by the electric field, ≈ eEc. There are at least two possible criteria:

(i) The power of the incoherent curvature radiation emitted by electrons with the energies of Eq. 10 or 11 (the energies required for electrons to form bunches with the charges inferred from the observed radiation without being disrupted by electrostatic repulsion) must not exceed the power imparted by the accelerating electric field. Their Lorentz factors γ_part are generally much greater than γ_min. The power an electron radiates as incoherent curvature radiation is (Rybicki & Lightman 1979)

P_curve = (2/3)(e^2 c/R^2) γ_part^4.

For a "bunch" of charge Q the elementary charge e is replaced by Q, and γ_part is replaced by γ_w if the "bunch" is a wave or pattern on an underlying particle distribution with different Lorentz factors. Equating P_curve = eEc (Kumar, Lu & Bhattacharya 2017),

E ≳ (2/3)(e/R^2) γ_part^4,   (14)

where γ_part = Qe/(ℓ m_e c^2) (Eq. 9), is required. The resulting numerical values are shown in Table 1.

Table 1. Minimum values of the electric field (upper; multiply by 300 to convert to V/cm) required to balance incoherent curvature radiation losses of electrons at the energies required to overcome Coulomb repulsion by radiating bunches, and energy loss times (lower) if there is no accelerating field. There is an additional criterion, that the electrons can be accelerated to the required energy (Eq. 9) in a length R, which sets a more stringent minimum of E ≳ 5 esu/cm^2 (1500 V/cm) for FRB 200428 if ℓ = R/γ_w.

Faraday's Law limits the electric fields that can be created by induction to E ≲ B, and vacuum breakdown (Heisenberg & Euler 1936; Schwinger 1951; Stebbins & Yoo 2015) limits it to E ≲ 2 × 10^12 esu/cm^2. The curvature radiation model can be excluded as an explanation of "cosmological" FRB if ℓ ∼ λ̄, unless γ_2 ≳ 10; smaller values of γ_2 are consistent with larger but possible values of ℓ.
(ii) The electric field must replenish the coherently radiated energy after the charge bunch has formed. As shown in Sec. 4.6, the kinetic energies of the charge bunches are very small, and must be replenished throughout a burst. This criterion is obtained from Eq. 14, replacing e by Q, using γ_w = 100 and the power delivered by the electric field ≈ QEc:

E ≳ (2/3)(Q/R^2) γ_w^4.   (16)

The numerical results are shown in Table 2, and are independent of ℓ because the relevant Lorentz factor γ_w is determined by the observed frequency, not ℓ.

Table 2. Minimum values of the electric field (esu/cm^2; multiply by 300 to convert to V/cm) required to overcome coherent curvature radiation losses during the radiation of a charge bunch: 3 × 10^6 for FRB 200428 and 5 × 10^9 for the z = 1 burst, for all ℓ. Because the relevant Lorentz factor is that of the coherent wave, the results do not depend on the values of ℓ or of γ_2 that determine the minimum particle Lorentz factor.

It may not be necessary that work done by the electric field continuously replenish the kinetic energy of the coherently radiating charge bunches (Table 2). Energetic particles may be a sufficient energy reservoir, intermittently producing charge bunches by plasma instability, but if electrons
cannot be accelerated to sufficient energy to form the necessary charge bunches (as is the case for spherical bunches with ℓ ∼ λ̄ and smaller γ_2), then sufficient coherent curvature radiation cannot be emitted.
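The Table 2 entries can be reproduced from Eq. (16); the bunch charges below (Q ≈ 5 × 10^10 and ≈ 5 × 10^13 esu, i.e. N_e ∼ 10^20 and ∼ 10^23 electrons) are back-of-envelope values implied by Eq. (2) and the observed power densities, and are our assumptions:

```python
def E_min(Q_esu, R_cm=1.0e6, gamma_w=100.0):
    """Eq. (16): field needed to resupply coherent curvature losses (esu/cm^2)."""
    return (2.0 / 3.0) * (Q_esu / R_cm**2) * gamma_w**4

# Bunch charges implied by Eq. (2) and the observed dP/dOmega
# (N_e ~ 1e20 and ~1e23 electrons; e = 4.8e-10 esu)
Q_200428 = 1.0e20 * 4.8e-10    # ~5e10 esu
Q_cosmo  = 1.0e23 * 4.8e-10    # ~5e13 esu

print(f"FRB 200428: E >~ {E_min(Q_200428):.1e} esu/cm^2")  # ~3e6, cf. Table 2
print(f"z = 1     : E >~ {E_min(Q_cosmo):.1e} esu/cm^2")   # ~3e9, cf. 5e9 in Table 2
```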
Origin of Accelerating Electric Field
Currents in a neutron star magnetosphere flow along closed magnetic loops, anchored in the neutron star in analogy to Solar prominences, as in the "magnetar" model of SGR. A plasma instability may introduce a region of large "anomalous" resistivity, much greater than the microscopic plasma resistivity, interrupting the current flow and replacing the conductive region with an effective capacitor. Charge builds up on the boundaries of the newly insulating region. This is described by an LC circuit with inductance L ∼ 4πr/c^2 (in Gaussian units), where r is the radius of the current loop (that may be as large as the magnetospheric radius R) and capacitance C ∼ A/(4πa), where A is the cross-section of the current loop (that may be as large as ∼ R^2 for a distributed current) and a is the width of the gap that becomes insulating. The charge on the surfaces of the gap is

Q_gap(t) = Q_0 sin(t/√(LC)),

where t is the time since the insulating gap opened, J_0 was the interrupted current, and Q_0 = √(LC) J_0. For a distributed current and a wide gap, A ∼ r^2, a ∼ r, and √(LC) ∼ r/c. Then J_0 ∼ ∆BRc/4π, Q_0 ∼ ∆Br^2/4π, the voltage drop V ∼ Q_0/C ∼ ∆Br, and the electric field E ∼ V/a ∼ V/r ∼ ∆B, where ∆B is the change in B when the current loop is interrupted. The fields indicated in the Tables for ℓ ∼ R/γ_w can be provided by plausible values of ∆B. The charges Q_gap(t) are much larger than the radiating charges inferred from Eq. 7, but are not moving relativistically and do not radiate significantly.
Radiation will be emitted by the changing magnetic field. On dimensional grounds, the expression for the power radiated in the dipole approximation,

P ∼ µ^2 ω^4/(3c^3),

is roughly valid, where the dipole moment µ ∼ ∆Br^3 varies on a characteristic time scale ∼ 1/ω ∼ r/c and r is the radius or characteristic size of the loop. For the maximum plausible ∆B ∼ 10^15 gauss and the observed FRB L-band frequency, P ∼ 10^41 ergs/s and would be unbeamed, in contradiction to Sec. 4.2 for FRB 200428. Such unbeamed power would be insufficient to power "cosmological" FRB. Narrow beaming would require highly relativistic motion.
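Plugging in numbers (a sketch; we take the loop size set by the radiation time scale, r ∼ c/ω, as the text implies):

```python
import math

c = 3.0e10                      # cm/s
omega = 2.0 * math.pi * 1.4e9   # L-band angular frequency (rad/s)
dB = 1.0e15                     # maximum plausible field change (gauss)

r = c / omega                   # loop size varying on time scale 1/omega (~3.4 cm)
mu = dB * r**3                  # dipole moment ~ dB * r^3
P = mu**2 * omega**4 / (3.0 * c**3)   # dipole radiation power (erg/s)
print(f"P ~ {P:.1e} erg/s")     # ~1e41, as quoted in the text
```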
The achievable value of E may be limited by breakdown creation of electron-positron pairs, either the Schwinger vacuum breakdown that occurs for E ≳ 2 × 10^12 esu/cm^2, or the curvature radiation-driven pair production cascade breakdown believed to occur in pulsars. Even if breakdown occurs, it may not necessarily "short out" the electric field and accumulated charges, because the region of breakdown may still be resistive as a result of plasma instability. If the current loop is wide (ℓ ∼ R/γ_w), E may be large enough to accelerate the electrons to the energies necessary to overcome Coulomb repulsion. Each portion of the area A accumulates charge, limited independently by breakdown in the capacitive gap, so that it may be possible to produce the necessary thin sheet charge distribution.
Faraday's law implies ∆B ≤ B (defining B as its maximum magnitude). Causality requires ∆t ≥ ∆x/c, so that

E ∼ (∆x/(c∆t)) ∆B ≤ ∆B ≤ B.

This is a general limit on the electric fields that can be produced in a relaxing current-carrying magnetosphere. Changing the magnetic field within a loop of area r^2 by ∆B in a time τ produces an inductive electromotive force (EMF)

V_inductive ∼ r^2 ∆B/(cτ).   (22)

In FRB 200428 the EMF required to accelerate particles to the minimum energy for ℓ = λ̄ and γ_2 = 1 (Eq. 10) can be provided by ∆B ∼ 10^8 gauss if the loop encompasses much of the magnetosphere (R ∼ 10^6 cm) and if τ ∼ 0.1 s, as observed for SGR. If ℓ = R/γ_w and γ_2 = 1 (Eq. 11), ∆B ∼ 2 × 10^4 gauss would be sufficient. For the nominal 1 Jy-ms FRB at z = 1, ℓ ∼ λ̄ and γ_2 = 1 would require ∆B ∼ 10^11 gauss, but ℓ ∼ R/γ_w and γ_2 = 1 would only require ∆B ∼ 3 × 10^7 gauss. Without a detailed understanding of the magnetohydrodynamics and plasma physics of SGR activity we cannot decide if these values are plausible, but they violate no physical law.
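A sketch checking the ∆B ∼ 10^8 gauss estimate for FRB 200428, using the bunch charge Q ≈ 5 × 10^10 esu assumed earlier (the required voltage V = E_e/e = Q/ℓ follows from Eq. 9):

```python
c = 3.0e10        # cm/s
Q = 5.0e10        # bunch charge (esu), implied by Eq. (2) for FRB 200428
lam_bar = 3.0     # l ~ lambda-bar for a spherical bunch (cm)
R = 1.0e6         # loop size ~ magnetospheric radius (cm)
tau = 0.1         # SGR outburst time scale (s)

V_required = Q / lam_bar              # E_e/e = Q/l (statvolts)
dB = V_required * c * tau / R**2      # invert Eq. (22): V ~ r^2 dB / (c tau)
print(f"V ~ {V_required:.1e} statvolt, dB ~ {dB:.1e} gauss")  # dB ~ 5e7 ~ 1e8
```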
Energetics
The magnetic energy dissipated is obtained using Eq. 22 and r ∼ R to obtain the minimum ∆B required to accelerate electrons to the energy E_e = V_inductive e:

E ∼ B∆B R^3/(4π),

where the numerical values assume ℓ ∼ λ̄ (larger ℓ would lead to lesser values), γ_2 = 1 and the observed width of FRB outbursts τ ∼ 0.1 s; for FRB 200428 B = 2 × 10^14 gauss (Israel et al. 2016) and for the burst at z = 1 B = 10^15 gauss have been assumed. The value of E for FRB 200428 is consistent with the observed X-ray fluences of SGR 1935+2154. For "cosmological" FRB the value of E is consistent with giant outbursts of Galactic SGR, but the argument of Sec. 4.8 indicates that only less powerful SGR outbursts may produce FRB. Eq. 10 (ℓ = λ̄) would permit ∼ 10^6 bursts in the lifetime of SGR 1935+2154 and ∼ 10^4 repetitions for the nominal "cosmological" FRB if B ∼ 10^15 gauss. The number of repetitions could be several thousand times greater if ℓ = R/γ_w (Eq. 11). These values are obtained from the required inductive EMF, not directly from the change in magnetostatic energy. If the magnetic field is regenerated from internal motions, there could be yet more repetitions. Weaker bursts, such as observed from FRB 121102, require smaller Q, E_e, V_inductive, and ∆B, and could repeat many more times during the active lifetime of their source.
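The burst-count bookkeeping can be checked in the same way (the per-burst dissipated energy E ∼ B∆B R^3/4π used here is our reconstruction of the garbled expression, consistent with the quoted ∼10^6 bursts):

```python
import math

R = 1.0e6       # cm
B = 2.0e14      # gauss (SGR 1935+2154; Israel et al. 2016)
dB = 5.0e7      # gauss, from the inductive-EMF estimate above

E_burst = B * dB * R**3 / (4.0 * math.pi)   # energy dissipated per burst (erg)
E_total = B**2 * R**3 / (8.0 * math.pi)     # total magnetostatic energy (erg)
print(f"E_burst ~ {E_burst:.1e} erg")
print(f"N_bursts ~ {E_total / E_burst:.1e}")   # ~1e6, matching the text
```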
The electric fields within the charge bunches are E ∼ Q/ℓ^2. If ℓ ∼ λ̄ and γ_2 ∼ 1, the field estimated for the cosmological FRB exceeds the Schwinger pair-production vacuum breakdown field (Heisenberg & Euler 1936; Schwinger 1951; Stebbins & Yoo 2015) several-fold. This paradox is resolved if the charge distribution is oblate, with ℓ ≫ λ̄, or if γ_2 ≫ 1. It might seem unlikely that charge would be concentrated into thin sheets perpendicular to its direction of motion and the magnetic field lines, but there is a strong selection effect favoring the observation of such emitting geometry, because for it the fields add coherently, making the radiation stronger and more observable.
The kinetic energies of the motion of the net charges Q (Eqs. 7, 9) are very small, ∼ 8 × 10^20 ergs for FRB 200428 and ∼ 2 × 10^27 ergs for the nominal FRB at z = 1, even if γ_2 = 1. Most of the energy driving the FRB, ultimately derived from magnetostatic energy, must be present in the quasi-neutral part of the particle distribution, and the kinetic energies of the net charges are continually replenished. Although the net charges are large, they imply very small fractional deviations from neutrality.
Curvature Radiation vs. Compton Scattering
The relativistic electrons emitting curvature radiation are moving in the soft gamma-ray radiation field of the SGR. It is necessary to compare the power they emit in curvature radiation to their energy loss by Compton scattering. If the latter were to dominate, it would be difficult to accelerate a population of electrons to the energies necessary to emit a FRB.
The power the electrons lose to Compton scattering is

P_Compt ≈ N_e n_γ σ_KN c E_e,   (26)

where

n_γ ∼ L_SGR/(4πR^2 hν_γ c)   (27)

is the number density of soft gamma-rays, N_e = Q/e is the number of electrons in the charge bunch, σ_KN ≈ πr_e^2 ln(2hν_γ E_e/m_e^2 c^4)/(hν_γ E_e/m_e^2 c^4) is the Klein-Nishina cross-section (r_e = e^2/m_e c^2 is the classical electron radius), and E_e is the electron energy. In this regime of highly relativistic electrons scattering soft gamma-rays, nearly the entire electron kinetic energy is lost to the photon in a single scattering.
This value is uncertain, but it is consistent with the assumption that Compton scattering losses do not exceed the radiated power, and therefore with the validity of Eq. 16 as a condition on the electric field. The use in Eq. 26 of the lower bound Eq. 9 on E_e is balanced, except for the slowly varying logarithm, by the energy dependence of σ_KN. Despite the intense soft gamma-ray radiation field, the quadratic dependence of the coherent P_curve on Q makes it possible for it to exceed P_Compt, which is only proportional to one power of Q = N_e e. An additional factor of Q enters P_Compt through the minimum electron energy (Eq. 9), but this is nearly cancelled by the inverse energy dependence of the Klein-Nishina cross-section. The number of coherently radiating charges in the bunch or wave, N_e = Q/e, is ∼ 10^20 for FRB 200428 and ∼ 10^23 for the cosmological FRB. These enormous values and the quadratic dependence on Q (or N_e) that makes the FRB bright enough to observe also make Compton losses comparatively unimportant.
Why Not SGR 1806−20
The strongest argument against the SGR-FRB hypothesis was empirical: During an unrelated observation, the giant 27 December 2004 outburst of SGR 1806−20 was in a radio telescope sidelobe, but no signal was detected from it (Tendulkar, Kaspi & Patel 2016). Although the sidelobe had sensitivity about 70 dB less than that of the main beam, the fact that the SGR was ∼ 3 × 10^5 times closer than a typical "cosmological" FRB, as well as the extraordinary brightness of the SGR, led to an upper limit on the ratio of the radio to soft gamma-ray fluences of < 10^7 Jy-ms/(erg/cm^2). This is more than five orders of magnitude less than the observed fluence ratio > 2 × 10^12 Jy-ms/(erg/cm^2) of FRB 200428/SGR 1935+2154.
There are at least two possible explanations.
(i) Eq. 28. The soft gamma-ray luminosity of SGR 1806−20 during its giant outburst (Palmer et al. 2005) was more than seven orders of magnitude greater than that of SGR 1935+2154 during FRB 200428; this was only partially offset by a value of hν_γ less than two orders of magnitude greater, leading to a ratio P_curve/P_Compt ∼ 10^{-3} for a burst like FRB 200428. Emission of GHz curvature radiation by SGR 1806−20 was suppressed by Compton scattering energy losses of the required relativistic electrons.
At intensities greater than ∼ 10^29 ergs/cm^2-s (luminosities ≳ 10^42 ergs/s for an isotropically emitting neutron star), radiation and energetic particles thermalize to black-body equilibrium by processes that turn two incoming particles into three outgoing particles: radiative Compton scattering, three-photon pair annihilation (Katz 1996) and photon splitting in a strong magnetic field. The result is an opaque equilibrium photon-pair plasma in which relativistic particles suffer runaway Compton and Coulomb scattering energy loss and radio radiation cannot propagate.
(ii) The observations of FRB200428 indicate that the observable FRB/SGR ratio may vary from burst to burst by at least eight orders of magnitude, likely because of beaming (Sec. 4.2).
DISCUSSION
The discovery and identification of FRB 200428 resolved the first question about FRB: What astronomical objects produce them? It took 13 years from their discovery (and 7 years from the time their reality became generally accepted) to answer this question because of the difficulty of accurate localization. The similar difficulty of localizing gamma-ray bursts meant that their identification took 25 years, as did the recognition of extra-Galactic radio sources as the products of Active Galactic Nuclei (AGN).
Identification of FRB with rotating neutron stars predicts that FRB activity should be modulated, at some level, at the rotation rate. Periodicity has not been observed in FRB 121102, the only FRB for which abundant data exist (Zhang et al. 2018); see the discussion in Katz (2019). If "cosmological" and Galactic FRB are qualitatively similar phenomena, periodicity should be detectable in any FRB that repeats frequently. Periodicity will be easier to detect in FRB identified with Galactic SGR because their periods would be known a priori from gamma-ray observations of the SGR/AXP. The magnetospheric densities implied by Eq. 7 and the constraint on the dimensions of a radiating charge bunch ℓ < R/γ_w exceed the critical plasma density at observed FRB frequencies for the parameters of cosmological FRB. However, this limit on propagation is inapplicable. The plasma is strongly magnetized (so strongly that the electrons' motion transverse to the field, the direction of the electric vector of a transverse wave propagating along the field, is quantized). In addition, the electrons' longitudinal motion is highly relativistic (Eq. 9), increasing their effective mass by the factor γ_part. Finally, the radiating charge bunches may be confined to a shell thinner than the skin depth, like the currents in a metallic antenna radiating radio waves. Propagation and escape of the radiation are beyond the scope of this paper, but are issues that must be faced by any model in which FRB are emitted from a compact region, as required by their narrow temporal structure.
Identification of FRB with SGR does not itself explain their mechanism. Their high brightness temperatures require coherent emission, but there is no understanding of their charge bunching. Even in pulsars, discovered 53 years ago, the mechanism of charge bunching remains uncertain. Acceleration of relativistic particles is nearly ubiquitous in astrophysics (Katz 1991), and is also required to explain FRB, but is not understood from first principles; if we had not inferred it from observations in AGN, Solar activity, supernova remnants, pulsars, FRB and many other phenomena, we would not have predicted it.
The presence of an intense thermal (X-ray and soft gamma-ray) radiation field interferes with the acceleration and propagation of relativistic electrons. At sufficiently high radiation energy densities, radiative and particle energy thermalizes to a dense equilibrium pair-photon plasma. This predicts that SGR with luminosities ≳ 10^42 ergs/s do not make FRB comparable to FRB 200428.
The issues discussed here of the radiating charges Q and their implied electric fields extend beyond curvature radiation models, and apply however the charges are bunched, whether by plasma instability, maser amplification, or another mechanism. In any model, radiation can only be produced by accelerated charges or changing currents. It is difficult to produce beaming from changing currents because conservation of charge and the assumption of quasineutrality imply that current is constant along bundles of field lines; a relativistically moving current front cannot be produced without creating net charge density. The required Q are determined by the very general Eq. 7 and the particle Lorentz factors by Eq. 9 that are not specific to curvature radiation. This does not exclude sources outside an inner neutron star magnetosphere, but Eq. 2 applies and smaller Q imply larger γw, narrower beaming and, if γw is the Lorentz factor of an actual particle bunch, higher particle energy.
Gauging and Decoupling in 3d $\mathcal{N}=2$ dualities
One interesting feature of 3d $\mathcal{N}=2$ theories is that gauge-invariant operators can decouple by strong-coupling effects, leading to emergent flavor symmetries in the IR. The details of such decoupling, however, depends very delicately on the gauge group and matter content of the theory. We here systematically study the IR behavior of 3d $\mathcal{N}=2$ SQCD with $N_f$ flavors, for gauge groups $\mathrm{SU}(N_c), \mathrm{USp}(2N_c)$ and $\mathrm{SO}(N_c)$. We apply a combination of analytical and numerical methods, both to small values of $N_c, N_f$ and also to the Veneziano limit, where $N_c$ and $N_f$ are taken to be large with their ratio $N_f/N_c$ fixed. We highlight the role of the monopole operators and the interplay with Aharony-type dualities. We also discuss the effect of gauging continuous as well as discrete flavor symmetries, and the implications of our analysis to the classification of $1/4$--BPS co-dimension 2 defects of 6d $(2,0)$ theories.
Introduction and Summary
In this paper we study three-dimensional N = 2 supersymmetric gauge theories [1,2]. Since the gauge coupling is dimensionful in three spacetime dimensions, we expect that generic three-dimensional gauge theories become strongly-coupled in the deep IR (infrared), where the non-perturbative effects play prominent roles. For example, in three-dimensional N = 2 pure SU(N c ) super Yang-Mills theory non-perturbative instanton effects generate a superpotential term, which lifts the supersymmetric vacuum [3].
One interesting feature of the IR behavior of 3d N = 2 supersymmetric gauge theories is that there are often indications that strong-coupling effects make some operators free, and decouple those from the rest of the system, in the IR. In this case we need to subtract the corresponding degrees of freedom to discuss truly strongly-coupled interacting dynamics.
This also means that there are emergent U(1) flavor symmetries in the IR, which act only on the decoupled fields.
That some operators could decouple in the IR is known also in four dimensions, e.g. from the analysis of the 4d N = 1 adjoint QCD [4,5]. The story is, however, even richer in the three-dimensional counterparts discussed in this paper. This is because in three dimensions we have monopole operators (constructed out of dual photons), which are new sources for possible IR decouplings. Indeed, we will see below strong evidence that such decoupling of monopole operators does happen for 3d N = 2 SU(N_c) gauge theory with N_f flavors, for infinitely many values of N_c and N_f (see [6] for a similar analysis for U(N_c) gauge groups, which provided an inspiration for this paper). This is in contrast with their 4d N = 1 counterparts, which show no sign of such decouplings.
One useful signal of the IR decoupling of operators is the unitarity bound [7][8][9][10] (see [11,12] for recent discussions). In 3d N = 4 theories there is a simple formula for the scaling dimensions of the monopole operators [13], which leads to the good/ugly/bad classification of 3d N = 4 theories [14]. More concretely, the absence of IR decoupling for U(N_c) 3d N = 4 SQCD (Supersymmetric QCD) with N_f flavors requires the simple inequality N_f > 2N_c.
One natural question is then what happens in the case of reduced supersymmetry, i.e. 3d N = 2 supersymmetry. In this paper we study 3d N = 2 SQCD with N_f flavors.¹ In this case, the conformal dimension of a chiral primary operator (such as the monopole operator) is determined by its R-charge. The complication is that the UV (ultraviolet) U(1) R-symmetry could mix in the IR with flavor U(1) symmetries, hence the U(1) R-symmetry in the IR superconformal algebra is in general different from the UV R-symmetry.

¹ In this paper we only discuss parity-preserving theories, and in particular we do not discuss theories with Chern-Simons terms. The Chern-Simons terms render the monopole operator gauge variant, which significantly modifies the discussion below, as already commented in [6].
The correct IR R-symmetry can be determined with the help of F-maximization [15], i.e. the maximization of the supersymmetric partition function on the round three-sphere S^3 [15][16][17]. However, the S^3 partition function is a complicated integral expression, whose evaluation often requires numerical analysis, and it turns out that whether or not the IR decoupling happens depends very sensitively on the choice of gauge group and matter content of the theory. For this reason we consider SQCD with various different gauge groups, SU(N_c), USp(2N_c) and SO(N_c), with N_f flavors, for different values of N_c and N_f. Our analysis simplifies somewhat in the Veneziano limit

N_c, N_f → ∞  with  x ≡ N_f/N_c  fixed.   (1)

In all the cases we find that there is a critical value x_c > 1, below which some of the monopole operators decouple. Once some operators decouple we can re-do the F-maximization following the prescription of [18][19][20].
When we carry out the F-maximization, we run into another subtlety: the S^3 partition function does not always converge. This causes a problem, since we need the S^3 partition function to determine the correct IR scaling dimension.
What saves the day is that 3d N = 2 SQCD has a non-perturbative magnetic dual, found by Aharony [21] (see also [22][23][24]).² Whenever the electric theory has a divergent partition function, the magnetic partition function is convergent, and its S^3 partition function can be used for F-maximization.
We point out that a gauging of flavor symmetries (either continuous or discrete) can drastically modify the IR behavior of the theory. We discuss this phenomenon for the following three examples (see later sections for precise notations): • A gauging of the U(1)_B symmetry of SU(N_c) SQCD, to obtain U(N_c) SQCD. For the U(N_c) gauge group, the critical value x_c in the Veneziano limit is also the value at which we switch from the electric to the magnetic description (this is also the case for the USp(2N_c) and SO(N_c) gauge groups). This is in contrast with the case of SU(N_c) SQCD, where the magnetic description turns out to be valid above x_c as well as below.
• Another highlight of our paper is a formula for the scaling dimension of the quark, applicable to any gauge group in the large N_f limit, up to order 1/N_f^2 (Eq. (66)). The organization of the rest of this paper is as follows. In sections 2, 3, 4 we discuss SU(N_c), USp(2N_c) and SO(N_c) SQCD in turn. In section 5 we briefly comment on group theory aspects of the scaling dimensions. In section 6 we discuss quiver gauge theories, and in section 7 we comment on implications of our results for the theories arising from compactifications of M5-branes, and in particular their 1/4-BPS co-dimension 4 defects. The appendices contain several technicalities and review materials.
SU(N c ) SQCD
Let us begin with SU(N_c) SQCD with N_f flavors, and its magnetic dual [22]. Note that in three dimensions even a U(1) gauge group becomes strongly coupled in the IR, and we will indeed find crucial differences from the case of U(N_c) SQCD analyzed in [6].
Dual Pairs
Electric Theory The electric theory has a gauge group (3d N = 2 vector multiplet) SU(N_c), as well as quarks Q in the fundamental representation and anti-quarks Q̃ in the anti-fundamental representation (these fields are 3d N = 2 chiral multiplets). We do not have a superpotential term: W_electric = 0. The theory has N_c − 1 independent monopole operators corresponding to the Cartan of the gauge group; however, most of them are lifted by the instanton-generated superpotential, with the exception of a single unlifted monopole operator, typically denoted by Y in the literature [1] (cf. Appendix A). This should be contrasted with the case of a U(N_c) gauge group, where we have two unlifted monopole operators V± [1,2].
The theory has a flavor symmetry, including Abelian factors U(1)_A and U(1)_B, under which the fields Q, Q̃, Y transform with definite charges. Here the U(1) R-charge is denoted U(1)_{R−UV}, to emphasize that it is one of the many possible U(1) R-symmetries of the UV theory and is not the IR U(1) R-symmetry inside the superconformal algebra. We have listed the U(1)_{R−UV}-charge of the monopole operator Y [13]; we will comment more on this later when we discuss the S^3 partition function.
Note also that the theory has no topological U(1) J symmetry: the topological U(1) J symmetry is generated by the current J = * TrF , however this vanishes since the gauge field is traceless.
Magnetic Theory Let us first assume that N f > N c . The electric theory then has a magnetic dual [22] (see also [24]).
The magnetic theory has a U(N_f − N_c) gauge group, and not SU(N_f − N_c) as one might naively expect. For notational simplicity we define Ñ_c by Ñ_c ≡ N_f − N_c. The theory has dual quarks q and anti-quarks q̃, and also b and b̃. The meson M = QQ̃, as well as the monopole operator Y of the electric theory, are now fundamental fields in the magnetic theory. The magnetic theory also has two unlifted monopole operators X̃±.
The theory also has a superpotential coupling these fields. Note that this superpotential breaks the topological U(1)_J symmetry, which rotates the monopole operators X̃±. The magnetic theory has the same flavor symmetry as the electric theory, under which the fields carry definite charges. Since U(1)_diag is an Abelian symmetry, there is no canonical normalization of its charges; the charges above, which differ from those in [22] by a factor of Ñ_c, are chosen in such a way that they match the standard normalization when embedded into the U(Ñ_c) gauge group.
The case of N_f = N_c requires a separate analysis. In this case the magnetic theory does not have any gauge fields, and is described by chiral multiplets Y, M, B, B̃ with the superpotential W = Y(BB̃ − det(M)).
The fields Y and M are the monopole operator and the meson of the electric theory, as before. The fields B and B̃ are the baryons, which for N_f > N_c are gauge-invariant and related to the b, b̃ of the above-mentioned magnetic theory. The charge assignment of the fields M, Y, B, B̃ follows from that of the electric theory. The case of N_f < N_c can be derived from the N_f = N_c theory by mass deformation.
For N_f = N_c − 1 the Coulomb branch smoothly connects with the Higgs branch, giving rise to the constraint Y det(M) = 1 [1]. When N_f < N_c − 1, the instanton-generated superpotential completely lifts the vacuum moduli space [3]. For this reason we will concentrate on the case N_f ≥ N_c in the rest of this section.
IR Analysis
As already mentioned in the Introduction, the R-symmetry mentioned above is only one of many possible R-symmetries in the UV, and the correct IR R-symmetry inside the superconformal algebra is a mixture of the UV R-symmetry with global symmetries. The correct combination is determined by the procedure of F-maximization [15].
Since non-Abelian flavor symmetries do not mix with the U(1) R-symmetry, we can parametrize the R-symmetry by two mixing parameters a and b for the Abelian flavor symmetries (Eq. (9)). As we will see momentarily, F-maximization gives b = 0, and b does not play a crucial role below.
Unitarity Bound The dimensions of the operators Y, M are given in Eq. (10), and the unitarity bound ∆_{Y,M} ≥ 1/2 translates into Eq. (11), where here and in the following the symbol ≈ denotes the Veneziano limit (1).
Note that there are other gauge-singlet operators, such as qq̃ and bb̃ in the magnetic theory, whose dimensions could become smaller than the threshold value 1/2. However, these are not chiral primary operators, and hence the constraints from the unitarity bound do not necessarily apply. For example, the operator qq̃ is trivial in the chiral ring thanks to the F-term relation for the field M, and hence is not a chiral primary. The same applies to the operator bb̃.
F-maximization The S^3 partition function [15][16][17] of the electric/magnetic theories can be written down straightforwardly following the matter content given above (see Appendix B). For the electric theory, the integral (12) is over the Cartan of the SU(N_c) gauge group, and the integrand represents the one-loop determinants for N = 2 vector and chiral multiplets. Here and in the following we use the shorthand notation that ± inside an expression means the sum of the corresponding two expressions. For the magnetic theory, let us first consider the case N_f > N_c. In the partition function, σ_i (σ) parametrizes the Cartan of SU(Ñ_c) (U(1)_diag). The σ-dependence inside the integrand can be eliminated by the shift σ_i → σ_i − (1/N_c)σ, after which the delta function constraint becomes σ = Σ_{i=1}^{Ñ_c} σ_i, i.e. σ is the diagonal part of the U(Ñ_c) gauge group. After a trivial delta-function integral over σ we obtain the expression (15). The case of N_f = N_c is much simpler thanks to the absence of a gauge group in the magnetic theory; the resulting expression can also be obtained by formally setting N_f = N_c in (15).

Figure 1: The unitarity bound and the convergence bound for 3d N = 2 SU(N_c) SQCD with N_f flavors with N_f > N_c, plotted in terms of the mixing parameter a (see (9), with b = 0): the electric description is valid only for small a, while the magnetic description is valid everywhere. The correct IR value of a should be determined from F-maximization.
Convergence We have written down expressions for the S^3 partition function; however, these are in general only formal integral expressions and are not actually convergent.
We can analyze the convergence condition of the partition function by sending one of the σ_i's to infinity (for the electric theory, we need to send one to infinity and another to minus infinity, for consistency with the traceless constraint Σ_i σ_i = 0), while keeping the other σ_i's finite. We can evaluate the leading behavior of the integrand from the asymptotic expansions (see (85) in Appendix B), and we find the convergence bound (17): the electric partition function converges only for sufficiently small a, while the magnetic partition function converges for any value of a.
Note that for numerical computations the practical convergence bound is slightly stronger than this, since as we approach the convergence bound the computational time becomes increasingly large.
It turns out that these conditions are the same as the condition that the dimensions of the monopole operators (Y for the electric theory, X̃± for the magnetic theory) are non-negative (18). That we obtain the same conditions from two different considerations is not a coincidence, and we will encounter the same phenomenon in later sections. In fact, we can think of this as a convenient way to obtain the R-charge/conformal dimension of monopole operators.
When we analyze the convergence of the partition function, we go to infinity in the Coulomb branch in the direction of the Cartan corresponding to a monopole operator V, for example σ_1 = −σ_N → ∞, σ_{j≠1,N} = 0 in the magnetic theory (cf. Appendix A). Since the Coulomb branch parameter is a dynamical version of the real mass parameter, this has the effect of making the fields massive. We can integrate out these massive modes, except that we could then have an induced Chern-Simons term with level k_eff and an induced FI parameter ζ_eff. In the theories discussed in this paper, we have k_eff = 0 but ζ_eff ≠ 0, and the dimension (or equivalently the U(1)_R-charge) of the monopole operator V, whose real part is e^{−2πσ_1}, can be identified with ζ_eff. In our example, the positivity conditions obtained this way from the partition function indeed match with (18).
As a side remark, this also explains clearly that the convergence condition is weaker than the unitarity constraint: for a monopole operator V the former requires ∆_V ≥ 0, while the latter requires ∆_V ≥ 1/2.
Duality as Equality In the literature, the duality between two 3d N = 2 theories is often translated into an equality of the S^3 partition functions:

Z_electric = Z_magnetic.   (23)

Such equivalences have been verified in [22,[25][26][27][28].³ There are subtleties in the identity (23), however. In fact, as we have already seen in (17), as a function of the parameter a the right-hand side always converges, whereas the left-hand side converges only when a is small enough, invalidating (23).
We can see this problem more sharply for the N_f = N_c theory. The magnetic partition function has poles at 2N_c a ∈ Z\{1}, as follows from the definition of the function l(z) (see Appendix B). The electric partition function, however, does not show any singular behavior at these points. This is not in contradiction with the existing results in the literature. In the analysis above we assumed that a is a real parameter; however, in the literature a takes values in the complex plane, where the imaginary part of a plays the role of the real mass parameter for the U(1)_A flavor symmetry. The S^3 partition function is known to be a holomorphic function of this complexified parameter [15,29], and we can then regard both sides of (23) as complex functions of a, establish the identities in the regions where the real part of a is small, and then analytically continue into the whole complex plane. This is what is usually meant by the identity (23).
However, we do not wish to turn on imaginary parts of a for the purpose of this paper.
When we turn on the real mass parameter for the U(1)_A symmetry, the quarks Q, Q̃ get a mass and hence can be integrated out in the deep IR, thereby dramatically changing the IR behavior of the theory. We need to keep a real for the numerical analysis of the F-maximization below.
This means that we need to be careful in interpreting the equality (23), at least for the purpose of F-maximization: when only one of the two sides converges, we should use the convergent partition function to determine the IR conformal dimensions, whereas if both sides converge they should give the same value of the F-function (possibly up to an overall constant independent of the parameter a) and hence the same IR conformal dimensions.⁴ In the case of the SU(N_c) SQCD discussed here, the result (17) shows that the magnetic partition function is convergent for all values of the parameter a. This is in sharp contrast with the case of the U(N_c) SQCD discussed in [6], where the region of convergence of the magnetic partition function was complementary to that of the electric partition function; the overlapping region exists only for fine-tuned values of N_c and N_f, and vanishes in the Veneziano limit.
F-maximization We can determine the values of a and b by maximizing the free energy F, related to the S^3 partition functions (12) and (15) (which are identical thanks to the duality, modulo the issues just mentioned) by the relation

F = −log |Z_{S^3}|.   (24)

Maximization with respect to b straightforwardly gives b = 0. We can then numerically search for the maximal value of F with respect to a. Note that it is crucial for our numerical analysis that F takes a maximal value, not just an extremal value. In fact, in many of our examples the function F has more than one local maximum.
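To illustrate the procedure, the following is a minimal numerical sketch of F-maximization for the simplest gauge theory, U(1) SQED with N_f flavors (not one of the gauge groups analyzed in this paper); it builds the one-loop function l(z) from its defining differential equation l'(z) = −πz cot(πz), l(0) = 0, rather than from the closed-form expression in Appendix B. For N_f = 1 the maximum should sit at ∆_Q = 1/3, as required by the duality of SQED with the XYZ model. The nested quadratures are slow but straightforward.

```python
from mpmath import mp, mpc, mpf, quad, cot, exp, log, pi, inf

mp.dps = 12

def ell(z):
    """Jafferis' one-loop function: l(0) = 0, l'(z) = -pi*z*cot(pi*z)."""
    z = mpc(z)
    # integrate l' along the straight path from 0 to z
    # (pole-free as long as 0 < Re z < 1, i.e. 0 < delta < 1)
    return quad(lambda t: -pi * (t * z) * cot(pi * t * z) * z, [0, 1])

def Z_sqed(delta, nf):
    """S^3 partition function of U(1) SQED with nf flavors of R-charge delta."""
    def integrand(s):
        # one-loop determinants of nf quarks Q and nf anti-quarks Qtilde
        return exp(nf * (ell(1 - delta + 1j * s) + ell(1 - delta - 1j * s)))
    return quad(integrand, [-inf, inf])

def F(delta, nf=1):
    """Free energy, Eq. (24): F = -log|Z|."""
    return -log(abs(Z_sqed(mpf(delta), nf)))

# Coarse scan over the R-charge; for nf = 1 the maximum should sit near 1/3.
best = max((F(d), d) for d in [0.30, 0.32, 1.0 / 3.0, 0.34, 0.36])
print(f"F is maximized near Delta_Q = {best[1]:.4f}")
```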
We have carried out explicit numerical integration of the matrix integral, and obtained the critical value of a after F -maximization, for some sample values of small N c and N f ≥ N c , as shown in Table 1.
For the values N_f = N_c = 3, 4, 5, 6 (the entries in red boxes in Table 1) we find, after performing the first F-maximization, that the unitarity bound (11) is violated for some operators. We interpret this to mean that we need to decouple the corresponding operators.

Table 1: The scaling dimension ∆_Q of the flavor multiplets (above) and the maximal value of the F-function (below, in parentheses) at the conformal fixed points, for a few small values of N_f and N_c in 3d N = 2 SU(N_c) SQCD with N_f flavors. We have computed this from the electric theory, except for the diagonal entries and the blue-colored entries, where we used the simpler magnetic theory for more efficient numerical evaluation. For N_c = N_f = 3, 4, 5, 6 either one or two operators hit the unitarity bound, and consequently we need to decouple them and repeat the F-maximization with the modified F-function (25), until the procedure terminates. For N_c = N_f = 4, 5, 6 we find a sequence of decouplings of operators, eventually leading to a free IR theory; for example, for N_c = N_f = 4 the operators M and Y decouple first, and the baryon B becomes free after the second F-maximization. Similarly, for N_c = N_f = 6 we find that first Y decouples, then M, and finally B becomes free. Such a decoupling pattern is shown inside the bracket in the red box. Note that the value of the scaling dimension ∆_Q shown here is the value after all the possible decoupling effects are taken into account, and not the value after the first F-maximization.

The details of the decoupling vary for different values of N_f and N_c, as shown in the diagonal entries of Table 1.
After decoupling an operator, we need to perform F-maximization again with a modified F-function, given for example by (25); in some cases further operators subsequently become free, eventually leading to a free IR theory.
Something interesting happens for N_f = N_c ≥ 6. After the first F-maximization, we find that the monopole operator Y decouples. After decoupling Y, we find that the modified F-function apparently has no maximum. We propose to interpret this as a signal of the decoupling of the baryon B. After yet another F-maximization we find that the meson M also becomes free, leading to the critical value a = 1/4 and a trivial IR fixed point. One consistency check of this proposal is that the critical value a = 1/4 is consistent with the analysis of the Veneziano limit shown below in Figure 2.
Note also that the value of the F at the critical value decreases as we decrease the value of N f . This is consistent with the F -theorem [30][31][32][33], since we can give a mass to one of the flavors, thereby reducing the number of flavors by one. It is probably worth pointing out that for a fixed flavor number N f the value of the free energy F could decrease as we increase N c .
In Table 1 the monopole operator decoupling happens only in the diagonal N f = N c .
However this is an artifact of the choice of small N c , N f values. The constraint from the unitarity bound (10) becomes stronger as we increase the value of N c , N f , and therefore we expect to find more and more examples of (N c , N f ) with monopole decoupling.
To analyze monopole decoupling in the Veneziano limit, we adopt the techniques of [6, Appendix A.3] (see also [34]), which give the scaling dimension ∆_Q of the quarks/anti-quarks as the expression (26), which in the Veneziano limit reduces to the large-x expansion (27). The combined plot of the numerical data points as well as the large-x expansion (27) is shown in Figure 2. We find that ∆_Q(x) hits the unitarity bound at the critical value x_c ≈ 1.46.

Figure 2: ∆_Q as a function of x = N_f/N_c in the Veneziano limit. Points were computed by extrapolating small-N_c numerical results. The dotted line is the unitarity bound (11); ∆_Q(x) hits it at the critical value x_c ≈ 1.46. The black curve at large values of x is the analytical approximation (27). In the region to the right of the red curve we use the electric theory, while in the left region we use the magnetic theory, with the monopole Y decoupled when needed. The right plot is a zoomed-in version of the left one around the critical value x_c.
The analysis in this subsection is partly case-by-case, and it would be interesting to find more uniform patterns in their IR behaviors. A related question is to find a concrete UV Lagrangian description of the theory after decoupling of monopole operators, perhaps along the lines of [35].
USp(2N_c) SQCD

Let us next consider the USp(2N_c) theory.⁵ We find that the structure here is similar to the case of the U(N_c) theory. In particular, we find a small window where the electric and magnetic descriptions hold simultaneously, a window which shrinks in the Veneziano limit.
Dual Pairs
Electric Theory The electric theory is given by quarks Q, and comes with a monopole operator Y. We do not have a superpotential term: W_electric = 0. The theory has an SU(2N_f) × U(1)_A × U(1)_{R−UV} flavor symmetry, under which the quark Q and the monopole operator Y carry definite charges.

Magnetic Theory For N_f > N_c + 1, the dual magnetic theory has USp(2N_f − 2N_c − 2) gauge symmetry with 2N_f chiral multiplets, the dual quarks q_i, and additional singlet chiral multiplets M and Y [21]. The Coulomb branch of this magnetic theory is parametrized by the monopole operator Ỹ.
The charge assignment is as listed, and the theory also has a superpotential. For N_f = N_c + 1, we expect that the magnetic theory is trivial; we propose that the magnetic theory in this case is described by Y and M, with the superpotential W ∝ Y Pf(M). We can verify that this proposal is consistent with the charge assignment. We have a deformed moduli space Y Pf(M) = 1 for N_f = N_c [36], and the supersymmetry is broken for N_f < N_c. We therefore concentrate on the case N_f > N_c below.
IR Analysis
Let us parametrize the IR R-symmetry by R_IR = R_UV + a J_A, where the notation is the same as in the previous section.
Unitarity Bound The dimensions of the operators Y and M follow from the charge assignment above, and the unitarity bound ∆ ≥ 1/2 is satisfied inside a window of the mixing parameter a.

Partition Function The S³ partition function of the electric theory is given by a matrix integral of the form (86), (87); for the magnetic theory, we have one such expression for N_f > N_c + 1 and another for N_f = N_c + 1. Using again the expansion (85), we determine the convergence bound for each description. The width of the intersection of these two regions shrinks to zero in the Veneziano limit.
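As a numerical illustration of the asymptotics (85) that control such convergence bounds, here is a minimal check (ours, not the paper's code; we use mpmath and rely on its branch conventions for the dilogarithm, and the explicit form of l(z) is the one recalled in Appendix B) that Re l(1 - ∆ + iσ) approaches -π(1 - ∆)|σ| at large |σ|:

```python
# Check (ours) of the large-|sigma| asymptotics Re l(1-Delta+i*sigma) ~ -pi*(1-Delta)*|sigma|,
# which controls the convergence of the S^3 partition functions.
import mpmath as mp
mp.mp.dps = 25

def ell(z):
    q = mp.e**(2j*mp.pi*z)
    return -z*mp.log(1-q) + 0.5j*(mp.pi*z**2 + mp.polylog(2, q)/mp.pi) - 1j*mp.pi/12

Delta = mp.mpf('0.4')
for s in [5, 10, 20, 40]:
    ratio = mp.re(ell(1 - Delta + 1j*s)) / (-mp.pi*(1 - Delta)*s)
    print(s, ratio)   # ratio -> 1 as s grows
```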
F-maximization We can again numerically maximize the F-function for small values of N_c and N_f. For this purpose it is sometimes useful to use the trick explained in Appendix E (the same trick could be applied to the SO(N_c) theory discussed in the next section). The results of the numerical computation are summarized in Table 2.

Figure 3: The unitarity bound and the convergence bound plotted in terms of the mixing parameter a (see (33)). The correct IR value of a should be determined from F-maximization. The structure here is very similar to the U(N_c) SQCD case discussed in Appendix C (Figure 10).
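To make the F-maximization procedure concrete, here is a minimal sketch (ours, not the paper's code) for the simplest case of a U(1) theory with N_f vector-like flavor pairs; the grid scan and all function names are our own choices, and a more careful treatment would refine the maximum with a local optimizer:

```python
# Minimal F-maximization sketch (ours): U(1) with Nf flavor pairs of charges +1/-1
# and R-charge a; F(a) = -log Z(a) is scanned over a grid of trial values of a.
import mpmath as mp
mp.mp.dps = 20

def ell(z):
    """Jafferis' l(z); a chiral of R-charge Delta contributes exp(l(1-Delta+i*rho(sigma)))."""
    q = mp.e**(2j*mp.pi*z)
    return -z*mp.log(1-q) + 0.5j*(mp.pi*z**2 + mp.polylog(2, q)/mp.pi) - 1j*mp.pi/12

def Z(a, Nf):
    # integrand of the U(1) matrix model: 2*Nf chirals of R-charge a and charges +/-1;
    # it is real and even in sigma, and damped as exp(-2*pi*Nf*(1-a)*|sigma|)
    integrand = lambda s: mp.e**(2*Nf*mp.re(ell(1 - a + 1j*s)))
    return 2*mp.quad(integrand, [0, mp.inf])

def F(a, Nf):
    return -mp.log(Z(a, Nf))

Nf = 2
Fmax, astar = max((F(a, Nf), a) for a in mp.linspace(0.05, 0.75, 36))
print("F is maximized at a =", astar, "with F =", Fmax)
```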
For N_f = N_c > 2, we see that the monopole Y always saturates the unitarity bound, and thus we set its scaling dimension to 1/2. This modified magnetic theory forces a = 1/4. Inside the table, Y is the only operator which decouples in the IR, and in this sense the structure here is much simpler than that of the SU(N_c) SQCD discussed in the previous section.
We can again check the consistency with the F -theorem by decreasing the values of N f for a fixed N c .
We can also obtain analytic expressions for the scaling dimensions in the large-N_f limit, by the techniques of [6, Appendix A.3]. This gives the scaling dimension of the electric quarks as a large-N_f expansion, which reduces in the Veneziano limit to a large-x expansion.

Table 2: The scaling dimension ∆_Q of the flavor multiplets (above) and the maximal value of the F-function (below), at the conformal fixed points for a few small values of N_f and N_c in 3d N = 2 USp(2N_c) SQCD with N_f flavors. For the red boxes on the diagonal (i.e. 2N_f = 2(N_c + 1)) we find after the first F-maximization that we need to decouple the monopole operator Y. In most of the entries we used the electric partition function, except in the red-colored (along the diagonal) and blue-colored (at 2N_c = 8, 2N_f = 12) entries, where we used the magnetic partition function, since the magnetic description is more suitable for numerical computations.
On the other hand, when x is close to 1, the (x - 1) expansion of [6, Appendix A.3] makes sense and we get another expansion of ∆_Q(x). In Figure 4 we have plotted these results in combination with the numerical data points coming from explicit integrations for small values of N_c and N_f. We find good agreement between the numerical and analytical results, from which we determine the critical value x_c.
4 SO(N_c) SQCD
Let us now discuss the case of the SO(N_c) gauge group. The duality for this case is worked out in [23] (see also [28,37,38]).
Dual Pairs
Electric Theory The electric theory has quarks Q in the fundamental representation, and as usual we have W_electric = 0. The theory also has the monopole operator Y, the baryon B, as well as a composite "baryon-monopole" operator β. Here (N_f)^{N_c}_A denotes the totally antisymmetric representation with N_c indices. We have also listed the discrete symmetries Z_2^C, Z_2^M and Z̃_2^M (we list charges only for gauge-invariant fields). We can easily check that Z̃_2^M is a combination of Z_2^C and Z_2^M, and is not independent. These discrete symmetries will play crucial roles when we change the gauge groups later in section 4.3.
Magnetic Theory
The magnetic theory has a superpotential (45). The theory has the same flavor symmetries as the electric theory, under which the fields transform as follows. Note that the charge assignment for the Z_2^M and Z̃_2^M symmetries here is consistent with the identification (44).
For the case N_f = N_c - 1 the magnetic theory has no gauge group and contains the fields Y and M, with the superpotential given by (47) (see [28,37,38] for the O(N_c)_+ case). The charge assignment is given accordingly, and as before this case should be treated separately from the rest.
For lower values of N_f, we have a quantum-corrected moduli space for N_f = N_c - 2 [38], and the supersymmetry is broken for N_f < N_c - 2. We will hereafter concentrate on the case N_f ≥ N_c - 1.
IR Analysis
Let us parametrize the IR R-symmetry by R_IR = R_UV + a J_A, where as before R_UV, R_IR, J_A are the generators of U(1)_{R-UV}, U(1)_{R-IR}, U(1)_A, respectively.
Unitarity Bound Let us consider the electric theory. The dimensions of the operators are determined by the charge assignment, and the unitarity bound ∆ ≥ 1/2 gives the constraints (51). Notice that in the Veneziano limit, the unitarity bound for the monopole operator Y depends on the value of x, whereas that for the baryon-monopole β is independent of x.
Partition Function Let us write down the S³ partition functions of the electric and magnetic theories. The precise expression depends on whether N_c is even or odd.
For N_c even (N_c = 2r), the electric partition function is given by the corresponding matrix integral (see (86) and (87)), while for N_c odd (N_c = 2r + 1) we have a similar expression. The magnetic partition function is similar: there is one expression for N_f - N_c + 2 even (N_f - N_c + 2 =: 2r), one for N_f - N_c + 2 =: 2r + 1 with N_f > N_c + 1 (i.e. r > 0), and one for N_f = N_c - 1 (i.e. r = 0). The convergence bounds of the partition functions take the following form, which holds irrespective of whether N_c and N_f - N_c + 2 are even or odd. In this case there is a small overlapping region where both the electric and magnetic descriptions are valid. However, the width of the overlapping region shrinks to zero in the Veneziano limit. We therefore should not expect both the electric and magnetic descriptions to be valid, except for limited values of N_c and N_f.

Figure 5: The unitarity bound and the convergence bound plotted in terms of the mixing parameter a (see (49)); the regions where the electric and the magnetic descriptions are valid are indicated. The correct IR value of a should be determined from F-maximization. Depending on the value of a, the operators decoupling might be none, only Y, or both Y and β. While the baryon-monopole β could in principle decouple, this does not happen in the examples we studied, both numerically and analytically.
F-maximization We have done the F-maximization for several small values of N_c and N_f; the results are summarized in Table 3.

Table 3: The scaling dimension ∆_Q of the flavor multiplets (above) and the maximal value of the F-function (below), at the conformal fixed points for a few small values of N_f and N_c in the 3d N = 2 SO(N_c) (or O_+(N_c)) SQCD with N_f flavors. For the diagonal entries (N_f = N_c - 1) we have used the partition function of the magnetic theory, and for the entries in red boxes we need to decouple the monopole operator Y. All other entries are computed in the electric theory, except in blue boxes and in diagonal entries, where we used the magnetic theory for better numerical computations.
As commented before, one interesting feature of SO(N_c) theories is the existence of the baryon-monopole operator β. This means that the baryon-monopole β, in addition to the baryon B, could decouple in the IR. In the examples we studied in Table 3, however, we find that β never decouples (this is also the case in the Veneziano limit, to be discussed below; see Figure 6).
We also compute ∆ Q (N c , N f ) for both N c odd and even, in the large N f limit. The S 3 partition functions take slightly different forms for N c odd and even, however it is natural to think that the value of ∆ Q (N c , N f ) should coincide between the two cases in this limit.
An analytic calculation order by order in 1/N_f shows that the odd and even cases give the same answer, which in the Veneziano limit reduces to the expansion (59). In the case of the magnetic theory, we can instead expand ∆_Q(N_c, N_f) in powers of 1/N_c, which in the Veneziano limit reduces to (61).

Figure 6: ∆_Q as a function of x = N_f/N_c in the Veneziano limit. Points were computed by extrapolating small-N_c numerical results. The dotted line is the unitarity bound (51). We find that ∆_Q(x) hits the unitarity bound at the critical value x_c ≈ 1.45. The black curves at large and small values of x are the analytical approximations (59) and (61), respectively. In the region to the right of the red curve we use the electric theory, while in the region to the left we use the magnetic theory with the monopole Y decoupled.

4.3 Gauging Discrete Symmetries

Note that all these gauge groups have the same Lie algebra as that of SO(N_c). We can apply the same gauging to the magnetic theory. In fact, all the terms which appear in the magnetic superpotentials (see (45) and (47)) have charge +1 under any of the three Z_2 symmetries, and hence the gauging is consistent with the superpotential. There is one big difference from the electric case, however: the roles of Z^M and Z̃^M should be exchanged, as follows from the identification (44).
When we gauge two of the Z_2 symmetries, there is only one choice, since we are then gauging all the discrete symmetries, and we obtain the Pin(N_c)_- theory.
These gauging patterns are summarized in Figure 7. This immediately implies the correct form of the Aharony-like duality for the O(N_c) theories [23], which should be compared with the SO duality. When we gauge discrete Z_2 symmetries, we project out the fields which have charge -1.
For example, when gauging the Z_2^C symmetry (charge conjugation symmetry) we obtain the dualities for O_+ gauge groups. In this case, the baryon B and the baryon-monopole β are projected out, while their combinations, such as B² and Bβ, remain in the theory.
Similarly, when we gauge either the Z^M or the Z̃^M symmetry, the monopole operator itself is projected out, and we instead have its square Y_spin = Y² remaining (see (64)).

IR Analysis We now come to a natural question: does the gauging of the discrete symmetries discussed above have any impact on the IR behavior of the theory?
It turns out that most of the preceding analysis for the SO gauge groups does not require any modification. This is because we are primarily interested in F-maximization, which requires only the S³ partition function with no operators inserted, and hence is insensitive to the gauging of discrete symmetries.
There is one big change, however. While we have the same set of operators, gauging makes some of the gauge-invariant operators gauge-non-invariant. Since the unitarity bound applies only to gauge-invariant operators, the discrete symmetry gauging will in general change the unitarity bounds.
In the analysis for the SO gauge groups we did not find any examples where the baryon B, the meson M, or the baryon-monopole β decouple. We can therefore concentrate on the monopole operator Y. As we discussed above, the change for the Spin, O_- and Pin gauge groups is that the gauge-invariant monopole operator is not Y, but rather Y_spin = Y² (64), whose scaling dimension is twice that of Y.
This immediately means that the unitarity bound for Y in (51) is replaced by ∆_Y ≥ 1/4, since the bound ∆ ≥ 1/2 now applies to Y_spin = Y² with ∆_{Y_spin} = 2∆_Y. As expected, the difference from the discrete symmetry gauging goes away in the Veneziano limit.
We can redo the IR decoupling analysis to obtain the new results in Table 4. Clearly, a difference can only arise when the monopole operator Y decouples in the SO theory. In the table this happens for N_f = N_c - 1 = 2, 3, where after the gauging the monopole operator Y no longer decouples.
Digression on Group Theory
The large-N_f expansions of ∆_Q found in the previous sections can be written uniformly in terms of group-theoretical data as in (66), where C_F and C_A are the quadratic Casimirs in the fundamental and adjoint representations, respectively. Concretely, with the explicit values of the Casimirs we can verify that (66) reproduces the formulas (26), (40) and (58). Note that in the leading Veneziano limit we always have C_F ~ N_c/2, C_A ~ N_c, and the differences between the gauge groups are washed away. This is basically the reason that the plots of ∆_Q (in Figures 2, 4 and 6), as well as the critical value x_c, are similar among the different choices of gauge groups with the same rank.
Gauging and Quiver Gauge Theories
The difference between the U(N_c) theory and the SU(N_c) theory is an example where the gauging of a flavor symmetry dramatically modifies the IR dynamics. We have also seen in section 4.3 that the gauging of discrete symmetries changes the IR decoupling. These can be thought of as particular examples of a more general phenomenon where the gauging of a flavor symmetry modifies the IR dynamics of the theory.
As yet another example of this type, we study the gauging of the SU(N_f) flavor symmetry of the SQCD, to obtain a quiver gauge theory with a product gauge group. We discuss the effect of the gauging on the IR R-symmetry, and on the decoupling of monopole operators. Such quiver gauge theories naturally arise in string theory (see e.g. [39,40] and references therein), and (as we will discuss later in the next section) for example in the compactifications of M5-branes.
Electric Gauging
Let us start with the U(N_c) SQCD with N_f flavors, discussed in Appendix C.
As shown in (88), this theory has SU(N f ) L × SU(N f ) R symmetry. Let us choose to gauge the diagonal SU(N f ) of these two SU(N f ) symmetries, which we denote by SU(N f ) V .
The resulting theory then has U(N_c) × SU(N_f) gauge symmetry. As a result of this gauging, we obtain a quiver gauge theory, whose quiver diagram is shown in Figure 8. In the rest of this section we use the notation N := N_c, M := N_f, to make the symmetry of the quiver more manifest. In fact, now that the symmetry is manifest, we can regard the same quiver gauge theory as obtained from gauging the SU(N)_V flavor symmetry of the U(M) SQCD with N flavors, with N and M reversed from above (Figure 8).
Now, the question we ask in this subsection is whether or not this gauging of the SU(M) symmetry has any effect on the discussion of the IR scaling dimensions and the decoupling of monopole operators.
The best way to see this is to write down the S³ partition function as a function of the parameter a corresponding to the mixing of the U(1)_A symmetry, as in Appendix C. Note that the integral is invariant under a simultaneous shift of σ_i and ρ_j, and this represents the overall decoupled U(1) commented on before. The same partition function can also be written with the roles of the two nodes exchanged. Either way, it is clear that gauging dramatically changes the partition function as a function of the parameter a, and consequently the IR R-charges/conformal dimensions of the theory. In fact, this is to be expected: we have a manifest symmetry between N and M after gauging, whereas the N_c = N, N_f = M theory and the N_c = M, N_f = N theory clearly have different IR dynamics, as we have seen in the rest of this paper.
This symmetry between N and M is actually a source of trouble when we consider the convergence bound for the electric S³ partition function. The convergence bound before gauging was worked out in (17), and since we now have a symmetry between N and M, we should impose the same constraint with N and M (N_c and N_f) exchanged. This gives a combined constraint, and in particular a will be negative unless N = M.
Note that the convergence constraint is ameliorated by including flavor matter for the gauge groups SU(N) and SU(M). Suppose that we include k flavors (resp. l flavors) for the gauge group SU(N) (resp. SU(M)). The convergence constraint is then relaxed, and in practice it goes away for sufficiently large k and l. Such flavors are natural from string theory constructions; however, we will set k = l = 0 in the discussion below, to simplify the analysis.
Magnetic Gauging
To avoid this convergence issue, one might be tempted to switch to the magnetic description.
Namely, instead of gauging the flavor symmetry of the electric theory, we can choose to gauge the flavor symmetry of the magnetic theory. The resulting partition function is given by (72) (compare this with (15)). Equivalently, we can start with the U(N) SQCD with M flavors and then gauge the SU(M) flavor symmetry, leading to (73) (compare (98)). The formal equivalence of the two expressions (72) and (73) (up to a constant phase factor) can be checked directly by using the identities (108) and (84). (In fact, this is essentially the argument used for deriving the SU dualities from the U dualities, as reviewed in Appendix D.)
Unfortunately, the convergence bounds for (72) and (73), where the first (second) inequality comes from the convergence of the σ (ρ) integrals, cannot be satisfied. In other words, the partition functions (72) and (73) cannot be used for any practical F-maximization.
The situation is better for the case M = N. We can then gauge the SU(M = N) flavor symmetry of the magnetic U(N) theory, leading to the partition function (75). We can instead choose to gauge the SU(N)_V × U(1)_B ⊂ U(N) flavor symmetry of the magnetic SU(M) theory, leading to the expression (76). The equivalence of (75) and (76) can again be checked by using the identities (108) and (84).
The convergence bound for these expressions is given by an inequality on a. It is natural to expect that the magnetic quiver theories discussed here are dual to the electric quiver gauge theories discussed before. There is one caveat, however. We have implicitly assumed that the two operations, namely the gauging of the flavor symmetry and the passage to the dual magnetic description, commute with each other. Since the duality at hand is an IR duality, in general the gauging of the flavor symmetry could change the behavior under the RG flow, and hence spoil the IR duality.
We however expect that this does not happen when the gauge coupling for the newly-gauged flavor symmetry is sufficiently weak.

Table 5: The critical value of the parameter a (above) and the critical value of the F-function (below), for the SU(N) × U(N) ~ (U(N) × U(N))/U(1) theory, as computed from the magnetic partition function (75). Notice that the critical value of a is different from that in Table 1, before gauging the U(N) flavor symmetry. In all these cases there is no indication that any operator decouples in the IR.
Numerical Results
The numerical analysis of the quiver case is computationally more challenging than the SQCD case, and as we have seen the convergence bound tends to be severe. Therefore, let us here consider the simplest case of the N = M magnetic theory. We can then perform F-maximization for the partition function (75) (compare with Table 1). The numerical results for the values N = M = 2, 3, 4, 5 are summarized in Table 5.
There are two remarks on this result. First, the value of a at the maximum is different from that before gauging, as expected. Another non-trivial result is that none of these cases exhibits operator decoupling. This is partly because the meson M of the magnetic theory, after gauging, is now an adjoint field with respect to the newly-introduced gauge symmetry, and hence is not gauge invariant. Therefore there is no need to consider the unitarity bound for the meson itself.
General Quivers
Having discussed quiver gauge theories with two nodes, we can discuss more general 3d N = 2 quiver gauge theories, whose matter content is determined by a quiver diagram, i.e. an oriented graph. We can then gauge appropriate flavor symmetries, whose effect is to concatenate two quiver diagrams and to generate a more complicated quiver diagram (Figure 9). For example, if we glue two quivers Q_1, Q_2 at a node to obtain a new quiver Q, then the partition function for the larger quiver Q can be schematically written in the form

Z_Q[\sigma_1, \sigma_3](a_1, a_2) = \int d\sigma_2 \, Z_{\mathrm{vector}}[\sigma_2] \, Z_{Q_1}[\sigma_1, \sigma_2](a_1) \, Z_{Q_2}[\sigma_2, \sigma_3](a_2),

where Z_vector[σ_2] is the contribution from the vector multiplet which is gauged under the gluing, and a_1, a_2 denote the parameters representing the flavor symmetries of the theories Q_1, Q_2.
Figure 9: We can generate a larger quiver Q by gluing together two quivers Q_1 and Q_2 (gauging/gluing). In gauge theory language the circle (the square) represents the gauge (global) symmetry, each of which can be for example U(N_c) or SO(N_c) with different values of N_c for different nodes. Gluing in this context means taking two flavor symmetries (represented by the two squares in the middle, which we assume to contain the same flavor symmetry group) and gauging the diagonal subgroup of the product. The partition function behaves nicely under this gluing; however, neither the F-maximization nor the IR behavior does.
As before, the extremum of Z_Q[σ_1, σ_3](a_1, a_2) as a function of a_1 (a_2) is in general different from the extremum for the quiver Q_1 (Q_2) alone. This means that to tell whether the monopole operator for a gauge group in the quiver Q_1 decouples or not, we need to know in advance the detailed data of the quiver Q_2, however large the quiver Q_2 may be. This is in sharp contrast with the case of 3d N = 4 supersymmetry, where the IR decoupling of the monopole operators can be checked locally on the quiver diagram, by verifying the inequality N_f ≥ 2N_c (cf. [44,45] for recent discussions in the gravity dual).
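As a toy illustration of this point (our construction, not an example from the paper), the following sketch glues two "one-node theories", each a vector-like pair of chirals, through a gauged U(1), and shows numerically that the F-maximizing value of a_1 shifts once the shared flavor symmetry is gauged:

```python
# Toy illustration (ours): gauging a shared flavor symmetry shifts the F-maximum.
# Each "theory" is a vector-like pair of chirals of R-charge a and flavor charges
# +1/-1; ungauged, F-maximization gives the free value a = 1/2, while gluing two
# such theories through a gauged U(1) generically shifts the optimum.
import mpmath as mp
mp.mp.dps = 20

def ell(z):
    q = mp.e**(2j*mp.pi*z)
    return -z*mp.log(1-q) + 0.5j*(mp.pi*z**2 + mp.polylog(2, q)/mp.pi) - 1j*mp.pi/12

def F_alone(a):                       # flavor U(1) ungauged (zero real mass)
    return -2*mp.re(ell(1 - a))

def F_glued(a1, a2):                  # gauge the common U(1): integrate over sigma
    Z = mp.quad(lambda s: mp.e**(2*mp.re(ell(1-a1+1j*s)) + 2*mp.re(ell(1-a2+1j*s))),
                [-mp.inf, 0, mp.inf])
    return -mp.log(Z)

grid = [mp.mpf(i)/100 for i in range(30, 71)]
print(max((F_alone(a), a) for a in grid))                  # optimum at a = 0.5
print(max((F_glued(a, mp.mpf('0.5')), a) for a in grid))   # optimum shifts away
```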
Implications for M5-brane Compactifications
The comments from the previous section have interesting implications for M5-brane compactifications, to which we now turn.
Boundary Conditions of 4d N = 4 SYM
Let us first start with the results of [14], which classify the 1/2-BPS boundary conditions of 4d N = 4 SYM (if we consider a mixture of Dirichlet and Neumann boundary conditions, we obtain a slightly more general class of theories T^σ_ρ[SU(N)], labeled by a pair of partitions ρ, σ). We can now consider the 1/4-BPS boundary conditions [46,47], whose boundary field theory B would then be a 3d N = 2 theory. As we have seen above, the criterion for decoupling/non-decoupling is now more complicated than the inequality N_f ≥ 2N_c. In particular, in the Veneziano limit (which is natural in the context of the holographic dual) we have learned from [6] that the decoupling happens at the critical value x_c ≈ 1.45 < 2. This suggests that the minimal set of (Neumann-type) boundary conditions should no longer be labeled by partitions. (In 3d N = 2 theories, for each vertex of the quiver diagram we have the choice of whether or not to include an N = 2 adjoint chiral multiplet. This means that the natural generalization of the 1/2-BPS analysis is that the boundary conditions are labeled by a decorated partition; however, our point here is that this is likely a redundant characterization of the IR boundary condition.) It would be interesting to see if/how this conclusion could fit together with the analysis of the generalized Nahm equations in [46,47], or their 4d N = 1 counterparts [48,49].

Now we can repeat the same argument for the 1/4-BPS boundary conditions, and again obtain 1/4-BPS boundary conditions for 5d N = 2 SYM and the 6d (2,0) theory. Our conclusion is then that these defects should not be labeled by partitions, since otherwise we would be over-counting. This should have some interesting counterparts as data specifying 1/4-BPS defects in 4d N = 2 theories arising from 2-manifold compactifications [55,56], or 3d N = 2 theories arising from 3-manifold compactifications [57-60].
A Monopole Operators

The Coulomb branch of a 3d N = 2 gauge theory is classically parametrized by monopole operators Y_α, built from the Coulomb branch scalars and the dual photons; the dual photon is periodic with period 2πg_3², making Y_α well-defined. Only the Y_α for positive simple roots α are independent, and hence classically we have r independent monopole operators, parametrizing the classical Coulomb branch. However, many of these Coulomb branch directions are lifted by quantum corrections (an instanton-generated superpotential).
For example, for SU(N_c), classically we have N_c - 1 monopole operators Y_i; however, the only operator remaining in the end is Y = Y_1 Y_2 ⋯ Y_{N_c-1}. This is the monopole operator discussed in section 2.
B S³ Partition Functions
The S³ partition function [15-17] is given by (in the absence of Chern-Simons terms, FI parameters and real mass parameters)

Z = \frac{1}{|W|} \int \prod_j d\sigma_j \prod_{\alpha>0} \left[ 2\sinh\left(\pi\,\alpha(\sigma)\right) \right]^2 \prod_{\Phi} \prod_{\rho \in R_\Phi} e^{\, l(1-\Delta_\Phi + i\rho(\sigma))},

where |W| is the order of the Weyl group, R_Φ (∆_Φ) is the representation under the gauge group (the R-charge) of the chiral multiplet Φ, the products run over the positive roots α and the weights ρ of R_Φ, and the function l(z) is defined by

l(z) = -z \log\left(1 - e^{2\pi i z}\right) + \frac{i}{2}\left( \pi z^2 + \frac{1}{\pi}\,\mathrm{Li}_2\!\left(e^{2\pi i z}\right) \right) - \frac{i\pi}{12}.

This function l(z) has poles at integers on the real axis, except at the origin. We also have the relation l(z) + l(-z) = 0. For convergence of the partition function we use the asymptotics Re l(1-\Delta+i\sigma) ≃ -π(1-∆)|σ| in the limit σ → ±∞, and the roots are given by the standard expressions for each gauge group (for example, α_{ij}(σ) = σ_i - σ_j, i ≠ j, for U(N_c)).

C U(N_c) SQCD

In this Appendix we briefly summarize the case of the U(N_c) gauge group [6]. It is instructive to compare the discussion in this Appendix with that of the SU(N_c) theory in the main text.
Some of the ingredients discussed in this Appendix will be used in the discussion of quiver gauge theories in section 6.
C.1 Dual Pairs
Electric Theory The electric theory is similar to the SU(N c ) SQCD. The major difference is that we have two remaining monopole operators V ± .
The theory has a U(N_c) gauge symmetry, as well as an SU(N_f)_L × SU(N_f)_R × U(1)_A × U(1)_J flavor symmetry. Note that compared with the SU(N_c) case we have the topological U(1)_J symmetry, whereas the U(1)_B symmetry, being part of the gauge symmetry, is absent.
Magnetic Theory Let us first assume N_f > N_c. The magnetic theory has gauge group U(Ñ_c) (remember the definition Ñ_c := N_f - N_c), and has dual quarks q, anti-quarks q̃, the meson M = QQ̃, and the singlets V_±. The magnetic theory also has two monopole operators Ṽ_± for the dual photon of the magnetic gauge group. The superpotential is given by W = M q q̃ + V_+ Ṽ_- + V_- Ṽ_+. The theory again has the same flavor symmetry as the electric theory, under which the fields transform as follows. For N_f = N_c, the magnetic theory does not have a gauge group, and is described by the chiral superfields V_±, M, with the superpotential W = V_+ V_- det M. The charge assignment in this case follows accordingly.
C.2 IR Analysis
As in the other cases discussed in the main text, we need to consider the IR mixing of the U(1) R-symmetry with the U(1)_A symmetry, R_IR = R_UV + a J_A. Note that we do not need to consider mixing with the topological U(1)_J symmetry, since otherwise parity would be broken.
Unitarity Bound The dimensions of V_± and M are given by

\Delta_{V_\pm} = N_f (1 - \Delta_Q) - N_c + 1, \qquad \Delta_M = 2\Delta_Q,

which leads to the unitarity bounds N_f(1 - ∆_Q) - N_c + 1 ≥ 1/2 and ∆_Q ≥ 1/4, which in the Veneziano limit simplify to 1/4 ≤ ∆_Q ≤ 1 - 1/x. Note this requires x ≥ 4/3, and we will find the crack before this value.
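Spelling out the Veneziano-limit arithmetic behind the last statement (our derivation, directly from the two bounds just quoted):

```latex
\[
N_f\,(1-\Delta_Q) - N_c + 1 \;\ge\; \tfrac12
\;\;\xrightarrow{\;N_c\to\infty,\ x = N_f/N_c\ \text{fixed}\;}\;\;
\Delta_Q \;\le\; 1 - \frac1x ,
\]
\[
\Delta_M = 2\Delta_Q \;\ge\; \tfrac12
\;\Longleftrightarrow\;
\Delta_Q \;\ge\; \tfrac14 ,
\qquad
\tfrac14 \;\le\; 1 - \frac1x
\;\Longleftrightarrow\;
x \;\ge\; \tfrac43 .
\]
```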
Partition Function
The partition function of the electric theory is given by (97), and that of the magnetic theory (for N_f > N_c) by (98). The convergence of these expressions gives a bound on the mixing parameter a. As explained in the main text for the SU(N_c) case, we can either derive this from the positivity of the dimension of the monopole operators, or from the large-σ asymptotics (85) of the integrand. The large-N_f and small-(x-1) expansions of the scaling dimensions of the matter quarks are given by the analogues of the expansions quoted in the main text.

Figure 10: The unitarity bound (indicating where M and V_± decouple) and the convergence bound for the 3d N = 2 U(N_c) SQCD with N_f flavors (N_f > N_c), plotted in terms of the mixing parameter a (see (93)). The correct IR value of a should be determined from F-maximization.

D SU(N_c) Dualities from U(N_c) Dualities

In this Appendix we derive the SU(N_c) dualities from the U(N_c) dualities. The basic argument is not really new, and is basically the same as in [24], except that here we work out the derivation at the level of the S³ partition function (as opposed to the 3d index in [24]).
Similar manipulations appear in the discussion of quiver gauge theories in section 6.
Let us begin with the partition functions of the U(N_c) theories, with all the real mass/FI parameters turned on in (97), (98) (this means that a is now complexified). When we denote the real mass parameters for the U(1)_J, SU(N_f)_L and SU(N_f)_R symmetries by ζ, µ_a, µ̃_a (a = 1, ..., N_f), we have the corresponding S³ partition functions. Now, to obtain the SU(N_c) duality all we need to do is apply the S-transformation (as defined in [61]) to the U(1)_J global symmetry. In other words, we add an off-diagonal Chern-Simons term (105) coupling a new gauge field A_new to the gauge field A_{U(1)_J}, and gauge A_{U(1)_J} for U(1)_J. As we will see momentarily, the new gauge field A_new will be identified with that of the U(1)_B symmetry of the magnetic theory; at the level of the S³ partition function this amounts to the Fourier transform with respect to ζ.
For the electric theory, we obtain the Fourier-transformed expression, where in the last line we shifted σ_i → σ_i + σ. We can check that this gives the charge assignment of the electric SU(N_c) theory, and in particular that this answer reproduces (12) when we take µ_a = µ̃_a = 0 and identify b = i b̃. Note also that in the Fourier transform we have included a factor of N_c; this was chosen such that the parameter b̃ after the Fourier transform can be directly identified with the real mass parameter for the U(1)_B symmetry.
We can add the same off-diagonal Chern-Simons term (105) to the magnetic theory, whose partition function is (107). However, this is not yet the magnetic theory discussed in the body of the text; we further need to use the duality for the N = 2, U(N_c = 1), N_f = 1 theory. The magnetic theory is given in (91), where M is now a 1 × 1 matrix (a number): W = V_+ V_- M. At the level of the partition function this gives the equality (108) (which holds up to an overall constant term), a specialization of the pentagon identity for the quantum dilogarithm. After applying (108), the expression (107) becomes the desired magnetic expression. Again, if we set µ_a = µ̃_a = 0 this coincides with the magnetic partition function (15) we wrote down in the main text, under the identification b = i b̃.
We discussed above the case of N_f > N_c; the case of N_f = N_c is similar and simpler, so we will not repeat it here.
E Numerical Tricks
The evaluation of our S³ partition function requires a multi-dimensional integral whose integrand oscillates relatively quickly. In some cases, we find it numerically advantageous to convert the multi-dimensional integral into a sum of products of one-dimensional integrals. Let us illustrate this for the case of U(N_c) SQCD; the same strategy works in a similar manner for USp(2N_c) and SO(N_c) SQCD.
The U(N_c) electric partition function (97) can then be rewritten as (up to an overall constant factor)

Z \propto \sum_{\sigma, \tilde\sigma \in S_{N_c}} (-1)^{\sigma} (-1)^{\tilde\sigma} \prod_{i=1}^{N_c} \int d\sigma_i \, e^{N_f\, l(1-a \pm i\sigma_i)} \, e^{2\pi (\rho_{\sigma(i)} + \rho_{\tilde\sigma(i)}) \sigma_i},

where we defined ρ_i := (N_c + 1)/2 - i, the components of the Weyl vector. The resulting expression (111) is now written in terms of N_c one-dimensional integrals.
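As a sanity check of this rewriting (ours; the toy values N_f = 3, a = 0.4 and all function names are our own choices, and the 2d quadrature may take a while), the following sketch compares the direct two-dimensional U(2) integral with the signed permutation sum of products of one-dimensional integrals:

```python
# Sanity check (ours) for U(2): the 2d matrix integral equals a signed double sum
# over permutations of products of 1d integrals, with rho = (1/2, -1/2) the Weyl vector.
import itertools
import mpmath as mp
mp.mp.dps = 15

Nf, a = 3, mp.mpf('0.4')
rho = [mp.mpf(1)/2, -mp.mpf(1)/2]

def ell(z):
    q = mp.e**(2j*mp.pi*z)
    return -z*mp.log(1-q) + 0.5j*(mp.pi*z**2 + mp.polylog(2, q)/mp.pi) - 1j*mp.pi/12

def flavor(s):                      # Nf fundamental + Nf antifundamental chirals
    return mp.e**(2*Nf*mp.re(ell(1 - a + 1j*s)))

def I(c):                           # one-dimensional building block
    return mp.quad(lambda s: flavor(s)*mp.e**(2*mp.pi*c*s), [-mp.inf, 0, mp.inf])

def sgn(p):                         # sign of a permutation of (0, 1)
    return 1 if p == (0, 1) else -1

Z_sum = mp.mpf(0)                   # permutation-sum form, divided by |Weyl| = 2
for w, wt in itertools.product(itertools.permutations((0, 1)), repeat=2):
    Z_sum += sgn(w)*sgn(wt)*I(rho[w[0]] + rho[wt[0]])*I(rho[w[1]] + rho[wt[1]])
Z_sum /= 2

Z_direct = mp.quad(lambda s1, s2:   # direct 2d integral, for comparison
    (2*mp.sinh(mp.pi*(s1 - s2)))**2 * flavor(s1)*flavor(s2),
    [-mp.inf, 0, mp.inf], [-mp.inf, 0, mp.inf]) / 2

print(Z_sum, Z_direct)              # the two agree up to numerical error
```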
Identification of transcription factors and single nucleotide polymorphisms of Lrh1 and its homologous genes in Lrh1-knockout pancreas of mice
Background To identify transcription factors (TFs) and single nucleotide polymorphisms (SNPs) of Lrh1 (also named Nr5a2) and its homologous genes in the Lrh1-knockout pancreas of mice. Methods The RNA-Seq data GSE34030 were downloaded from the Gene Expression Omnibus (GEO) database, including 2 Lrh1 pancreas knockout samples and 2 wild type samples. All reads were processed through the TopHat and Cufflinks packages to calculate gene-expression levels. Then, the differentially expressed genes (DEGs) were identified via the non-parametric algorithm (NOISeq) method in an R package, among which the homology genes of Lrh1 were identified via BLASTN analysis. Furthermore, the TFs of Lrh1 and its homologous genes were selected based on the TRANSFAC database. Additionally, the SNPs were analyzed via SAMtools to record the locations of mutant sites. Results A total of 15683 DEGs were identified, of which 23 were Lrh1 homology genes (3 up-regulated and 20 down-regulated). Fetoprotein TF (FTF) was the only TF of Lrh1 identified, and the promoter-binding factor of FTF was CYP7A. The SNP annotations of the Lrh1 homologous genes showed that 92% of the mutation sites occurred in introns and upstream regions. Three SNPs of Lrh1 were located in introns, while 1819 SNPs of Phkb were located in introns and 1343 SNPs were located in the upstream region. Conclusion FTF combined with CYP7A might play an important role in the Lrh1-regulated pancreas-specific transcriptional network. Furthermore, the SNP analysis of Lrh1 and its homology genes provided candidate mutant sites that might affect the Lrh1-related production and secretion of pancreatic fluid.
Background
The pancreas is an endocrine gland, producing insulin, glucagon, somatostatin, and pancreatic polypeptide, and also an exocrine gland, accounting for more than 98% of the pancreatic gland and secreting pancreatic juice containing digestive enzymes [1]. These digestive enzymes help to further break down the carbohydrates, proteins and lipids in the chyme and thus support the absorption and digestion of nutrients in the small intestine [2]. In the past decades, much research has focused on target genes and transcription factors (TFs) involved in the exocrine pancreas-specific transcriptional networks which are required for the production and secretion of the pancreatic fluid that supports the digestive system. Currently, many exocrine pancreas-specific genes and transcription factors have been identified, which may promote the understanding of the effect of the exocrine pancreas on the digestive system. Liver receptor homolog-1 (Lrh1; also called Nr5a2) is a nuclear receptor of the ligand-activated transcription factor family in the liver, binding as a monomer to DNA sequence elements with the consensus sequence 5′-Py-CAAGGPyCPu-3′ [3]. It has been suggested that Lrh1 is progressively expressed in both the endocrine and exocrine pancreas [4]. Baquié M et al. [5] have found that Lrh1 is expressed in human islets and protects β-cells against stress-induced apoptosis, an effect that may be mediated via increased glucocorticoid production that blunts the pro-inflammatory response of islets. Meanwhile, Fayard E et al. [6] have demonstrated that both Lrh1 and CEL (encoding carboxyl ester lipase) are co-expressed and confined to the exocrine pancreas. The identification of CEL as an Lrh1-target gene indicates that Lrh1 plays an important role in enterohepatic cholesterol homeostasis associated with the absorption of cholesteryl esters and the assembly of lipoproteins by the intestine [7]. Besides, Lrh1 is a downstream target in the regulatory cascade of PDX-1 (whose loss leads to pancreas agenesis), which is activated only during early stages of pancreas development and governs pancreatic development, differentiation and function [8].
Recently, the rapid advent of next-generation sequencing has made this technology broadly available for researchers in various molecular and cellular biological fields. Holmstrom SR et al. [9] have determined the cistrome and transcriptome for the nuclear receptor LRH-1 in the exocrine pancreas and revealed that Lrh1 directly induces the expression of genes encoding digestive enzymes and secretory and mitochondrial proteins, based on chromatin immunoprecipitation sequencing (ChIP-seq) and RNA-seq analyses. Besides, Lrh1 cooperates with the pancreas transcription factor 1-L complex (PTF1-L) in the regulation of exocrine pancreas-specific gene expression. However, many potential target genes and TFs of Lrh1 detectable by RNA-seq analysis have not yet been revealed.
In the present study, we downloaded the raw RNA-seq data of Holmstrom SR et al. deposited in the National Center for Biotechnology Information (NCBI) database and analyzed them using multiple bioinformatics tools, with the purpose of finding specific TFs of Lrh1 and its homology genes. Additionally, we annotated the SNPs of Lrh1 and its homology genes to predict their mutant sites. Our study might improve the understanding of the regulatory network of the Lrh1-related production and secretion of pancreatic fluid.
RNA-seq data acquisition
The RNA-seq data was downloaded from NCBI (http://www.ncbi.nlm.nih.gov/) Gene Expression Omnibus (GEO) database (GEO accession: GSE34030 [9]), including 2 Lrh1 pancreas knockout samples and 2 wild type samples. RNA preparations were subjected to the Illumina RNA-seq protocol and the platform was GPL9185.
Data pre-processing, gene expression and homology genes of Lrh1

The raw data were downloaded from the SRA (Sequence Read Archive) of NCBI and then converted to fastq reads using the fastq-dump program of the NCBI SRA Toolkit (−q 64) (http://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=std). These reads were then processed through the TopHat [10] and Cufflinks [11] packages to calculate gene-expression levels. All parameters were set according to the default settings of TopHat and Cufflinks. The DEGs were identified via the non-parametric algorithm (NOISeq) method in an R package [12]; the threshold was a False Discovery Rate (FDR) < 0.001. BLASTN analysis [13,14] of the selected DEGs was used to identify the homology genes of Lrh1. Homology genes here refer to paralogous genes which share a high degree of sequence similarity (maximum expectation value of 1e-5) with Lrh1 in mice.
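As a minimal sketch of the two filtering steps just described (ours; the file names and column labels "FDR", "evalue" and "gene" are hypothetical placeholders for the NOISeq output and a tabular BLASTN report), with the thresholds stated above:

```python
# Sketch (ours) of the DEG and homology-gene filters; file/column names are placeholders.
import pandas as pd

noiseq = pd.read_csv("noiseq_results.tsv", sep="\t")
degs = noiseq[noiseq["FDR"] < 0.001]              # DEG threshold: FDR < 0.001

blast = pd.read_csv("blastn_vs_Lrh1.tsv", sep="\t")
homologs = blast[blast["evalue"] <= 1e-5]         # BLASTN cutoff: E-value <= 1e-5

print(len(degs), "DEGs;", homologs["gene"].nunique(), "candidate Lrh1 homology genes")
```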
Function annotation of Lrh1 homologous genes
For functional analysis of the Lrh1 homologous genes, DAVID (Database for Annotation, Visualization and Integrated Discovery) [15] was used for Gene Ontology (GO) [16] function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis.
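For orientation, the statistical core of such enrichment analyses is an over-representation test; DAVID's EASE score is a conservative variant of the one-sided Fisher's exact test sketched below (the counts are toy values, not the study's data):

```python
# Toy over-representation test (ours): one-sided Fisher's exact test for a GO term.
from scipy.stats import fisher_exact

deg_with_term, deg_total = 5, 23          # term hits among the 23 homology genes (toy)
bg_with_term, bg_total = 300, 20000       # term hits in the genome background (toy)
table = [[deg_with_term, deg_total - deg_with_term],
         [bg_with_term - deg_with_term,
          (bg_total - deg_total) - (bg_with_term - deg_with_term)]]
odds, p = fisher_exact(table, alternative="greater")
print("enrichment p-value:", p)
```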
Transcription factor (TF) of Lrh1 homologous genes
Combined with the TRANSFAC database [17], the TFs regulating the transcription of Lrh1 and its homologous genes were identified. Then, the promoter-binding factors regulated via the selected TFs were analyzed based on the NURSA website (http://www.nursa.org/molecule.cfm?molType=receptor&molId=5A2).
Screening of SNPs
The fastq reads were mapped to marker sequences using Bowtie [18], and the aligned reads were called using SAMtools [19]. In order to minimize the risk of false-positive SNP calls, a call was retained if its ID was "*" and its quality exceeded 50, or if its ID was not "*" and its quality exceeded 20. These SNPs were annotated via SnpEff [20] to categorize the effects of the variants in genome sequences. The identified SNPs were searched in the dbSNP database to distinguish known disease-associated SNPs from de novo discovered SNPs.
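The quality filter just described is simple enough to state as code; the following sketch (ours) applies it to toy (ID, quality) pairs rather than to a real SAMtools call stream:

```python
# Sketch (ours) of the SNP quality filter: keep a call with quality > 50 when the
# ID field is "*", otherwise require quality > 20.
def keep_snp(snp_id: str, quality: float) -> bool:
    return quality > 50 if snp_id == "*" else quality > 20

calls = [("*", 63.0), ("*", 41.0), ("rs12345", 27.5), ("rs67890", 12.0)]  # toy data
kept = [c for c in calls if keep_snp(*c)]
print(kept)   # [('*', 63.0), ('rs12345', 27.5)]
```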
Identification and homology analysis of differentially expressed genes
After data processing, at FDR < 0.001, a total of 15683 DEGs were identified, including 10994 up-regulated and 4698 down-regulated genes. BLASTN analysis of the DEGs revealed 23 Lrh1 homology genes; among them, 3 were up-regulated and 20 were down-regulated (Table 1).
Function and pathway annotation of Lrh1 homologous genes
To determine the function of the Lrh1 homologous genes in the pancreas, GO enrichment analysis and KEGG pathway enrichment analysis were applied to the up- and down-regulated Lrh1 homologous genes. The DEGs were enriched in the hexose metabolic process and monosaccharide metabolic process terms, which are involved in glycometabolism (Figure 1). Meanwhile, KEGG pathway enrichment analysis identified the insulin signaling pathway, indicating that disorders of glycometabolism might result from insulin resistance and/or impaired insulin secretion (Figure 2). PHKB, an Lrh1 homologous gene participating in these GO terms (hexose metabolic process and monosaccharide metabolic process) and in the KEGG insulin signaling pathway, was thereby identified.
Potential TFs of Lrh1 homologous genes
Fetoprotein transcription factor (FTF) (ID: T04754) was the only TF of Lrh1 identified based on the TRANSFAC database.
SNPs of Lrh1 homologous genes
The annotation of SNPs of the Lrh1 homologous genes showed that the majority of SNPs were located in introns and upstream regions, accounting for nearly 92% of all SNPs (Tables 2 and 3). Three SNPs of Lrh1 were located in introns. Meanwhile, a total of 1819 SNPs of Phkb were located in introns and 1343 SNPs were located in the upstream region of Phkb.
Discussion
In the present study, based on the RNA-seq data of Lrh1-knockout pancreas samples, FTF was the only TF of Lrh1 identified in the TRANSFAC database, and it may regulate cholesterol catabolism into bile acids through activation of the promoter-binding factor CYP7A. Many experimental studies have elucidated the functions of Lrh1/Nr5a2/FTF/CYP7A [21-25].
FTF is highly expressed in the liver and intestine and is implicated in the regulation of cholesterol, bile acid and steroid hormone homeostasis [26]. Nearly 50% of the body's cholesterol is catabolized to bile acids via the bile acid biosynthetic pathway, of which cholic acid (hydroxylated at position 12) and chenodeoxycholic acid are the major primary bile acids and play an important role in cholesterol homeostasis [19]. Chenodeoxycholic acid can repress FTF expression and is a more potent suppressor of HMG-CoA reductase and cholesterol 7α-hydroxylase/CYP7A1 (7α-hydroxylase) than cholic acid [27]. It has been proposed that Lrh1, also known as CYP7A promoter-binding factor or FTF, is required for the transcription of the 7α-hydroxylase gene [19,28]. The small heterodimer partner 1 (SHP) of the nuclear bile acid receptor FXR (farnesoid X receptor) can dimerize with FTF and diminish its activity on the 7α-hydroxylase promoter [29]. Although Lrh1 has been shown to function in the feedback regulation of CYP7A1 expression as part of the FXR-SHP-LRH-1 cascade, in which bile acids inhibit their own synthesis, the mechanisms are not well understood. Out C et al. [25] have suggested that CYP7A1 expression is increased rather than decreased under chow-fed conditions in Lrh1-knockdown mice, which coincides with a significant reduction in the expression of intestinal Fgf15, a suppressor of CYP7A1. Besides, Noshiro M et al. [30] have suggested that the circadian rhythm of CYP7A is regulated by multiple transcription factors, including DBP, REV-ERBα/β, LXRα, HNF4α, DEC2, E4BP4 and PPARα. Hepatocyte nuclear factor 4α (HNF4α) and FTF are two major TFs driving CYP7A1 promoter activity in lipid homeostasis. Bochkis IM et al. [31] have shown that prospero-related homeobox (Prox1) directly interacts with both HNF4α and FTF and potently co-represses CYP7A1 transcription.
In the present study, we annotated the SNPs of Lrh1 and its homologous genes, showing that the majority were located in introns and upstream regions. Quiles Romagosa MÁ [32] has reported that a functional SNP located in the Lrh1 promoter is related to Body Mass Index (BMI), and such SNPs might play important roles in the obese phenotype. However, previous research has mostly focused on SNPs associated with pancreatic cancer cell growth and proliferation. For example, a previous genome-wide association study identified five SNPs on 1q32.1 associated with pancreatic cancer that mapped to the Lrh1 gene and its upstream regulatory region [33].
Conclusions
In conclusion, FTF combined with CYP7A might play an important role in the Lrh1-regulated pancreas-specific transcriptional network. Furthermore, the SNP analysis of Lrh1 and its homology genes provided candidate mutant sites that might affect the Lrh1-related production and secretion of pancreatic fluid. These common susceptibility loci for Lrh1 and its homologous genes need follow-up studies.
Highlights
1. A total of 15683 DEGs were identified, of which 23 were Lrh1 homology genes (3 up-regulated and 20 down-regulated).
2. Fetoprotein TF was the only TF of Lrh1 identified based on the TRANSFAC database, and the promoter-binding factor of fetoprotein TF was CYP7A.
3. The SNP annotations of the Lrh1 homologous genes showed that 92% of the mutation sites occurred in introns and upstream regions. Three SNPs of Lrh1 were located in introns, while 1819 SNPs of Phkb were located in introns and 1343 SNPs were located in the upstream region.
Novel applications of therapeutic hypothermia: report of three cases
Therapeutic hypothermia can provide neuroprotection in various situations where global or focal neurological injury has occurred. Hypothermia has been shown to be effective in a large number of animal experiments. In clinical trials, hypothermia has been used in patients with postanoxic injury following cardiopulmonary resuscitation, in traumatic brain injury with high intracranial pressure, in the perioperative setting during various surgical procedures and for various other indications. There is thus evidence that hypothermia can be effective in various situations of neurological injury, although a number of questions remain unanswered. We describe three patients with unusual causes of neurological injury, whose clinical situation was in fundamental aspects analogous to conditions where hypothermia has been shown to be effective.
Hypothermia is also used during various surgical procedures, including major vascular surgery [1,18-20]. Cooling is thought to provide spinal cord protection as well as overall neuroprotection in the latter category of patients. In the present article we describe three exceptional cases of neurological injury. Although each of these three patients had a rare and unusual cause of injury, their clinical situations were nevertheless in many aspects similar to those where therapeutic hypothermia has been shown to be, or is thought to be, effective. We therefore decided to treat these patients with artificial cooling to prevent postischaemic neurological injury.

Patient A, a 49-year-old man with no significant medical history, was admitted to another hospital after being stabbed in his neck. He was transferred to our centre for emergency surgery, which was started nearly 1 hour after the incident. Surgical exploration showed a dissection of the left internal carotid artery and a complete transection of the left internal jugular vein and vagal nerve. Haemostasis and anastomosis of the artery were achieved by saphenous vein interposition. During the surgical procedure, however, the patient developed dilation of his left pupil. A postoperative computerized tomography (CT) scan revealed a lesion in the left parietal region, suspect for postischaemic injury and developing infarction. The patient was subsequently admitted to the intensive care unit. Artificial cooling (32-34°C) was immediately started and continued for 24 hours.
In the following days the patient's overall condition improved and he was extubated. Eight days after the first scan, a second CT scan was performed; the lesion observed on the first CT scan was still present, but had not increased in diameter. Two smaller lesions, suspect for small infarctions, were also seen. Ten days after admission, the patient left the hospital in good clinical condition. On clinical investigation there was no evidence of neurological impairment, and no signs of hemiparesis were present. Patient B, a 57-year-old man with a history of hypertension, noninsulin-dependent diabetes mellitus, tonsillectomy and percutaneous transluminal coronary angioplasty, was admitted after elective thoraco-abdominal aneurysm repair for a type Crawford II aneurysm. A thoraco-phrenico laparotomy was performed and a prosthesis was implanted from the left subclavian artery to the aorta-iliacal bifurcation, with implantation of the abdominal and renal arteries in the prosthesis. Perioperative neurological controls including evoked potential measurements to monitor for spinal ischaemia showed no abnormalities. For distal perfusion, a left-left heart bypass was used combined with relative organ perfusion. In addition, spinal cord drainage was performed to keep the spinal fluid pressure ≤ 10 mmHg. Because the evoked potentials remained unchanged during the surgical procedure, no lumbar and/or intercostal arteries were re-implanted.
Following surgery, the patient was admitted to the intensive care unit. One day after surgery sedation was stopped, the patient woke up and was able to move both legs. The following day, however, the patient suddenly became restless and required sedation. It was noticed during his agitation that he had not moved his legs at all. On neurological examination he was unable to move his legs, and his leg reflexes had disappeared. A delayed onset paraplegia was diagnosed. Artificial cooling (32°C) was immediately started and emergency surgery was prepared. Surgery took place 1 hour following the diagnosis, and two intercostal arteries were successfully implanted in the prosthesis. The procedure took about 2 hours. Hypothermia was maintained during surgery and for 24 hours following surgery. Bleeding was not excessive in spite of the fact that hypothermia was maintained during surgery.
Postoperatively, the patient developed transient respiratory failure and renal failure. The overall course was favourable, however, and after 10 days it was possible to extubate the patient. The spinal reflexes returned 24 hours after surgery, and when the patient awoke he was able to move both legs. His strength returned slowly, and 23 days after his admission the patient was transferred to the ward, where he recovered without any complications. The patient left the hospital walking normally.
Patient C, a 55-year-old construction worker, fell asleep in the space between two large concrete building blocks with a height of 25 cm. His coworkers had not noticed this, and a crane put another block on top of the two blocks with the worker in between. The air was pressed out of his lungs and he could not shout or even breathe due to the thoracic compression. After about 15 min his coworkers noticed what had happened and removed the building block, finding their colleague blue and pulseless. CPR was started, and an ambulance arrived within 2 min (the building site was adjacent to our hospital). Upon admission to our intensive care unit, the heart rhythm had been restored. The patient was comatose with a Glasgow Coma Scale score of 3 and had a dilated pupil on the right side, with no other abnormalities. A cerebral CT scan revealed no abnormalities. A diagnosis of asphyxia and postanoxic encephalopathy was established, and artificial cooling (32°C) was started immediately and maintained for 24 hours.
Two days after admission, the patient was extubated and the following day he was transferred to the ward. His neurological condition steadily improved, and after 6 days he carried out verbal commands. During his stay in the ward, neurological screening revealed loss of strength of both shoulders and arms and sensibility loss of both arms. This was attributed to compression of the shoulders by the concrete blocks. Eight days after admission he was discharged. Neurological evaluation in an outpatient setting revealed proximal bilateral brachial plexus-paresis as a result of the compression trauma. This condition also steadily improved, and at this time the patient's condition is virtually normal in all aspects.
Discussion
The three cases described are exceptional clinical situations in which the effectiveness of artificial cooling will never be tested in a randomized clinical trial. This means that case reports describing the use of therapeutic hypothermia and demonstrating therapeutic results and outcome will probably be the only evidence available, apart from observations in similar clinical situations. In our opinion the clinical condition of these patients, although rare and unique, was in many aspects analogous to, and closely resembled, situations where therapeutic hypothermia has been shown to be effective.
The situation of patient A is comparable with the clinical situation of stroke. There was a localized ischaemic period resulting from the dissection of a cerebral blood vessel. In this situation a central core region, closest to the occluded vessel, becomes necrotic whereas the area around this region, referred to as the penumbra, is hypoperfused and at risk of dying but can still be saved. With time the penumbra becomes core (and thus becomes necrotic) whereas the injured region expands outward and into the surrounding area, which then becomes penumbra. Outward expansion of the penumbra can be prevented by restoring flow; the penumbra can be salvaged with increased perfusion and/or other early interventions. Patient A was at risk for focal brain ischaemia because of the partial absence of brain perfusion as a result of a stab wound in his neck, damaging his internal carotid artery. Several studies have suggested that selected patients with stroke, particularly those with medial cerebral artery occlusion, might benefit from therapeutic hypothermia [9]. As our patient had been stabbed 1 hour before admission we immediately started cooling while preparing for emergency surgery. The aim was to mitigate postischaemic neurological injury.
Patient B had a delayed-onset paraplegia after an elective thoracic-abdominal aneurysm repair. Since there is evidence suggesting that intraoperative hypothermia in neurosurgical procedures and major vascular surgery provides spinal cord protection (in addition to its overall neuroprotective effects) [1,16,[18][19][20][21], we decided to induce hypothermia during surgery and for 24 hours following surgery in this patient. Reflexes had been absent for at least 1 hour before surgery was started and it took another hour before blood flow to the spinal cord could be restored. Our hope was that there would still be some perfusion of the spinal cord during this period, and that induced hypothermia could provide enough neuroprotection to save the spinal cord in this phase. Fortunately, the patient indeed recovered and, although there was a prolonged phase of weakness of the legs, his motor functions were fully restored.
The situation of patient C was comparable with the global ischaemia occurring after a cardiac arrest with return of spontaneous circulation. Favourable effects of cooling have been most clearly demonstrated in specific categories of patients with postanoxic injury following CPR [1-4], although of course the cause of injury in this patient was not related to cardiac disease. In this sense his situation was more favourable because he did not have cardiac disease; however, his period of anoxia was relatively long, because it took 15 min before he was found by his coworkers and CPR was started. We determined that this (neurological) situation was comparable with that following CPR for cardiac arrest caused by arrhythmias, and decided to treat this patient with artificial cooling to prevent postischaemic neurological injury. The patient recovered with virtually no evidence of neurological impairment.
Although we cannot prove that hypothermia was the cause of the favourable outcome in these three patients, the extent of neurological injury was much less severe than had been expected on the basis of their initial injury and the duration of hypoxia/ischaemia. Hypothermia can be neuroprotective even after some delay because many of the destructive processes occurring after ischaemia take place over a period of hours, or even days, after injury [1]. The protective mechanisms of hypothermia include decreases in oxygen and glucose metabolism [22,23], suppression of ischaemia-induced inflammatory reactions [24,25], prevention of reperfusion-related injury [26], improvement of ion homeostasis [24-26] and blocking of free radical production [27,28].
Hypothermia has been shown to be effective in clinical situations analogous to those in our patients. In our opinion, these cases illustrate that hypothermia should at least be considered as a therapeutic option in cases were posthypoxic injury of the brain or spinal cord has occurred, provided the cause of this problem has been removed or can be quickly treated.
In conclusion, these cases suggest that doctors treating patients with various types of postischaemic neurological injury should consider the use of induced hypothermia for neuroprotection. Moreover, in this way additional time may be gained to carry out emergency surgical procedures or other interventions to restore blood flow to the endangered parts of the central nervous system.
Comparative outcomes of obese and non-obese patients with lumbar disc herniation receiving full endoscopic transforaminal discectomy: a systematic review and meta-analysis
Objective This study aimed to assess the impact of full endoscopic transforaminal discectomy (FETD) on clinical outcomes and complications in obese and non-obese patients presenting with lumbar disc herniation (LDH). Methods A systematic search of the relevant literature was conducted across primary databases up to November 18, 2023. Operative time and hospitalization were evaluated. Clinical outcomes included preoperative and postoperative assessments of the Oswestry Disability Index (ODI) and visual analogue scale (VAS) scores, used to quantify improvement at 3 months postoperatively and at the final follow-up, respectively. Complications were also documented. Results Four retrospective studies meeting the inclusion criteria provided a collective cohort of 258 patients. Obese patients undergoing FETD experienced significantly longer operative times than non-obese counterparts (P = 0.0003). Conversely, no statistically significant differences (P > 0.05) were observed in hospitalization duration or in the improvement of VAS back and leg pain scores and ODI at 3 months postoperatively and at the final follow-up. Furthermore, the overall rate of postoperative complications was higher in the obese group (P = 0.02): the obese group demonstrated a total complication incidence of 17.17%, notably higher than the 9.43% observed in the non-obese group. Conclusion The utilization of FETD for managing LDH in individuals with obesity is associated with prolonged operative times and a higher total complication rate compared to their non-obese counterparts. Nevertheless, it remains a safe and effective surgical intervention for treating herniated lumbar discs in the context of obesity. Supplementary Information The online version contains supplementary material available at 10.1186/s12891-024-07455-5.
Introduction
Lumbar disc herniation (LDH) is a common spinal disorder that usually results in pain and dysfunction [1]. Among the various surgical approaches available, full endoscopic transforaminal discectomy (FETD) has gained popularity as a minimally invasive technique that offers potential advantages such as reduced tissue trauma and faster recovery [2,3].
Obesity, characterized by excessive accumulation of adipose tissue, has been recognized as a significant factor affecting the natural history and treatment outcomes of various musculoskeletal conditions [4,5]. Given the intricate anatomical considerations in the lumbar spine and the potential implications of increased adiposity for surgical access and healing processes, understanding the interaction between obesity and the results of FETD is crucial to optimizing patient care [6,7].
While individual studies have investigated the association between obesity and FETD outcomes [8][9][10], a comprehensive evaluation of the current evidence is warranted. Systematic reviews and meta-analyses offer a robust approach to synthesizing existing knowledge and identifying key trends. This systematic review aims to critically appraise the relevant literature comparing clinical outcomes of FETD in obese and non-obese patients. Specifically, we seek to elucidate the impact of obesity on pain relief, functional improvement, and complication rates following FETD. By examining the collective evidence, we strive to identify potential outcome differences that can inform clinical decision-making and ultimately improve patient care.
Study strategy
A systematic and comprehensive search was conducted across prominent scholarly databases, including PubMed, Embase, Scopus, Web of Science, China's National Knowledge Internet (CNKI), and Wanfang Data, adhering meticulously to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [11,12]. The search was executed on November 18, 2023, employing a set of keywords including "lumbar disc herniation", "endoscopic", "transforaminal", and "obesity", ensuring a comprehensive and focused exploration of the existing literature landscape.
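The exact per-database query strings were not reported; the following is a hypothetical reconstruction of the kind of Boolean search described, shown in PubMed-style syntax (equivalent syntax would be adapted for Embase, Scopus, Web of Science, CNKI, and Wanfang Data):

("lumbar disc herniation" OR "lumbar disk herniation") AND (endoscopic OR "full endoscopic") AND transforaminal AND (obesity OR obese OR "body mass index")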
To enhance the comprehensiveness of the search strategy, a secondary examination of the references cited in the selected articles was carried out, further fortifying the breadth and depth of the literature review.
Inclusion and exclusion criteria
Inclusion criteria
(1) Original research articles with a quantitative research design, including randomized controlled trials (RCTs), cohort studies, and case-control studies. (2) Patients diagnosed with lumbar disc herniation who underwent FETD, in studies explicitly comparing clinical outcomes between obese and non-obese individuals following FETD. (3) Studies reporting on at least one of the following outcomes: pain relief, functional improvement, or complication rates. (4) Articles published in English or Chinese.
Exclusion criteria
(2) Studies that included patients who had undergone a full endoscopic interlaminar discectomy or biportal endoscopic surgery. (3) Studies lacking a direct and clear comparison between obese and non-obese cohorts following FETD. (4) Studies without relevant and specific data on pain relief, functional improvement, or complication rates related to FETD.
Data extraction
Two authors were assigned the responsibility of screening all articles retrieved by the systematic search. In instances where conflicts occurred during the screening process, the other coauthors were consulted, and discrepancies were resolved through collaborative discussion, ensuring a consensus reflecting the collective expertise of the research team [13]. The selection process involved a meticulous evaluation of the titles and abstracts to discern their relevance to the specific parameters of our study. In cases where ambiguity persisted or the information provided in the titles and abstracts proved insufficient, a comprehensive examination of the full-text articles was carried out. This rigorous approach aimed to determine the eligibility of studies based on the predetermined inclusion and exclusion criteria.
The extracted data were classified into two discrete sections, each serving as a distinct focal point for subsequent analyses. The first section encompassed fundamental details related to the baseline characteristics of the included studies, including the author's name, year of publication, journal name, study design, gender distribution, sample size, and mean age of the patient cohort. The second section covered the key clinical outcomes, including the duration of surgery, hospitalization period, complication rates, Visual Analog Scale (VAS) scores, Oswestry Disability Index (ODI) scores, and MacNab results. In particular, complications were subcategorized into immediate and late postoperative occurrences. Furthermore, the clinical indicators represented by VAS and ODI scores were used to quantify improvement at 3 months postoperatively and at the final follow-up, respectively.
Quality assessment and publication bias
The methodological rigor of the studies incorporated into this meta-analysis underwent a comprehensive evaluation utilizing established tools, notably the Newcastle-Ottawa Scale (NOS) for non-randomized studies [14]. The NOS facilitated a systematic evaluation of study quality by assessing key parameters, including selection, comparability, and outcome, with studies achieving or exceeding a predetermined threshold of five "stars" deemed to meet high-quality criteria based on the specified rating criteria.
To further enhance the robustness of the synthesized evidence, this meta-analysis employed the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) method [15]. The GRADE methodology systematically evaluated the credibility of the evidence derived from the pooled results. This evaluation considered several factors, including the risk of publication bias, the precision of the results, and the magnitude of the treatment effects. The resulting quality of evidence was then stratified into four hierarchical grades: high, moderate, low, and very low.
Statistical analysis
Statistical meta-analyses were executed using the Review Manager 5.3 software, employing rigorous analytical methods to synthesize the available evidence. For continuous data, weighted mean differences (WMD) were computed with 95% confidence intervals (CI). Dichotomous outcomes were represented as odds ratios (OR) along with their corresponding 95% CI. The I² statistic was employed to quantify the extent of heterogeneity, with a threshold of I² ≥ 50% indicative of substantial heterogeneity. In instances where there was no discernible statistical heterogeneity (P > 0.1, I² < 50%), a fixed-effects model was applied for pooling. Conversely, in the presence of significant heterogeneity (P < 0.1, I² ≥ 50%), a random-effects model was employed. The criterion for statistical significance was established at P < 0.05.
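Review Manager performs this pooling internally; for illustration only, a minimal Python sketch of the inverse-variance computation described above (fixed-effect weights, Cochran's Q, I², and a DerSimonian-Laird random-effects fallback) is given below. The inputs are hypothetical per-study values, not the actual study data.

import math

def pool_wmd(effects, ses):
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / se ** 2 for se in ses]
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic (in %)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # DerSimonian-Laird estimate of between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Choose the model by the I^2 >= 50% rule described above
    weights = [1.0 / (se ** 2 + tau2) for se in ses] if i2 >= 50.0 else w
    pooled = sum(wi * y for wi, y in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# Hypothetical mean differences in operative time (minutes) and their SEs:
print(pool_wmd([4.2, 3.1, 5.0], [1.5, 2.0, 1.8]))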
Furthermore, to examine the presence of publication bias, funnel plots were incorporated into the analysis. These plots served as visual aids to detect asymmetry, providing information on the potential influence of publication bias on the observed results.
Search results and study characteristics
Following a systematic and exhaustive literature search, four studies were identified that met the predetermined inclusion criteria, as shown in Fig. 1. All four included studies were retrospective in design [16][17][18][19].
The collective study cohort comprised a total of 258 patients, with 99 individuals in the obese group and 159 in the non-obese group (Table 1). Among the selected studies, two originated in China, one in Korea, and one in Greece. Within this amalgamated cohort, the treatment level that most frequently underwent FETD was L4-5. In three studies, individuals with a body mass index (BMI) ≥ 30 were categorized as obese, whereas the remaining study used a more stringent criterion and categorized individuals with a BMI ≥ 40 as obese.
Perioperative measurements
Mean operative time (mins)
Incorporating data from three studies and a collective subject pool of 228 participants, an analysis of mean operative time was conducted. The findings revealed a statistically significant difference in surgical duration between obese and non-obese patients. Specifically, obese patients exhibited a prolonged operative time compared to their non-obese counterparts (P = 0.0003, WMD: 3.90; 95% CI: 1.78 to 6.02, Fig. 2A).
Hospital length of stay (days)
Length of hospitalization was reported for 228 subjects across three studies. The analysis showed no difference in the length of hospitalization after FETD between obese and non-obese patients (P = 0.76, WMD: 0.05; 95% CI: -0.30 to 0.40, Fig. 2B).
Improvement of VAS
Three studies reported VAS for back pain scores in 210 patients preoperatively and at 3 months postoperatively. The analyses did not show statistically significant differences in the improvement of VAS for back pain at 3 months after FETD in obese patients compared to non-obese patients (P = 0.93, WMD: -0.02; 95% CI: -0.41 to 0.38, Fig. 3A).
A comprehensive analysis incorporating data from four studies, involving a total of 258 patients, was conducted to evaluate VAS for back pain before surgery and at the final follow-up. The results of this meta-analysis indicated that the improvement in back pain VAS scores at the final follow-up after FETD did not show statistically significant differences between obese and non-obese patients (P = 0.78, WMD: 0.05; 95% CI: -0.32 to 0.42, Fig. 3B).
Three studies reported VAS for leg pain scores in 210 patients preoperatively and at 3 months postoperatively. The analyses showed no statistically significant difference in the improvement of VAS for leg pain at 3 months after FETD in obese patients compared with non-obese patients (P = 0.82, WMD: -0.05; 95% CI: -0.49 to 0.39, Fig. 3C).
A comprehensive analysis incorporating data from four studies, involving a total of 258 patients, was conducted to evaluate VAS for leg pain before surgery and at the final follow-up. The results of this meta-analysis indicated that the improvement in leg pain VAS scores at the final follow-up after FETD did not exhibit statistically significant differences between obese and non-obese patients (P = 0.60, WMD: -0.11; 95% CI: -0.51 to 0.30, Fig. 3D).
Changes in ODI
Two studies reported ODI scores in 180 patients preoperatively and at 3 months postoperatively. The analyses did not show statistically significant differences in the improvement in ODI at 3 months after FETD in obese patients compared to non-obese patients (P = 0.69, WMD: -0.65; 95% CI: -3.78 to 2.49, Fig. 4A).
A comprehensive analysis was performed that included data from three studies, involving a total of 228 patients, to evaluate ODI preoperatively and at the final follow-up. The results of this meta-analysis indicated that the improvement of ODI at the final follow-up after FETD did not exhibit statistically significant differences between obese and non-obese patients (P = 1.00, WMD: -0.01; 95% CI: -2.97 to 2.96, Fig. 4B).
Satisfaction
Surgical satisfaction, assessed through the modified MacNab criteria, was evaluated in two studies encompassing a cohort of 180 patients. The meta-analysis of these results indicated no statistically significant difference in final satisfaction between obese and non-obese patients following FETD (P = 0.22, OR: 0.50; 95% CI: 0.17 to 1.52, Fig. 4C).
Complications
Data pertaining to postoperative complications from four studies, encompassing a collective cohort of 258 patients, were systematically analyzed. The results of this meta-analysis revealed that obese patients exhibited a higher incidence of total complications after FETD compared to their non-obese counterparts (P = 0.02, OR: 2.68; 95% CI: 1.21 to 5.93, Fig. 5A). The incidence of total complications within the obese group was 17.17%, while the non-obese group exhibited a lower rate of 9.43%. Data pertaining to immediate complications were derived from three studies, encompassing a cohort of 210 patients, and subjected to systematic analysis. The rate of immediate complications did not differ statistically between non-obese and obese patients undergoing FETD (P = 0.11, OR: 3.81; 95% CI: 0.72 to 20.02, Fig. 5B). However, the incidence of immediate complications within the obese group was 6.41%, while the non-obese group exhibited a lower rate of 0.76%.
Data pertaining to late complications were derived from three studies, encompassing a cohort of 228 patients, and subjected to systematic analysis. The rate of late complications did not differ statistically between non-obese and obese patients undergoing FETD (P = 0.06, OR: 2.26; 95% CI: 0.96 to 5.35, Fig. 5C). Nevertheless, the incidence of late complications within the obese group was 15.19%, while the non-obese group exhibited a lower rate of 9.39%.
Furthermore, an examination of postoperative recurrences was conducted using data from three studies comprising a total of 228 patients. The meta-analysis revealed that the recurrence rate after FETD was comparable between non-obese and obese patients, with no statistically significant difference observed (P = 0.06, OR: 3.84; 95% CI: 0.95 to 15.48, Fig. 5D). Interestingly, the incidence of recurrence within the obese group was 7.59%, while the non-obese group exhibited a lower rate of 1.34%.
Others
The study conducted by Bae et al. [16] found that the quantity of disc material removed during the FETD procedure was 0.9 cc (range 0.5-2 cc) in obese patients, compared with a slightly higher 1.4 cc (within a similar 0.5-2 cc range) in non-obese patients.
The study conducted by Zhu et al. [19] reported that, within their cohort, the obese group exhibited higher values for the number of intraoperative fluoroscopies, access establishment time, and procedure time compared with the non-obese group (all P < 0.05).
Quality analysis and publication bias
Table 2 presents a comprehensive overview of the risk of bias assessment conducted for all studies included in the meta-analysis. Each study exceeded the predetermined quality threshold, as evidenced by NOS scores of 5 stars or more. This consistently high scoring across studies attests to the robustness of the evidence synthesized in this meta-analysis. To further scrutinize the potential for publication bias, particularly for overall complications, a visual inspection of the funnel plot (Fig. 6) was undertaken. The symmetrical distribution observed within the funnel plot suggests a low risk of publication bias, enhancing the credibility of the findings.
Table 3 provides a concise summary of the GRADE assessment of confidence in the overall results.
Discussion
We performed this analysis to determine the perioperative variables and postoperative clinical outcomes of obese patients receiving FETD. We observed that obese patients had significantly longer mean operative times and higher overall postoperative complication rates compared with non-obese patients. The identified differences in operative times and complication rates underscore the potential challenges and considerations inherent in FETD procedures in obese individuals. The extended operative times may be indicative of increased technical complexity, possibly attributable to anatomical variations or procedural intricacies associated with obesity. In addition, epidural fat protruding early in the surgery requires more irrigation fluid to keep the field clear, which further increases the operative time and the opportunity for problems to arise. Moreover, the higher overall postoperative complication rates in obese patients emphasize the need for heightened vigilance and tailored perioperative management strategies to address potential challenges and enhance patient safety. It should be noted that this meta-analysis represents the first comparative examination of clinical outcomes in obese versus non-obese patients with LDH undergoing FETD. This analysis not only informs clinical decision-making but also serves as a foundational reference for future investigations aimed at refining and optimizing surgical strategies for LDH in the context of obesity.
Beyond its well-established association with cardiovascular and cerebrovascular diseases, obesity significantly contributes to orthopedic ailments [20][21][22]. While prior orthopedic literature primarily focused on weight-bearing knee degenerative diseases in obese patients [23], recent advances in spinal biomechanics have revealed a substantial linear correlation between obesity and conditions such as low back pain and lumbar disc herniation [24,25]. International research has highlighted that severely obese individuals experience 1.5 times greater lumbar forces than normal, requiring increased lumbar back muscle exertion to maintain body balance and prevent deviation from the central axis [26]. This increased lumbar force raises the risk of lumbar strain or disc herniation. In addition, the substantial load borne by the lower lumbar intervertebral discs in severely obese patients exacerbates degeneration, making the nucleus pulposus and annulus fibrosus more susceptible to rupture under equivalent external forces [27]. Furthermore, the likelihood of vascular sclerosis or injury of the upper and lower endplates in severely obese individuals impedes nutrient supply, resulting in metabolic imbalances, reduced matrix synthesis, increased acid metabolites, and diminished water content of the disc.
Traditional open laminectomy serves as a prevalent clinical intervention for LDH, effectively mitigating mechanical compression within the spinal canal. However, the inherent drawbacks of the procedure, such as excessive manipulation of the paravertebral muscles that leads to a greater risk of hemorrhage and an increased incidence of postoperative adhesion, warrant consideration [8]. Moreover, the utilization of general anesthesia in traditional procedures introduces elevated anesthesia-associated risks, prolonged postoperative recovery, and heightened susceptibility to complications such as urinary tract infections and pneumonia [29]. In contrast, FETD offers a distinct approach. The employment of local anesthesia during FETD not only diminishes anesthesia-related risks but also facilitates direct communication with the patient, reducing the likelihood of neurological damage. FETD obviates the need for spinal cord retraction and bone removal, minimizing the impact on adjacent soft tissues and muscles [16]. Precise decompression through a small incision maximizes the preservation of posterior spinal integrity and mitigates potential complications [17]. In addition, this approach has minimal impact on the feasibility of subsequent posterior decompression or fusion surgery.
FETD offers several theoretical advantages for obese patients undergoing surgical intervention for LDH. FETD utilizes a small surgical incision, potentially reducing the incidence of incisional fat liquefaction, a complication more common in obese patients owing to the increased adipose tissue at the incision site. Discography can be performed concurrently with FETD, allowing precise localization of the ruptured annulus fibrosus, the source of pain in LDH; this combined approach can enhance diagnostic accuracy compared to traditional methods. Staining the surgical field with a methylene blue and iodine alcohol mixture can improve visualization of anatomical structures, particularly nerve roots, facilitating a smoother and more efficient surgical procedure. The use of a radiofrequency knife during FETD offers potential benefits: it may ablate nerve endings that have infiltrated the ruptured annulus fibrosus, potentially reducing postoperative pain, and the radiofrequency technology may lessen the formation of nerve adhesions, a potential source of chronic pain. Continuous saline irrigation throughout the procedure can effectively flush out chemical irritants released from the ruptured disc material, reducing the accumulation of these substances and minimizing chemical irritation of the nerve root, potentially leading to faster recovery and reduced postoperative pain. Despite its effectiveness, FETD comes with a steeper learning curve, narrower indications compared to traditional open surgery, limited decompression capability, longer working channels required for obese patients, and the imperative need for weight control and avoidance of early postoperative physical exertion.
Comprehensive meta-analysis of three and four studies evaluating VAS back and leg pain scores and ODI before and after surgery, at 3 months postoperatively, and at the final follow-up revealed no statistically significant differences in improvement between obese and non-obese patients after FETD. The findings indicate that surgical outcomes in terms of pain relief and functional improvement were comparable between the two groups at both short-term and long-term follow-up. Furthermore, the evaluation of surgical satisfaction using the modified MacNab criteria did not show significant differences between obese and non-obese patients. Collectively, these results suggest that FETD yields similar clinical benefits in terms of pain relief, functional improvement, and patient satisfaction, regardless of obesity status.
Our analysis revealed a significantly higher overall complication rate in obese patients compared to their non-obese counterparts. Interestingly, however, the rates of immediate and late complications did not differ statistically between the two groups. Common surgical complications associated with FETD include postoperative radicular sensory abnormalities, dural tears, and nerve root injuries. Among these, postoperative radicular sensory abnormalities are the most common, often attributed to overstimulation or nerve root damage; prudent preoperative evaluation and a gentle operative approach are crucial in mitigating the risk of this complication [30]. Dural tears, often encountered by inexperienced practitioners or in cases with substantial subdural scar tissue adhesion, underscore the importance of meticulous and cautious surgical procedures to prevent iatrogenic injury. Nerve root injuries, the most serious complication, often result from insufficient familiarity with anatomical structures and inadvertent maneuvers, emphasizing the need for careful identification of tissue structures and avoidance of rough operative techniques. Recurrence rates after FETD, reported in the literature to range from 5 to 15% [31], highlight the significance of thorough removal of degenerated intervertebral disc tissue during the operation to minimize the risk of recurrence. Postoperatively, lumbar back muscle exercises and a month-long protective regimen against heavy physical labor are recommended to further reduce the likelihood of recurrence. While FETD proves to be an effective, safe, and minimally invasive surgical method for treating lumbar disc herniation, careful adherence to surgical indications and contraindications, along with continual improvement in surgical proficiency, remains crucial in minimizing the incidence of complications associated with this technique.
In addition, the findings of our study carry significant clinical implications, particularly for the management of obese patients undergoing FETD. Obesity poses unique challenges in surgical interventions due to increased surgical complexity, higher complication rates, and potentially inferior outcomes. Therefore, tailoring surgical approaches to address these challenges is crucial. Firstly, prolonged operative time can potentially increase the risk of intraoperative complications, such as anesthesia-related issues, surgical site infections, and blood loss. Clinicians should consider these factors when planning surgical schedules and allocating resources for obese patients undergoing FETD. Strategies to optimize perioperative management, such as meticulous surgical planning, preoperative optimization of comorbidities, and intraoperative monitoring, are essential to mitigate the increased risks associated with prolonged surgery. Secondly, given the higher prevalence of comorbidities and anatomical variations in obese patients, thorough preoperative assessment is crucial to identify potential risk factors and optimize surgical outcomes. This includes assessing the severity and duration of symptoms, evaluating the extent of disc herniation, and considering the presence of concomitant spinal pathologies. Clinicians should also take into account the patient's body habitus, spinal anatomy, and overall health status when determining the suitability for FETD. Furthermore, we emphasize the need for tailored surgical techniques and instrumentation to address the anatomical challenges posed by obesity. This may involve using longer instruments, specialized retractors, and advanced imaging modalities to navigate through adipose tissue and reach the target disc space safely and effectively. Additionally, intraoperative fluoroscopy or navigation systems can aid in accurately localizing the surgical site and minimizing the risk of iatrogenic injury. Lastly, comprehensive postoperative care and rehabilitation are essential in optimizing outcomes for obese patients undergoing FETD. Close postoperative monitoring for potential complications, such as wound infections, neurological deficits, and recurrence of symptoms, is essential in obese individuals due to their heightened susceptibility. Moreover, implementing tailored rehabilitation programs focusing on weight management, core stabilization, and lifestyle modifications can promote long-term success and prevent recurrence of disc herniation in obese patients.
Limitations
Some limitations of the present study lie in the predominantly retrospective nature of the included studies (including selection bias, information bias, confounding variables, and challenges in establishing external validity), coupled with the relatively modest number of available studies, both of which contribute to the overall constraints of this review. Furthermore, the discernible divergence in the definitions of obesity between the respective authors of the included studies represents a notable weakness, introducing variability and potential bias into our findings. Despite these limitations, we contend that our study has yielded valuable information, providing a foundation for future investigations.
Conclusion
The key findings of our meta-analysis underscore notable distinctions between obese and non-obese patients who undergo FETD. In particular, obese individuals exhibited prolonged FETD operative durations and experienced a higher overall postoperative complication rate compared to their non-obese counterparts. However, no statistically significant differences were discerned between the two groups regarding the length of hospitalization, the extent of improvement in VAS scores, the improvement in ODI, or the recurrence rates. These findings, though noteworthy, merit consideration in the context of acknowledged limitations in study design and heterogeneity among the included studies. Despite these constraints, our investigation serves as a valuable preliminary exploration that helps refine our understanding of surgical management strategies for LDH in both obese and non-obese patients.
Fig. 1
Fig. 1 Flowchart of study selection for meta-analysis
Fig. 2
Fig. 2 Forest plot comparison of operative time (A) and hospitalization (B) in obese and non-obese patients undergoing full endoscopic transforaminal discectomy
Fig. 3
Fig. 3 Forest plot comparing the improvement of VAS scores for back pain at 3 months (A), back pain at final follow-up (B), leg pain at 3 months (C), and leg pain at final follow-up (D) in obese and non-obese patients undergoing full endoscopic transforaminal discectomy. VAS: visual analog scale
Fig. 4
Fig. 4 Forest plot comparing the improvement of ODI scores at 3 months (A), at final follow-up (B), and the final satisfaction rate (C) in obese and non-obese patients undergoing full endoscopic transforaminal discectomy. ODI: Oswestry Disability Index
Fig. 5
Fig. 5 Forest plots comparing total complication rate (A), immediate complication rate (B), late complication rate (C), and recurrence rate (D) in obese and non-obese patients undergoing full endoscopic transforaminal discectomy
Fig. 6
Fig. 6 Funnel plot of publication bias for complications
Table 1
Characteristics of the included studies. NR: not reported; BMI: body mass index.
Table 2
Quality assessment of the included studies (Newcastle-Ottawa Scale). Bae 2016 [16]: 7 stars; Kapetanakis 2018 [17]: 6 stars; Yu 2021 [18]: 7 stars; Zhu 2021 [19]: 5 stars.
Effect of Different Deficit-Irrigation Capabilities on Cotton Yield in the Tennessee Valley
1 Biosystems Engineering Department, Auburn University, Research Fellow, 200 Corley Bldg, Auburn, AL 36849-5417
2 Biosystems Engineering Department, Auburn University, Associate Professor, 200 Corley Bldg, Auburn, AL 36849-5417
3 Biosystems Engineering Department, Auburn University, Associate Professor, 200 Corley Bldg, Auburn, AL 36849-5417
4 Biosystems Engineering Department, Auburn University, Emeritus Professor, 200 Corley Bldg, Auburn, AL 36849-5417
5 Biosystems Engineering Department, Auburn University, Professor, 200 Corley Bldg, Auburn, AL 36849-5417
6 Tennessee Valley Research & Extension Center, Associate Director, P.O. Box 159, Belle Mina, AL 35615
7 Tennessee Valley Research & Extension Center, Director, P.O. Box 159, Belle Mina, AL 35615
Abstract
Fluctuations in cotton (Gossypium hirsutum, L.) yield in the Tennessee Valley of Alabama are common and usually related to drought or irregular rainfall. A sprinkler irrigation study was conducted from 1999 to 2004 to evaluate the minimum design flow rate needed to produce optimum cotton yields and economic gain. A replicated randomized block design was used, consisting of four irrigation treatments ranging from one inch every 12.5 days (equivalent to a 1.5 gpm acre -1 design flow rate or system capability) to one inch every 3.1 days (6.0 gpm acre -1 ), plus a rainfed control treatment. Daily plant water requirement was determined using soil moisture sensors and a spreadsheet-based scheduling program (MOISCOT) developed by Alabama Cooperative Extension engineers. Significant yield differences between irrigated and rainfed cotton were noted during the study period, with rainfall variability and treatment effects accounting for most of the yield response. The minimum design flow rate (1.5 gpm acre -1 ) increased mean seed cotton yield by more than 500 lb acre -1 over rainfed yields. The most economically efficient design flow rate (4.5 gpm acre -1 ) increased mean seed cotton yield by more than 996 lb acre -1 . A positive relationship was observed between cotton yield and total seasonal irrigation depth during dry years. Across all six years of the study, irrigated treatments produced significantly higher yields than rainfed cotton. The highest six-year cotton lint yield and net economic returns were obtained with the 4.5 gpm acre -1 irrigation treatment. This result provides a rule of thumb for estimating the extent of irrigated area based on available water supply rate.
Introduction
Increased demand for limited water resources worldwide mandates that agricultural sectors pursue increased water use efficiency for irrigation while striving for optimum economic crop productivity. Excessive irrigation aggravates water scarcity and can result in leaching and/or runoff of nutrients and pesticides. As a result, excessive irrigation can lead to increased costs for production and environmental protection. This research identifies the minimum level of irrigation for economic crop yield over a multiyear time span in a humid subtropical climate. Several studies in this humid region showed a cotton yield response to irrigation during seasons with insufficient rainfall [1][2][3]. Thus, an attempt was made here to study the response of cotton to deficit irrigation as a means to conserve irrigation water while maintaining an economic yield.
Deficit irrigation has been reported by numerous authors as a method to improve water use efficiency in plants [4][5][6][7]. Bordovsky et al. [8] observed that deficit irrigation of short-season cotton using a LEPA system not only improved lint yield, but also conserved groundwater on the Texas Southern High Plains. Likewise, Kirda et al. [6] reported that deficit irrigation was effective in saving irrigation water and increasing water use efficiency but did not decrease cotton seed yield. On the contrary, Steger et al. [9] reported that water stress caused by delayed post-planting irrigation reduced cotton lint yield. Similarly, in field studies conducted under rainfed and irrigated conditions, Pettigrew [10] found that moisture deficit reduced cotton lint yield by 25% in rainfed cotton. Moreover, DeTar [11] showed that deficit irrigation of cotton on a sandy soil reduced yield. The decline in yield as a result of moisture deficit in cotton plants is due to physiological impacts such as reduced root growth, decreased leaf area index, lower photosynthesis, and decreased flowering and fruiting [12][13][14][15][16][17][18][19][20].
Northern Alabama has abundant water for crop production based on average annual rainfall (52 inches); however, the region has large inter-annual variability in rainfall, with historically low rainfall during the growing season (Figure 1). Sporadic convective rainfall during the growing season makes rainfed agriculture a poor competitor to the efficiency of irrigated agriculture [21]. This research originated from the broad body of knowledge related to rain-fed and irrigated crop production and to irrigation management, especially deficit irrigation practices. Earlier work by Tyson et al. [22] led to the development of a cotton scheduling procedure entitled MOISCOT (Moisture Management and Irrigation Scheduling for Cotton). This approach utilizes long-term average crop water use data, soil moisture monitoring, and precipitation data to schedule cotton irrigation timing and the quantity of water applied. In this experiment, the MOISCOT scheduling procedure was interrupted to incorporate a deficit irrigation component in order to simulate various design capacities, in terms of gallons per minute per acre (gpm acre -1 ), available for pivot irrigation of cotton. Recommended scheduling by MOISCOT was necessarily delayed when deficit irrigation capability treatments were unable to provide irrigation applications when the MOISCOT scheduling procedure called for irrigation. When extended dry periods occurred, MOISCOT would recommend irrigation, but some of the capability treatments did not allow irrigation to occur because sufficient time had not passed since the last irrigation application. In terms of a center pivot system designed with a flow rate that did not meet the peak evapotranspiration rate of the cotton crop, a delay would occur between the time irrigation was called for by MOISCOT and when the system could make an application. In this case, only sufficient rainfall could return the available soil moisture in a field irrigated by the pivot to field capacity.
Because irrigation water supplies are limited on many farms, this research was designed to determine whether satisfactory yields could be achieved over a number of years using irrigation systems that could not provide adequate water to replace crop evapotranspiration during peak water demand periods. The design flow rate delivered to center pivot systems is sometimes described in terms of gallons per minute per acre (gpm acre -1 ). The desired or optimum flow rate in gallons per minute (gpm) delivered to a pivot is determined based on the anticipated crop(s) to be grown, soil type and water holding capacity, the peak water use of the crops grown, and the acres irrigated by the pivot. Dividing this flow rate by the acres irrigated determines the gpm acre -1 . In areas with similar climatic conditions, soil types, and crops produced, this term can be used to quickly estimate the flow rate needed for any size pivot. The number of acres irrigated by the center pivot multiplied by the gpm acre -1 is the fixed or design flow rate that will be delivered from the water source to the pivot. A higher gpm acre -1 flow rate may allow the pivot to match or exceed the evapotranspiration rate during peak water use periods and is preferred. A lower gpm acre -1 may fail to supply the peak evapotranspiration rate and thus is not preferred, but may be necessary where the water supply is insufficient to provide the higher flow rate. Determining crop yield and economic benefits in response to a range of flow rates from low to optimum over several years is the major objective of this study.
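The correspondence between design flow rate and the time needed to apply one inch follows directly from this arithmetic, since one acre-inch of water is approximately 27,154 gallons. A minimal Python sketch reproducing the treatment intervals used in this study is shown below (note that the paper rounds 12.6 to 12.5 days):

ACRE_INCH_GALLONS = 27154  # gallons in one acre-inch of water

def days_per_inch(gpm_per_acre):
    # Convert gpm to gallons per day, then divide into one acre-inch
    return ACRE_INCH_GALLONS / (gpm_per_acre * 60 * 24)

for rate in (1.5, 3.0, 4.5, 6.0):
    print(f"{rate} gpm/acre: one inch every {days_per_inch(rate):.1f} days")
# Prints 12.6, 6.3, 4.2, and 3.1 days, matching the treatment intervals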
Thus, the gpm acre -1 treatment levels used in this study reflect the irrigation capability of a system, with the lowest gpm acre -1 treatment providing substantially less than the peak water demand and the highest gpm acre -1 treatment providing application amounts near peak. Lower gpm acre -1 treatments reflect the most extreme cases of deficit irrigation design. A fixed flow rate is required for a specific center pivot system design. Different flow rates can be specified for a system but cannot be changed after the initial design without major modifications. Lower or deficit flow rates might be selected because of a limited water supply. For example, a 100-acre system would have continuous pumping capacities throughout the growing season of 150 gpm, 300 gpm, 450 gpm, and 600 gpm for the respective treatment levels. A system design capacity experiment was therefore established in 1999 with the goal of determining the minimum design capacity, in gpm acre -1 , for center pivot irrigation in Northern Alabama to produce optimum economic cotton yield. Specific objectives of the study were to 1) compare sprinkler irrigated cotton yields to rainfed yields under different in-season rainfall levels and distributions, 2) determine the minimum design capacity for sprinkler irrigation without impacting cotton yield, and 3) identify the economic return for varying irrigation capacities along with non-irrigated cotton.
Materials and Methods
The research presented in this paper is located in northern Alabama in the Tennessee Valley, an area of widespread cotton production. This study was conducted on a Decatur silt loam soil (fine, kaolinitic, thermic, Rhodic Paleudults) at the Tennessee Valley Research and Extension Center located in Belle Mina, Alabama, during 1999-2004.
During the six years of this study, growing season precipitation and evaporation fluctuated across a wide range, providing representative wet and dry years for comparison (Figure 1).
Treatments included four sprinkler irrigation system capacities (Hunter pop-up rotors, Hunter Industries Inc., San Marcos, California) and a rainfed control treatment. Irrigation was managed using soil moisture sensors and a spreadsheet-based scheduling method. The irrigation system capacities tested were (1) one inch every 12.5 days, (2) one inch every 6.3 days, (3) one inch every 4.2 days, and (4) one inch every 3.1 days. The one-inch amount represented the maximum irrigation depth applied during the number of days indicated. One inch represented a typical application that could be applied by center pivot systems to the soils in this region with minimum runoff. These four irrigation capabilities were equivalent to 1.5, 3.0, 4.5, and 6.0 gpm acre -1 , respectively. The application amount was scheduled as one inch with an electronic controller that controlled the sprinkler run time for each plot. The actual application amounts were determined from field measurements with rain gauges placed in each treatment plot. The actual amount of water applied throughout the six-year study ranged from 0.98 to 1.15 inches. This variability reflected the mechanical and hydraulic characteristics of the sprinklers as well as wind or drift effects. Hence, an average of one inch was applied each time the MOISCOT scheduling program called for irrigation, provided that sufficient time had elapsed between irrigations for each treatment. Thus, MOISCOT might call for irrigation, but irrigation may have been delayed until the design capacity time limitation for that treatment was met. In some cases, rainfall occurring within the waiting period would satisfy the crop requirement for ET. Thereby, the experiment provided a realistic simulation of different center pivots with different pumping capacities and flow rates irrigating a cotton field under identical rainfed conditions, but with different availability of water for irrigation. For example, a center pivot system with a lower pumping capacity per acre would require a longer period of time to apply one inch than a system with a higher pumping capacity.
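MOISCOT itself is a spreadsheet program, but the capacity-constrained scheduling rule just described can be sketched as follows. This is a hypothetical simplification, not the actual MOISCOT logic: the soil moisture deficit grows with daily crop water use, shrinks with rainfall, and a one-inch application is made only when the scheduler calls for it and the treatment's minimum reapplication interval has elapsed.

def simulate_schedule(daily_et, daily_rain, min_interval_days, trigger=1.0):
    # Returns the days on which a one-inch application is actually made
    deficit, last_irrigation, applied = 0.0, -min_interval_days, []
    for day, (et, rain) in enumerate(zip(daily_et, daily_rain)):
        deficit = max(0.0, deficit + et - rain)  # rain refills the profile
        calls_for_water = deficit >= trigger      # scheduler calls for 1 inch
        interval_elapsed = day - last_irrigation >= min_interval_days
        if calls_for_water and interval_elapsed:
            applied.append(day)
            deficit = max(0.0, deficit - 1.0)     # one-inch application
            last_irrigation = day
    return applied

# Thirty hypothetical rainless days at 0.25 inch/day crop water use:
print(simulate_schedule([0.25] * 30, [0.0] * 30, 12.5))  # 1.5 gpm/acre
print(simulate_schedule([0.25] * 30, [0.0] * 30, 3.1))   # 6.0 gpm/acre

With the 12.5-day constraint the deficit accumulates between permitted applications, mimicking the delay described above for low-capacity systems.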
In order to develop a sprinkler plot layout to simulate different center pivot capacities, 39 feet x 39 feet square sprinkler research plots were designed and installed. The plots were designed to deliver water with head-to-head coverage in each plot area. Each sprinkler was adjusted to apply water in a quarter circle so that all water applied by the four sprinklers was placed in the designated plot. The irrigation system controller for all plots had a cycle-and-soak feature that allowed the application of one inch in an ON-OFF cycle to each plot to ensure that applied water infiltrated the designated plot without runoff.
The planted plot size within each square irrigated plot was 26.7 feet x 39.0 feet, equivalent to eight 40-inch cotton rows, each 39 feet long. The middle four rows within each eight row plot served as data rows and the two outside rows within each plot served as guard rows. The excess width within each plot was planted in fescue and this perennial turf grass utilized irrigation overthrow outside the area planted to cotton.
Individual plots were arranged in a randomized complete block design of five treatments. From 1999 to 2000, three replications of each treatment were used. In 2001 and thereafter, a fourth replication was added when an adjacent space became available (Figure 2).
Moisture management and irrigation scheduling were accomplished using Watermark™ soil moisture sensors (Irrometer Company Inc., Riverside, California) and the spreadsheet-based MOISCOT irrigation scheduling program developed by the Alabama Cooperative Extension System [22]. The MOISCOT program was designed to use data from individual farm fields to calculate anticipated soil moisture deficits and the future date when irrigation should be applied to replenish an acceptable soil moisture deficit. This program required a one-time entry of information on the irrigation system type, crop, planting date, and soil characteristics of the irrigated fields into a spreadsheet; twice-weekly entry of soil moisture readings at 9- and 18-inch depths; and daily entry of irrigation and rainfall inputs. The program then calculated a date in the future to replace a projected one-inch soil moisture deficit. The Watermark™ soil moisture sensors were installed according to the manufacturer's recommendations in each plot at 9- and 18-inch depths. Wedge-shaped rain gauges were installed under the sprinkler irrigation system within each plot to measure the irrigation applied, and another rain gauge was installed adjacent to the study site to measure rainfall. Cotton was planted in the second or third week of April each year using a 4-row planter on 40-inch row spacing with a seeding rate of 4-5 seeds per foot. Cotton was chemically defoliated 10 to 14 days prior to harvest by spraying Finish (1.33 pt/acre) plus Ginstar (3.0 oz/acre). The four yield rows were harvested between the third week of September and the first week of October using a 2-row cotton picker. Each plot was harvested separately and weighed using a boll buggy (John Deere, Moline, Illinois) equipped with scales, and the accumulated mass was divided by the harvested area to compute seed cotton yield. Lint turnout was determined as a seasonal average from bulk seed cotton samples ginned at a local gin. The average lint turnout from seed cotton was 38% for the 1999-2001 seasons and 35% for 2002-2004. An economic analysis was conducted to evaluate irrigated cotton income gains over rainfed cotton using yield and total irrigation data per season for each irrigation capability. A sale price of $0.55/pound lint including a resale value of $200/ton seed, total annual irrigation system ownership costs of $87.95 per acre, and irrigation operating costs of $9.39 per acre-inch for a 140-acre pivot were used for the economic evaluation [23].
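As a worked illustration of this economic evaluation, a minimal sketch is given below. It makes two stated assumptions not in the original: seed mass is taken as seed cotton minus lint (gin trash ignored), and the yield gain and irrigation depth used in the example are hypothetical.

LINT_PRICE = 0.55        # $/lb lint
SEED_PRICE = 200 / 2000  # $/lb seed ($200 per ton)
OWNERSHIP = 87.95        # $/acre annual irrigation system ownership cost
OPERATING = 9.39         # $/acre-inch irrigation operating cost

def net_gain_per_acre(extra_seed_cotton_lb, inches_applied, turnout=0.38):
    lint = extra_seed_cotton_lb * turnout
    seed = extra_seed_cotton_lb - lint  # assumes no gin trash
    receipts = lint * LINT_PRICE + seed * SEED_PRICE
    return receipts - OWNERSHIP - OPERATING * inches_applied

# e.g. a 996 lb/acre seed cotton gain with 8 inches applied in a dry year:
print(round(net_gain_per_acre(996, 8), 2))  # about $107/acre net gain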
Yield data were analyzed statistically with a general linear model (GLM) using the LSD method for means separation at P ≤ 0.05 [24]. Table 1 presents the total amount of irrigation water applied per treatment per acre in each season. Table 2 shows average seed cotton yields per treatment per season.
Results and Discussion
In 2004, rainfall was plentiful throughout the growing season, and rainfed and irrigated yields were not statistically (P = 0.05) different (Table 2). In 2003, rainfall was near optimum through much of the growing season, but a 26-day dry period occurred between August 7 and September 4. A total of only 0.61 inches of rain occurred during this period, measured in seven minor rainfall events [25]. Three timely one-inch irrigation applications during this period boosted irrigated yields significantly (P = 0.05), with more than 451 additional pounds of seed cotton per acre in the highest irrigation treatments (3.0, 4.5, and 6.0 gpm acre -1 ). The lowest irrigation treatment was not significantly different from the rainfed cotton yield (Table 2).
In 2002, irrigated yields were significantly (P = 0.05) higher than non-irrigated yields, but the highest yields were less than in other years for most irrigated treatments and were less than the 6-year means (Table 2). This reduced seed cotton yield was attributed to the very dry conditions late in the 2002 growing season, when pumping problems prevented the maximum application rate from meeting peak water demand, reducing yields in all treatments. No significant yield differences were noted in 2001 between rainfed and irrigated treatments, except for the 4.5 gpm acre -1 treatment, the highest yielding treatment. Significant yield differences were measured between rainfed and all irrigated treatments in 1999 and 2000. Although the 2000 and 2002 seasons had similar rainfall (Table 1), the higher yield obtained during 2000 may be related to the greater water depth applied during this dry growing season (Table 1). Rainfall variability and treatment effects accounted for the wide range of yield responses in each of these years [25]. In drier seasons, treatments with higher irrigation capability gave greater yields than lower irrigation treatments, whereas in wet seasons little or no response was observed.
Yields in the lowest irrigation design flow, 1.5 gpm acre -1 (1 inch every 12.5 days), were not significantly different from rainfed yields during three relatively wet seasons (Table 2). However, even this lowest deficit irrigation design boosted yield significantly (P = 0.05) during the dry years 1999, 2000, and 2002. The next highest irrigation design flow rate, 3.0 gpm acre -1 (1 inch every 6.3 days), did not have yields significantly different from 1.5 gpm acre -1 in four seasons, but had an average 6-year yield significantly higher than both 1.5 gpm acre -1 and rainfed cotton. The highest irrigation design flow rates, 3.0, 4.5, and 6.0 gpm acre -1 (1 inch every 6.3, 4.2, and 3.1 days, respectively), produced statistically similar yields in most years and resulted in 6-year average yields significantly higher than both rainfed and 1.5 gpm acre -1 treatments (Table 2).
When correlating seasonal rainfall with annual treatment yields, the correlation coefficient increased with decreasing irrigation capability design (Table 2), as would be expected. Similarly, there was a positive relationship between cotton yield and irrigation capability except in seasons with sufficient rainfall (Table 2). Cotton yield responses to irrigation observed in most seasons of this study confirm similar results reported by others [1,3,[26][27][28][29][30][31]. The results of this study stress the importance of irrigation to beneficially offset insufficient growing season rainfall. Nevertheless, other studies [27,32,33] reported no response to irrigation in cotton and attributed that to either insufficient irrigation applied or restricted root growth caused by soil compaction. In the present study, the absence of response to irrigation treatments observed during the 2001 and 2004 seasons is likely related to adequate rainfall during these seasons (Tables 1, 2). In a similar study under similar conditions, Balkcom et al. [34] and Balkcom et al. [35], testing different irrigation regimes, found that irrigation increased seed cotton yield. Similarly, Howell et al. [36], in a thermally and rainfall limited environment such as the North Texas High Plains, found that deficit irrigation doubled cotton yield over rainfed yields. In contrast, Enciso et al. [37] reported that irrigation intervals ≤ 16 days did not influence cotton lint yield and quality using subsurface drip irrigation in medium to fine textured soils under limited water conditions. DeTar [11] also showed that deficit irrigation of cotton on sandy soils could greatly reduce yield. In a simulation study, Jalota et al. [5] also showed that reducing the amount of irrigation below the economic level reduced both yield and evapotranspiration of cotton to varying degrees depending on soil texture, precipitation, and irrigation regime.
(Table 2 notes: Means with the same letter in each row are not significantly different using LSD at P = 0.05. Ra, Rb = correlation coefficients of yield with irrigation and rainfall, respectively.)
Table 3 shows increasing seasonal operating costs for irrigation as the depth of irrigation increased with increasing irrigation capability. Higher operating costs were associated with the drier seasons (1999, 2000, and 2002), where total seasonal irrigation depths were higher (Table 1). Gross receipts and estimated net income gain above the rainfed control for the different irrigation capability treatments are given in Table 4. Gross receipts for lint yields above the rainfed control were calculated based on a sale value of $0.55/pound lint including a resale value of $200/ton seed [23]. Net income gains over the rainfed control for overhead sprinkler irrigation capabilities were estimated by charging the estimated ownership and operating costs against the corresponding gross receipts. During seasons with sufficient rainfall (2001, 2003, and 2004), sprinkler irrigation capabilities resulted in a negative net income gain over rainfed production, indicating that irrigation added unnecessary (unrecovered) costs. However, during drier seasons, cotton producers with adequate irrigation capabilities realized significant yield increases (500-1000 lb acre -1 ) and positive net income gains ($60-360 acre -1 ). Durham [38] reported that cotton irrigation returned a high net profit even during the wetter season of 2004.
Over the six-year study period, a cumulative net profit of $470 per acre was realized with an irrigation capability of 4.5 gpm acre -1 . Results from this study indicate that when growing season rainfall is below 12 inches, cotton producers in the Tennessee Valley of Alabama with adequate irrigation capability can realize significant yield increases along with positive net returns over rainfed cotton production. Results provide a rule of thumb of approximately 4.5 gpm acre -1 for estimating the extent of irrigated area based on available water supply rate.
Conclusions
In all treatments, irrigation was found to significantly increase seed cotton yield in seasons with inadequate rainfall. Data from this study indicate that the minimum design flow rate needed to produce optimum economic yields in irrigated cotton is 4.5 gpm acre -1 , which is equivalent to approximately one inch every 4.2 days. This information can be used to optimize the design of pivot irrigation pumping plants by matching pump and storage facility size to the total area irrigated in soils typical of the Tennessee Valley; it is not necessarily applicable to other areas or soil types. Thus, cotton producers in the Tennessee Valley region of northern Alabama with adequate irrigation capabilities can realize significant seed cotton yield increases and positive economic net returns. The results provide a rule of thumb of approximately 4.5 gpm acre -1 for estimating the extent of irrigated area based on the available water supply rate.
Transition from Classroom Teaching to E-learning in a Blink of an Eye
The main purpose of this paper is to give an overview and the lessons learnt about the abrupt transition from traditional classroom teaching to distance learning, based on the experience of Tallinn University of Technology (TalTech). The rapid outburst of the COVID-19 virus forced the government to shut down all schools and universities, and all teaching activities had to be carried out without physical contact. The universities had one weekend to conduct the transition. The influence of the rapid change in teaching practice was analysed using feedback from the study program directors, students and academic staff. The main enablers and disablers, with the main constraints, are presented. The analysis showed that the transition was impeded both by the lack of technological solutions (e.g. remote usage of laboratories) and by human resources (e.g. the skills and willingness of the academic staff to conduct the transition). The unique and hopefully non-recurrent situation made it possible to analyse institutional, technological and personal readiness to adapt to rapid changes in teaching practice. The outcomes of the experience will be used to improve readiness and competences at all levels at TalTech.
Introduction
Over the years, a number of e-learning environments and approaches were used at Tallinn University of Technology (TalTech), which led to a situation where students had to adapt to, and maintain multiple user accounts for, the different environments. The level of e-learning support was volatile and the term was interpreted very loosely. The presence of e-learning support was optional and not regulated at the university level.
Therefore, it was centrally decided that from spring 2018 all compulsory courses at TalTech had to have e-learning support. For that, the university set up its own e-learning environment based on Moodle and defined minimum requirements for the e-courses. Although the initiative was successful, until this spring only a minority of the courses were taught entirely in the e-learning environment (which was not actually set as a goal). In most cases, the e-learning support merely assisted classroom teaching.
The rapid outburst of the COVID-19 virus in Estonia in March 2020 [1] led to a rapid transition from classroom teaching to e-learning. This meant that all the teaching activities had to be carried out without physical contact. The universities had one weekend to conduct the transition.
It is evident that neither the institution nor the students and academic staff were ready for such a giant leap in teaching practice within just a few days. Nevertheless, the university reacted very quickly and issued precise and clear instructions for the academic staff on how to carry on with teaching activities. Two factors enabled the transition for the academic staff: the availability of support and clear guidance from the Educational Technology Centre, and the presence of e-learning support for all compulsory courses. For example, before the transition, 94% of all compulsory courses in the School of Engineering had the required e-learning support. At the university level, the percentage was 88.7%, being lowest at the School of Information Technology (69%). The three major disablers of the transition were the short time frame; the motivation, preparation and willingness of the academic staff; and limited or no remote access to infrastructure (e.g. lab facilities). The enablers and disablers of the transition are listed in Table 1. This paper analyses the possible constraints such an abrupt change in teaching practice places on overall teaching quality and outcomes. The potential impact of the transition, with the lessons learnt, is discussed.
Initial Phases of the Transition
The influence of the rapid change in teaching practice was analyzed using feedback from the study program directors, students and lecturers. The timeline of the crisis is presented in Fig. 1 to show the information flow throughout the event. The School of Engineering started to collect feedback from the study program directors from the first day to support the transition. An online form was set up to collect the information. Online meetings were held to gain more specific details about the shortcomings in the study process. Different measures for supporting the transition process were analyzed, and the impact of the rapid transition was studied throughout the emergency situation. Students' feedback was collected indirectly (through the feedback of the study program directors and dean's office personnel) and directly through interviews.
The collected data was and will be used for two purposes: 1) Overall management of the transition during the emergency situation. Feedback from all parties was used to support the decision-making process and to propose different measures to ease the transition. This included, for example, sharing the guidance materials and the contacts of the Educational Technology Centre, contacting academic staff whose contribution to the transition was weak, activating students to take part in distance learning, and analyzing the situation and potential threats. 2) Summarizing the feedback for future crises and analyzing the success of the transition at the institution, study program and individual course levels. This will make it possible to identify the shortcomings and the potential to further develop the e-learning environments and to diversify teaching practices based on the needs of single courses and their frame of reference.
The generalized responses from the study program directors and students are summarized in Table 2. Around 80% of the study program directors gave their feedback on the transition process during the first two weeks. Based on the feedback, the critical courses were identified and the responsible lecturers were contacted at the university, school/institute and study program levels. In most cases, it was possible to carry on with online lectures. Only a few courses postponed their activities planned for March and April.
The more-or-less smooth transition from classroom teaching to e-learning was greatly supported by the rapid reaction at the university level. The university had three days (including a weekend) to prepare guidelines for staff and students about the teaching process during the emergency situation. Over the weekend, clear guidelines were prepared. These included instructions for students and staff, rules about events and work-related gatherings, rules on the university's premises and services, and general recommendations. On the same day, a guideline was sent to all lecturers with recommendations on how to carry on with teaching and on which platforms. A Facebook page for events, news, instructions and recommendations was set up for rapid information sharing. A sample classroom was prepared in MS Teams with guidelines on how to use it for teaching purposes. Webinars and online courses about online teaching were announced for the coming weeks.
At the end of April it was evident that the peak of the COVID-19 crisis had passed. Therefore, new guidelines for staff and students were announced on 22nd of April. Based on the knowledge at the time, it was assumed that in the near future some form of face-to-face teaching would resume in higher education institutions as well, by decision of the Government of Estonia.
Table 2. Collected responses from the study program directors and students during the first two weeks of the event.
Feedback from study program directors:
• Most of the courses continued in online format using different online platforms
• Problems with guest lecturers, as they do not have user accounts for some of the online platforms
• Problems with special software; not enough licenses to be shared with students
• Problems with students' laptops/PCs; the hardware is not capable of running simulation programs
• Labs postponed to May-June
• Problems with setting up virtual laboratories because of cyber security issues
Feedback from students:
• Some courses were delayed, with no information on how the learning process would be carried on
• Students got a lot of individual assignments; the teaching part was forgotten
• Information gained from the lecturers was vague
• Problems with lab work and practices
• Students preferred that lectures carry on online according to the study plan in place before the crisis
• No contact with some of the lecturers
Therefore, it was proposed that studies at TalTech would be organized as follows:
• The majority of the courses would be completed according to the academic calendar.
• The majority of the exams and assessments would be completed according to the academic calendar. • In exceptional cases, and in compliance with the rules of the emergency situation, face-to-face teaching could also be carried out during the examination period. • It should be ensured, however, that students are able to complete their courses by distance learning methods as well (since some students have left for their home countries abroad, some are in quarantine, etc.). • At the request of the study program director and by the dean's order, the deadline for completing a course can be shifted, if necessary, to the end of August. The study load of students at the end of the academic year will be calculated after that. • The deadline for the defense of graduation theses is according to the academic calendar. In exceptional cases, the deadline for the defense of graduation theses may be extended until the end of August in accordance with the procedure established by the dean of the school. • In exceptional cases, it is possible to defend a graduation thesis conditionally, i.e. a student is allowed to defend his/her graduation thesis even if he/she has not completed all the courses included in his/her study program. • The deadline for confirming the study place was extended until 30th of August, in order to give bachelor's and professional higher education graduates the opportunity to continue their studies at the master's level at TalTech starting from the autumn. • The festive graduation ceremonies were cancelled.
Actions During and After the Transition
At the beginning of May it became evident that the emergency situation would end in the coming weeks. The end date of the emergency situation declared by the government matched the end date of the spring semester; therefore, the last two months of teaching were carried out as distance learning. At the same time, data about the courses that needed an extension were compiled at the university level, as was the number of students who needed an extension for the defense of their graduation thesis.
At the School of Engineering, 403 courses were taught in the spring semester. Of these 403, only 25 courses were extended, with the completion date set to August. This is 6% of the courses, which is very low taking into account the number of courses that include laboratory and/or field practices. In addition, 15 of the 25 courses were from one study program, where the postponement was agreed at the institutional level; the actual need for postponement was therefore even lower.
A similar trend was seen with the defense of graduation theses. In total, 556 students submitted an application to defend their thesis. Of the 556, only 33 requested a postponement of the defense to August. This again makes approximately 6% of all applicants. Nearly half of the requests (16) were from the architecture study program.
Two questionnaires were prepared at the university level to analyze the transition to distance learning from the perspectives of students and academic staff. At the university level, it was decided that students' opinions would be gathered during the periodic feedback (twice a year, after each semester). At the school level, the approach was somewhat different. Some schools, where the number of students was lower, decided to send out specific questionnaires about the transition process already in May, when the emergency situation was not yet over. The School of Engineering decided to add some questions related to the transition process to the periodic feedback questionnaire. All statements are assessed from 1 to 5, where 1 stands for "Completely disagree" and 5 stands for "Completely agree", and every question allows comments. The additional questions were as follows:
1. Information about the changes in the organization of the studies was available and timely
2. Distance learning was (generally) well organized and supported learning
3. The selected e-learning environments supported me in conducting my studies and my active participation as a learner in the learning process. Add your preferred e-learning environments to the comments
4. Staff in the dean's office were supportive and good-natured when solving the problems that arose
5. Lecturers reacted quickly and adequately when problems arose
The questionnaire for the academic staff was sent out to all lecturers just before the emergency situation ended. The questionnaire was open for one week, and lecturers were asked to answer based on the courses they taught. This meant that there could be multiple answers for one course (if it had more than one lecturer) and multiple answers from one lecturer. In all, 171 unique answers were gathered about 172 courses. The following aspects were asked of the academic staff: Questions 6 and 7 were added to the questionnaire only for the academic staff of the School of Engineering. The idea was to have some of the questions in both questionnaires (students and academic staff) in order to compare the results on information sharing and e-learning environments.
The results for some of the questions are presented in the next section.
Results and Discussion
Taking into account the feedback gathered during and after the emergency situation, it can be concluded that the rapid transition from classroom teaching to distance e-learning was successful. Tallinn University of Technology was quite well prepared in terms of both e-learning support and technological readiness, but the main uncertainty concerned the readiness and motivation of the staff and students to adapt to the transition. Luckily, the staff was motivated and reacted on time. This can be seen from Fig. 2, where the change in the percentage of mandatory courses equipped with proper e-support is shown.
The blue line shows the total number of courses that had e-support at the School of Engineering, and the red line shows the percentage of the new courses (taught for the first time during the spring semester) that had e-support. At the beginning of the semester (beginning of February), less than half of the new courses had proper e-support. This number increased quite rapidly during the first months and reached around 85% at the beginning of the COVID-19 crisis. At the end of the semester, only 6 courses (≈1%) lacked e-support with all the necessary elements according to the university's standard. The lecturers of most of these courses were specialists from outside the university who did not have enough time and/or knowledge to set up the required e-support. In those cases, members of the academic staff (either program directors or fellow lecturers) were asked to assist them during the process. The extra work of the academic staff members was reimbursed.
The first feedback about the transition process at the university level was gathered from the lecturers. Figure 3 shows the lecturers' satisfaction with the information exchange and its relevance during the crisis, and their opinion of the available e-learning environments. It can be concluded that the information exchange at the university was very good. The majority of the lecturers (≈91%) rated this with 4 or 5, and the average satisfaction was as high as 4.47. The availability of proper e-learning environments was also rated highly (≈86% rated it with 4 or 5), but the average satisfaction was slightly lower at 4.23. This is understandable, as the rating is more subjective. Fact-based conclusions can be drawn after the examinations. Figure 4 gives a very good overview of how the students reacted to the transition. It is good to see that in the majority of the classes the students' participation did not change or even improved (75%). Still, it has to be acknowledged that in 25% of the cases the students' participation decreased. This is not affected only by the crisis: previous studies on class attendance have shown that the number of attendees decreases during the semester [e.g. 2,3]. A study carried out in an elite economics school in Portugal [2] showed that class attendance decreased from 84-95% at the start of the semester to 49-66%, with the average attendance decreasing more-or-less linearly throughout the semester. Similar trends were reported in a study performed at the Technical University of Denmark based on information from nearly 1,000 undergraduate students [3]. It was shown that attendance differed with students' performance but decreased in all performer groups. Therefore, it can be concluded that the transition process was successful, as the majority of the lecturers (75%) did not notice a remarkable decrease in attendance and in some cases attendance even improved.
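The satisfaction figures quoted above reduce to two statistics per question: the mean score and the share of respondents rating 4 or 5. A minimal sketch of that summary, using made-up Likert responses rather than the actual survey data:

```python
# Minimal sketch of the feedback summary used above: average score and the
# share of respondents rating 4 or 5 on a 1-5 scale. The response list is
# invented for illustration; the real data came from the university survey.

def summarize(ratings):
    avg = sum(ratings) / len(ratings)
    top2 = sum(1 for r in ratings if r >= 4) / len(ratings)
    return round(avg, 2), round(100 * top2, 1)

ratings = [5, 4, 5, 4, 3, 5, 4, 5, 2, 5]   # hypothetical Likert answers
avg, top2_pct = summarize(ratings)
print(f"average {avg}, rated 4-5 by {top2_pct}% of respondents")
```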
Figure 3. Lecturers' feedback: "Information about changes in the organisation of studies was available and timely" (average 4.47); "The selected e-learning environments supported studying and teaching" (average 4.23).
Figure 5 presents the lecturers' subjective assessment of the students' performance and how they attained the learning outcomes. Nearly 3/4 assumed that the rapid transition from classroom teaching to e-learning did not affect performance; 14% reported a negative impact and 13% a positive impact. This data will be compared with the students' actual performance after the examinations to see the real effects of the transition.
Conclusions
The process of a rapid transition from classroom teaching to e-learning was analyzed based on the experience of Tallinn University of Technology. It was shown that the successful transition was enabled by the fact that most of the mandatory courses had e-support before the crisis. In addition, clear guidelines were rapidly prepared by the Educational Technology Centre to support the lecturers. It was evident that not all laboratory measurements and practices could be transferred to the online environment in just a few days (or even months). Another constraint was the readiness of lecturers and students to switch to a new teaching method in a blink of an eye.
The university started to gather data (both centralized and decentralized) from the first day of the transition. This made it possible to detect the courses where the transition to e-learning was postponed and the lecturers who struggled with the set-up of e-learning environments. The result was a low number of courses (≈6%) in which teaching activities were postponed to June-August. Similarly, only ≈6% of the students postponed the defense of their thesis, indicating that collaboration with the supervisors carried on successfully in online environments.
Feedback gathered from the lecturers indicated that the rapid transition did not have a significant effect on class attendance. Moreover, in most cases class attendance remained the same or even increased compared with the situation before the crisis. The students' performance can be assessed after the examination period; the lecturers' subjective opinion indicates that it is expected to be fairly good.
Myc-Induced Liver Tumors in Transgenic Zebrafish Can Regress in tp53 Null Mutation
Hepatocellular carcinoma (HCC) is currently one of the most lethal cancers, with an increasing incidence. Deregulation of MYC in HCC is frequently detected and always correlated with poor prognosis. As the zebrafish genome contains two differentially expressed zebrafish myc orthologs, myca and mycb, the oncogenicity of the two zebrafish myc genes has remained unclear. In the present study, we developed two transgenic zebrafish lines to overexpress myca and mycb, respectively, in the liver using a mifepristone-inducible system and found that both myc genes were oncogenic. Moreover, transgenic expression of myca in hepatocytes caused robust liver tumors with several distinct phenotypes of variable severity; ~5% of myca transgenic fish developed multinodular HCC with cirrhosis after 8 months of induced myca expression. Apoptosis was also observed with myca expression; introduction of a homozygous tp53-/- mutation into the myca transgenic fish reduced apoptosis and accelerated tumor progression. The malignant status of hepatocytes was dependent on continued expression of myca; withdrawal of the mifepristone inducer resulted in a rapid regression of liver tumors, and the tumor regression occurred even in the tp53-/- mutation background. Thus, our data demonstrate the robust oncogenicity of zebrafish myca and the requirement of sustained Myc overexpression for maintenance of the liver tumor phenotype in this transgenic model. Furthermore, tumor regression is independent of the function of Tp53.
Introduction
Hepatocellular carcinoma (HCC), a malignancy of hepatocytes, is the most common primary liver cancer in Central/Southeast Asia and sub-Saharan Africa [1]. As a deadly tumor with the traits of late-stage diagnosis, poor therapeutic response and bleak prognosis, it is a research hot spot for oncologists and other scientists. In humans, HCC is associated with multiple risk factors, such as hepatitis virus infection, aflatoxin, alcohol abuse, and non-alcoholic steatohepatitis, which ultimately increase genome instability and transform hepatocytes into a neoplastic state through cumulative mutations. Myc, a transcription factor estimated to regulate the expression of about 15% of cellular genes, is well known for its participation in many malignant conversions [2]. MYC gene amplification has also been frequently detected in human HCC and is especially related to advanced HCC cases [3,4].
To understand the fundamental mechanisms underlying cancer and to develop effective therapies, it is important to investigate tumor biology in animal models [5]. The zebrafish (Danio rerio) has emerged as a promising animal model for cancers because of its small size, high fertility, well-developed experimental resources and tools, and low maintenance costs [6]. As vertebrate species, both zebrafish and human have many conserved anatomical structures and homologous organs with similar physiological functions. Zebrafish can develop a wide spectrum of tumors in almost every tissue, which greatly resemble human malignancies in both histological characteristics and gene expression profiles [7-10]. Our previous study revealed that transgenic expression of mouse Myc in zebrafish led to liver tumors [11], and another study reported liver hyperplasia in medaka caused by transgenic expression of a quite divergent medaka myc gene [12]; however, so far no study has documented the oncogenicity of zebrafish myc. Due to the whole-genome duplication that occurred during fish evolution following the divergence of the teleost and tetrapod lineages, the zebrafish genome contains two myc genes orthologous to human MYC, i.e. myca and mycb [13]. In this study we generated two zebrafish transgenic lines with inducible myca and mycb expression, respectively, and found that both myc paralogs were oncogenic in hepatocytes. In particular, overexpression of myca resulted in high-grade HCC with histological traits similar to human HCC cases, and introducing a tp53 null mutation into the transgenic fish accelerated liver tumor progression. Furthermore, the tumor status was addicted to constant overexpression of myca, and suppression of transgenic myca expression by removal of the chemical inducer resulted in rapid tumor regression even in the tp53-/- background.
Both myca and mycb are oncogenic
To investigate the oncogenicity of the two zebrafish myc paralogs, two effector transgenic zebrafish lines carrying GFP-fused myca and mycb, respectively, were generated. They were then crossed with the liver-driver line [14] to obtain two double transgenic lines, named mycAG and mycBG respectively for GFP-fused myca and mycb. The constructs used for these transgenic lines are illustrated in S1 Fig., and the transgenic system, based on the mifepristone-inducible LexPR system [15], has been described previously for liver-specific inducible kras v12 expression [14].
To examine the effects of myc gene expression, mycAG and mycBG fish were induced with mifepristone at different concentrations from 1 month post-fertilization (mpf) and sacrificed at 2 months post-induction (mpi), i.e. 3 mpf. The expression of transgenic mycAG and mycBG increased with mifepristone concentration, as manifested by the increased GFP fluorescence in Fig. 1C2-C7 as well as by RT-qPCR (S2A Fig.). Moreover, the overexpression of mycAG and mycBG resulted in overgrowth of the liver (Fig. 1, B2-B7), as illustrated by GFP expression (Fig. 1, C2-C7) and as evident from the enlarged belly (Fig. 1, A2-A7), compared to the liver in the Driver control, which had only GFP expression in the liver (Fig. 1, A1, B1, C1 and S1 Fig.). Significant liver overgrowth was observed even in the case of weak expression with 0.005 μM mifepristone. Notably, the tumor size of the mycAG transgenic fish was significantly larger than that of the mycBG fish, while the body length was much smaller, indicating a higher tumor burden in the mycAG fish.
Histological examination of mifepristone-induced Driver control fish showed that GFP expression had no impact on normal liver architecture and histology (Fig. 1, D1). In contrast, the liver of mycAG fish was distinguished from normal histology by basophilic cytoplasm and enlarged eosinophilic nucleoli (Fig. 1, D2, D4 and D6). With increasing mifepristone concentrations, the liver lesions in mycAG fish progressed from hyperplasia to hepatocellular adenoma. In comparison, tumor progression in mycBG fish was slow. At 0.005 μM mifepristone, increased mitosis was observed in mycBG fish. Although aberrant nuclei were observed, the hepatocytes retained eosinophilic cytoplasm, and the two-cell-thick plate organization was similar to that observed in the Driver control (Fig. 1, D3). However, the nuclear abnormality increased with mifepristone concentration. At 4 μM mifepristone induction, the mycBG fish displayed liver hyperplasia (Fig. 1, D7) similar to that of the mycAG fish at 0.005 μM induction (Fig. 1, D2).
The above findings confirmed that both myca and mycb were oncogenic in zebrafish hepatocytes. The difference in tumor status between the two myc transgenic lines may be attributed to different levels of transgenic expression, which was confirmed by RT-qPCR analyses in S2A Fig., where transgenic mycAG expression was almost 5-fold that of mycBG. There was a dosage-dependent induction of GFP expression with increasing mifepristone concentrations (S2C Fig.), confirmed by RT-qPCR (S2D Fig.), and a noticeable increase of liver size based on 2D image measurement (S2E Fig.). This was also consistent with the generally higher GFP intensity in induced mycAG fish than mycBG fish in both fry (S2C Fig.) and adult (Fig. 1, C2-C7). Our RT-qPCR data also confirmed that both mycAG and mycBG expression were much higher than endogenous myca and mycb expression (S2A Fig.). Nuclear localization of both mycAG and mycBG fusion proteins was observed, and the Myc-localized nuclei were significantly larger than those in the wild type and Driver controls, indicating pathological changes in these cells (S2B Fig.). Collectively, these observations suggest a correlation between the severity of neoplasia and the level of myc mRNA expression.
Transgenic expression of mycAG induces rapid liver tumors accompanied by increased proliferation and apoptosis
As tumor progression in the mycAG line was faster and more severe than that in the mycBG line, the mycAG fish provided a more robust platform for further characterization of Myc-induced liver tumors and were thus used in the subsequent experiments. MycAG fish were induced with 2 μM mifepristone from 1 mpf and sampled at different time points. As shown in Fig. 2, A2, B2 and C2, although the liver size was still comparable with the control, transformed hepatocytes with basophilic cytoplasm and distinct eosinophilic nucleoli had already emerged at 10 dpi (days post-induction). From 20 dpi, all the hepatocytes had been transformed (Fig. 2, C3-C4). Moreover, the sinusoids were dilated during fast tumor progression and formed a pseudoglandular phenotype with apparent ascites, as shown in Fig. 2C4 and S3 Fig.
Increased proliferation was also observed during tumor progression in the mycAG fish, as demonstrated by the PCNA staining in Fig. 2D1-D4 and 2E. Usually, apoptosis serves as a barrier to tumor progression; interestingly, however, apoptosis was also observed to increase in the liver tumors of mycAG fish, as revealed by the TUNEL assay in Fig. 2E1-E4 and 2F. This is consistent with previous observations that Myc deregulation causes apoptosis [16-18], a property that may serve as a safeguard under normal conditions and impede tumor initiation caused by myc deregulation.
Variable levels of mycAG expression result in different types of liver tumors
At 1 mpi, the majority (~87%) of induced mycAG fish showed an ascites-like phenotype with yellow fluid in the abdominal cavity as well as fluid-filled cysts in the liver. With tumor progression, ascites gradually decreased in many of the mycAG fish, and the relatively uniform pseudoglandular tumor type diverged into several phenotypes. As indicated in Fig. 3, of 43 fish examined at 6 mpi/7 mpf, 28% showed the "Small Belly" phenotype (Fig. 3A2, B2, C2), in which the tumor size was relatively small and histological evaluation showed a mixture of hepatocytes with either transformed basophilic features or near-normal eosinophilic features (Fig. 3D2). 51% of the fish developed the "Typical" tumor (Fig. 3A3, B3, C3): large, smooth tumors with compact tissue organization (Fig. 3D3). 14% displayed the "Hypervascular" phenotype (Fig. 3A4, B4, C4), with prominent blood vessel overgrowth (Fig. 3D4). Only 7% retained the "Ascites" phenotype with pseudoglandular organization of tumor cells (Fig. 3A5-D5). Interestingly, these tumor phenotypes were related to the mycAG expression level. As shown in Fig. 3E, the "Hypervascular" tumors had the highest mycAG expression, followed by the "Typical" and "Ascites" phenotypes, while the "Small" phenotype had the lowest mycAG expression.
At 8 to 9 mpi, about 5% of the fish developed multinodular HCC with cirrhosis similar to that in humans (Fig. 4A-E). Although metastasis and tissue invasion were not observed in this multinodular HCC, loss of membrane-localized E-cadherin was observed (Fig. 4H, I), indicating increased motility of tumor cells and a possible preparation for epithelial-mesenchymal transition. Moreover, increased expression and nuclear translocation of β-catenin were also observed (Fig. 4J, K).
Homozygous tp53 M214K mutation promotes tumor progression in mycAG fish
Tp53 is a major tumor suppressor gene, and its mutation has been detected in the majority of human cancers [19]. To test the effect of tp53 mutation on tumor progression in mycAG fish, we introduced a homozygous tp53 M214K mutation into the mycAG transgenic fish and named this new line AG53. Apparently, the homozygous tp53 M214K mutation significantly promoted tumor progression at both adult (Fig. 5) and larval stages (S4A Fig.). As shown in Fig. 5, significant enlargement of the belly of AG53 fish could be grossly observed as early as 0.5 mpi, when the mycAG fish had no apparent gross changes. The body size of AG53 fish was also significantly smaller than that of mycAG fish, suggesting a more serious tumor burden in AG53 fish. Differences were also manifested in histology, as shown in Fig. 5D1-D5. Although AG53 and mycAG fish were still similar in the features of basophilic cytoplasm, enlarged nuclei and prominent eosinophilic nucleoli, the fast tumor progression in AG53 fish resulted in a firm and compact tumor type rather than the pseudoglandular tumor with ascites seen in mycAG fish at 1.5 mpi.
To investigate the effect of the tp53 M214K mutation, we first examined the levels of transgenic mycAG and endogenous myca expression in the Driver control, mycAG and AG53 fish, and found that neither mRNA was affected by the mutation in AG53 fish (S4B Fig.). As many studies have demonstrated that the TP53 pathway plays an important role in Myc-induced apoptosis [18], the effect of the tp53 M214K mutation on apoptosis was also examined. Apoptosis in the liver was significantly reduced in AG53 fish compared to mycAG fish; however, it was still higher than in the Driver control, which rarely showed apoptosis signals (S4C Fig.). This observation suggested that the tp53 M214K mutation blocked only part of the Myc-induced apoptosis, and/or that some apoptosis pathways other than Tp53 may also be activated by myca expression during tumor progression. Collectively, these observations suggest that the acceleration of tumor progression by the tp53 M214K mutation was at least partially aided by the suppression of apoptosis.
Tumor state is dependent on sustained overexpression of transgenic mycAG
One of the important features of the LexPR inducible expression system is the feasibility of inactivating transgenic expression by withdrawal of mifepristone [14]. To investigate the effect of suppressing transgenic mycAG expression in both mycAG and AG53 fish, mifepristone treatment was stopped at 6 mpi/7 mpf. As shown in Fig. 6E, the mRNA expression of transgenic mycAG in the liver decreased greatly within 8 days of mifepristone withdrawal (8 dpr, days post-regression). At 18 dpr, the mycAG level was comparable to the endogenous myca expression in the control fish liver. Consistently, the GFP signal also faded away rapidly (Fig. 6, C2-C7). Only very faint GFP was observed in mycAG and AG53 fish livers at 8 dpr (Fig. 6, C3 and C5), and essentially no GFP could be detected at 18 dpr (Fig. 6, C4 and C7).
At 6 mpi, the AG53 liver tumor had a uniform tumor type with early HCC traits, while the tumor histological features in mycAG fish were divergent, as mentioned earlier. With the removal of mifepristone, the tumors shrank fast in both mycAG and AG53 fish, while the body size and length increased significantly (Fig. 6, A2-A7). At the histological level, at just 8 dpr, the neoplastic hepatocyte features observed earlier had been replaced by tissue characterized by a relatively normal appearance with eosinophilic cytoplasm and basophilic nuclei (Fig. 6, D3 and D6). At 18 dpr, the appearance of the cells (Fig. 6, D4 and D7) was already similar to the Driver control (Fig. 6, D1). Moreover, cytoplasmic granules also increased in both mycAG and AG53 fish hepatocytes during tumor regression, a sign of restoration of hepatocyte function. Previously, we observed a rapid increase of apoptosis during liver tumor regression in our xmrk-induced liver tumors [20]. To examine whether apoptosis was involved in tumor regression in the tp53 M214K background, the TUNEL assay was carried out in the regressing fish. Apoptosis was rare in control livers from the Driver fish (Fig. 7C), but increased with tumor progression in mycAG fish (Fig. 7A) and dramatically increased during the regression process (Fig. 7B and 7F). As the normal liver has a very low level of apoptotic cells and there was a decrease of GFP-labeled hepatocytes, it is likely that these apoptotic cells were mainly transformed hepatocytes, similar to the observation of tumor regression in xmrk oncogene transgenic zebrafish [20]. Compared to the mycAG fish, almost the same level of apoptosis was observed during liver tumor regression in AG53 fish (Fig. 7E and 7F). Therefore, these observations indicated that some pathways other than p53 were also involved in apoptosis during tumor regression.
Discussion
Both zebrafish myca and mycb are oncogenic in hepatocytes
It is generally accepted that vertebrates underwent two rounds of whole-genome duplication during evolution from invertebrates, and that there was an additional round of teleost-specific whole-genome duplication about 350 Ma ago [21,22]. Phylogenetic analysis of the zebrafish myc family also indicated that the two paralogs, myca and mycb, arose from a common ancestor in the last whole-genome duplication [23]. Although the expression of both is found in mitosis hotspots during development [13], many differences exist between the two paralogs. For example, myca is mainly expressed in the brain while mycb is expressed in lateral line neuromasts [13]. In the ciliary marginal zone, myca and several other myc family genes are required for the maintenance of continuous cell replacement; in contrast, no mycb expression is found in this process [24]. We also found that mycb expression was much higher than myca in adult fish liver (S2A Fig.) and that their subcellular localization might differ too (S2B Fig.). All these findings suggest that myca and mycb could have different physiological functions. However, overexpression of the two myc genes in our transgenic models indicated that both have equivalent cellular function in oncogenesis. The difference in the severity of tumor induction between the two myc oncogenes is likely due to the level of induced expression, as we observed increasing severity of tumor progression in both mycAG and mycBG transgenic lines with increasing mifepristone inducer; nevertheless, the possibility that myca is more potent in oncogenesis than mycb cannot be completely ruled out. It is also interesting to note that endogenous mycb expression was much lower in induced mycAG fish than in induced mycBG and Driver fish (S2A Fig.); this could be due to negative feedback control of endogenous myc expression by the high level of transgenic myc expression.
In the present study, we also found an apparent dosage-dependent induction of transgenic myc expression, which in turn caused increasing tumor severity (Fig. 1). Long-term induction (up to 8 months) with 2 μM mifepristone caused essentially 100% of mycAG fish to grow liver tumors, among which 5% were confirmed to have the multinodular HCC phenotype. The remaining fish also showed other malignant traits or signs of serious liver damage (Fig. 3). For example, ascites itself is an important clinical syndrome in late-stage human liver cancer, and at the histological level, the compact hepatocyte organization in the "Typical" phenotype and the neo-angiogenesis in the "Hypervascular" phenotype are also important properties of human HCC. Therefore, though not confirmed as HCC, these induced mycAG fish did show deterioration and transition into a malignant status.
Liver tumor progression in mycAG fish is accelerated by suppression of apoptosis with the tp53 M214K mutation
Different levels of mycAG expression also appear to have resulted in different tumor types, although all the mycAG fish exhibited similar phenotypes at the early tumor stage (before 1 mpi), such as a pseudoglandular tissue pattern and ascites (S3 and S4 Figs.). It was apparent that sustained high-level expression of myca was necessary for the development of advanced neoplasms in later stages. Myc-provoked apoptosis has been observed in many studies [18], and we also observed increased apoptosis in the liver tumors of mycAG fish (Fig. 2C1-C4). However, HCC still developed in our model eventually, suggesting that the oncogenicity in mycAG fish is robust enough to overcome the apoptotic effect and to maintain the progression of the malignant status. Moreover, it has also been found that in some tumor models, attenuating apoptosis is necessary for successful malignant transformation [25,26]. In this project we observed that introducing a homozygous tp53 M214K mutation, which significantly suppressed apoptosis, could accelerate liver tumor progression.
Liver tumor regression does not require Tp53
Reversible neoplasms have been reported in antisense-treated human cancer cell lines and in animal cancer models with conditional oncogene expression [27], including our previously reported transgenic zebrafish models with inducible expression of the Xiphophorus xmrk oncogene and mouse Myc, respectively [11,20]. This phenomenon, i.e. the dependency of the malignant status on sustained activation of a specific oncogene, is named "oncogene addiction" [27]. Addiction to Myc has also been reported in several previous studies. For example, in a tet-off mouse acute myeloid leukemia model, introduction of doxycycline led to suppression of transgenic MYC expression and caused regression of the tumor [28]. Similarly, in a Myc-induced mouse HCC model, inactivation of transgenic Myc also caused rapid tumor regression with increased apoptosis [29]. However, not all tumors can regress after elimination of the original oncogenic factor, especially when additional oncogenic mutations have occurred, as is essentially always the case in human cancers. For example, the mammary adenocarcinoma in a mouse model established by MYC overexpression could not regress after abolishing MYC expression because of a secondary spontaneous activating mutation in Kras [30].
In the mycAG and AG53 transgenic zebrafish reported here, inactivation of mycAG expression after withdrawal of mifepristone resulted in rapid tumor regression in both groups of fish, and all the hepatocytes reverted to a histologically normal appearance (Fig. 6). While a rise of apoptosis in mycAG fish during tumor regression was anticipated, consistent with several previous reports [20,28], it was unusual to observe a similar increase of apoptosis in AG53 fish. TP53 is a well-known tumor suppressor gene, and a common function of Tp53 is to induce apoptosis of damaged or tumorigenic cells for their elimination [19]. It has been previously reported, for a Myc transgenic mouse model with hematopoietic tumors, that tumor regression by inactivation of transgenic Myc requires Tp53, as only incomplete tumor regression was observed when Tp53 was lost [31]. In our AG53 fish model, although the homozygous tp53 M214K mutation could largely abolish apoptosis in the tumor progression stage (S4C Fig.), it seemed to have no effect during tumor regression. These observations suggested that the malignant status of hepatocytes is addicted to sustained overexpression of zebrafish myca but that tumor regression does not require the presence of Tp53. Since similar levels of apoptosis were observed in both mycAG and AG53 fish during the initial stage of tumor regression, it is likely that apoptosis during tumor regression is independent of the Tp53 pathway.
Generation of mycAG, mycBG and AG53 transgenic fish
This study involving zebrafish was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the National University of Singapore (Protocol Number: 096/12). All surgery was performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering. The liver-driver line, Tg(fabp10:LexPR; LexA:EGFP), was generated in a previous study [14]. Two effector lines, Tg(cryB:mCherry; LexA:EGFP-myca) and Tg(cryB:mCherry; LexA:EGFP-mycb), were generated in the present study using the constructs depicted in S1 Fig. Both transgenic lines were identified by visualization of mCherry expression in the lens under the cryB promoter. To enhance genome integration, the Ac/Ds transposon system was adopted [32], in which a DNA construct was co-injected with in vitro transcribed transposase mRNA into zebrafish embryos at the 1-2 cell stage. The injected embryos were then raised for founder screening, and transgenic F1 were confirmed by mCherry expression. From the F1 generation, both effector lines were maintained by crossing with the liver-driver fish, and double transgenic fish were selected based on mCherry expression in the lens (Effector) and constitutive GFP expression in the liver (Driver). Multiple myca and mycb effector transgenic lines were generated, and all produced obvious liver tumors at early stages (S2F Fig.); in this study, we followed only one transgenic line for each of these myc genes, and the double transgenic fish were named mycAG and mycBG accordingly in this report.
The mycAG fish with the homozygous tp53 M214K mutation were generated by crossing mycAG fish with tp53 M214K homozygous mutant fish [33]. The heterozygous offspring were incrossed, and homozygosity for tp53 M214K was selected by genotyping with the PCR primers indicated in S1 Table [34]. The mycAG fish in the homozygous tp53 M214K background, named AG53, were then maintained by incrossing for experiments.
Mifepristone treatment
Mifepristone (RU-486, Sigma-Aldrich #M8046) was first dissolved in ethanol and then diluted in fish water to the final concentrations. The treatment was conducted in petri dishes or 6-L tanks at a density of ~50 larvae or ~25 adults per dish or tank, respectively. The water with mifepristone was topped up every 3-4 days and totally changed every two weeks.
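For readers reproducing the treatment, the dilution follows the usual C1·V1 = C2·V2 relation. A small sketch, assuming a hypothetical 10 mM ethanol stock (the paper does not state the stock concentration):

```python
# Simple C1*V1 = C2*V2 dilution helper for preparing the mifepristone
# working concentrations from an ethanol stock. The 10 mM stock
# concentration is an assumed example, not a value given in the paper.

def stock_volume_ul(final_um, tank_volume_l, stock_mm=10.0):
    """Microliters of stock (in mM) needed to reach final_um (in uM)
    in tank_volume_l liters of fish water."""
    final_mm = final_um / 1000.0              # uM -> mM
    volume_ml = final_mm * (tank_volume_l * 1000.0) / stock_mm
    return volume_ml * 1000.0                 # mL -> uL

# Example: 2 uM final concentration in a 6-L tank from a 10 mM stock.
print(round(stock_volume_ul(2.0, 6.0)))  # -> 1200 uL of stock
```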
Reverse transcription-quantitative PCR (RT-qPCR)
Total RNA was extracted using TRIzol (Invitrogen #15596-018) and reverse-transcribed into cDNA. To distinguish the endogenous myca and mycb mRNAs from the transgenic mycAG and mycBG mRNAs, the myca and mycb primers targeted the coding region and 3′ untranslated regions, while the mycAG and mycBG primers targeted the myc coding region and the GFP region. β-actin was used as an internal control. The primer sequences are presented in S1 Table.
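Relative expression values of this kind are conventionally derived with the Livak 2^-ΔΔCt method against the reference gene; the sketch below illustrates that calculation with hypothetical Ct values, since the paper does not list its raw Ct data.

```python
# Sketch of the standard 2^-ddCt calculation behind RT-qPCR results of this
# kind, with beta-actin as the internal (reference) control. Ct values here
# are hypothetical; the paper reports expression relative to endogenous myca.

def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene versus a reference sample,
    normalized to beta-actin (Livak 2^-ddCt method)."""
    d_ct_sample = ct_target - ct_actin
    d_ct_ref = ct_target_ref - ct_actin_ref
    return 2 ** -(d_ct_sample - d_ct_ref)

# Example: transgenic mycAG in induced liver vs endogenous myca in Driver liver.
print(round(relative_expression(20.0, 15.0, 26.0, 15.5), 1))  # -> ~45-fold
```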
TUNEL assay
The TUNEL assay was conducted with the ApopTag Plus In Situ Apoptosis Fluorescein Detection Kit (Chemicon International, #S7111). After rehydration and treatment with 20 μg/ml proteinase K in PBS at room temperature, the sections were treated with TdT enzyme and incubated with alkaline phosphatase-conjugated anti-DIG antibody. Next, the slides were equilibrated in pH 9.5 buffer (0.1 M Tris-HCl/50 mM MgCl2/10 mM NaCl/0.1% Tween 20), and finally the signal was developed using nitro-blue tetrazolium/5-bromo-4-chloro-3′-indolyl phosphate in pH 9.5 buffer.
Statistical analyses
T-tests were used for all assays in the present study, including cell counting in DAPI, PCNA and TUNEL staining, measurement of 2D liver size and RT-qPCR. Statistical significance is indicated in the figure legends.
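A minimal sketch of the two-sample t-test applied to such count data, using invented TUNEL counts and SciPy's standard ttest_ind (the paper does not name its statistics software):

```python
# Sketch of the two-sample t-test used for count data (e.g. TUNEL-positive
# cells per field). The counts below are invented for illustration.

from scipy import stats

mycag_counts = [12, 15, 9, 14]     # hypothetical apoptotic cells per field
driver_counts = [2, 1, 3, 2]

t_stat, p_value = stats.ttest_ind(mycag_counts, driver_counts)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.01 would be marked ##
```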
Figure 1. Liver tumor progression in mycAG and mycBG fish. MycAG and mycBG fish were induced with mifepristone at increasing concentrations, as indicated at the top of the figure, from 1 mpf and sacrificed at 2 mpi (3 mpf). (A1-A7) Exterior observation of fish from each treatment group. (B1-B7) Gross observation of liver tumors after removal of the abdominal wall. (C1-C7) The same views as in (B1-B7) showing GFP expression, which illustrates the shape of the livers. In B2 and C2, images from uninduced mycAG fish are included as insets for comparison; there was no enlarged liver (B2) and no visible GFP expression in the liver, although green fluorescent signal was observed in the gut of some fish (C2). (D1-D7) H&E staining of liver sections. doi:10.1371/journal.pone.0117249.g001
Figure 2. Rapid induction of liver tumors by induced mycAG expression. MycAG fish were induced with 2 μM mifepristone from 1 mpf and sampled at 10, 20 and 30 dpi for gross observation as indicated in the figure. (A1-A4) Gross observation of liver tumors after removal of the abdominal wall. (B1-B4) The same views as in (A1-A4) showing GFP expression, which illustrates the shape of the livers. (C1-C4) H&E staining to show cellular alteration in the fish liver. (D1-D4) PCNA staining by immunocytochemistry to show cell proliferation. (E1-E4) TUNEL assay to reveal apoptosis in the liver. (F) Quantitative analyses of cell proliferation and apoptosis. ## indicates a highly significant difference (P<0.01) in proliferation compared with that in the Driver.
Figure 3. Diverse liver tumor phenotypes of mycAG zebrafish at 6 mpi. MycAG fish were induced with 2 μM mifepristone from 1 mpf and sampled at 7 mpf (6 mpi) for gross observation and histological examination. Four phenotypes were observed: Small, Typical, Hypervascular and Ascites, as indicated at the top of the figure with total numbers and percentages. (A1-A5) Exterior observation of each phenotype. (B1-B5) Gross observation of liver tumors after removal of the body wall. (C1-C5) The same views as in (B1-B5) showing GFP expression, which illustrates the shape of the livers. (D1-D5) H&E staining of liver sections. The scale bar in (D1) represents the magnification for all of (D1-D4) and the blow-up area in (D5). (E) Transgenic mycAG expression in each phenotype, measured in the liver by RT-qPCR; the level of expression is relative to baseline myca expression in the control Driver fish. doi:10.1371/journal.pone.0117249.g003
Figure 4. Development of multinodular HCC at a late stage of mifepristone induction. MycAG fish were induced with 2 μM mifepristone from 1 mpf and sampled at 9 mpf (8 mpi) for gross observation, histological examination and immunocytochemistry. (A-D) Gross observation of multinodular HCC in two examples of mycAG fish in both the bright field and GFP channels. (E) Confocal microscope image showing GFP-positive hepatocytes and GFP-negative cirrhotic stroma. (F,G) H&E staining of multinodular HCC sections at two different magnifications. (H,I) Immunocytochemical staining of E-cadherin in liver sections from a Driver fish (H) and a mycAG fish (I). (J,K) Immunocytochemical staining of β-catenin in liver sections from a Driver fish (J) and a mycAG fish (K). doi:10.1371/journal.pone.0117249.g004
Figure 5. Accelerated liver tumor progression of mycAG fish in the homozygous tp53 M214K background (AG53). MycAG and AG53 fish were induced with 2 μM mifepristone from 1 mpf and sampled at 0.5 mpi and 1.5 mpi for gross observation and histological examination, as indicated at the top of the figure. (A1-A5) Exterior observation of each phenotype. (B1-B5) Gross observation of liver tumors after removal of the body wall. (C1-C5) The same views as in (B1-B5) showing GFP expression, which illustrates the shape of the livers. (D1-D5) H&E staining of liver sections. doi:10.1371/journal.pone.0117249.g005
Figure 7. Increase of apoptosis after mifepristone withdrawal. MycAG and AG53 fish were induced with 2 μM mifepristone from 1 mpf, and mifepristone was removed after 6 months of induction for tumor regression. The fish were then collected for the TUNEL assay. (A-E) TUNEL assay for apoptosis on liver sections from different types of fish at different time points, as indicated. dpr, days post-regression. (F) Quantification of apoptotic cells in liver sections. # and ## indicate significant differences with p-values of 0.05 and 0.01, respectively, compared to the Driver at 0 dpr. § and §§ indicate significant differences with p-values of 0.05 and 0.01, respectively, compared to the Driver at 8 dpr. ※ indicates a significant difference (P<0.05) compared with AG at 0 dpr. doi:10.1371/journal.pone.0117249.g007
S2 Fig. Effects of mycAG and mycBG expression in zebrafish liver. (A) Expression of transgenic mycAG and mycBG in comparison with expression of the endogenous myca and mycb genes. Liver RNA from 1 mpi (2 mpf) fish treated with 2 μM mifepristone was analysed by RT-qPCR; expression values are relative to the level of endogenous myca mRNA, which is arbitrarily set as 1. (B) Subcellular localization of mycAG and mycBG fusion proteins. Fish were all treated with 2 μM mifepristone, and liver tissues were collected at 1 mpi (2 mpf) for cryosection. The GFP signal was recorded by confocal microscopy with the same exposure time. Nuclei were stained with DAPI and recorded in the blue channel. Wild-type liver was used as a negative control. (C) Dosage-dependent effect of mifepristone induction on GFP or Myc-GFP expression and liver size in Driver, mycAG and mycBG larvae. These transgenic fish were induced with mifepristone at different concentrations from 3 dpf and photographed at 5 dpi (8 dpf). (D) Dosage-dependent increase of mycAG and mycBG expression as measured by RT-qPCR. (E) Quantification of liver size based on 2D images at 4 μM mifepristone. The size of larval livers was measured according to the GFP signal area. # and ## indicate significant differences with p-values of 0.05 and 0.01, respectively, compared with the liver size in the Driver. (F) Induction of liver tumors in other mycAG transgenic families. Representative fish from three other mycAG transgenic families are shown with gross observations of liver tumors (upper panels) and GFP expression (lower panels) to illustrate the liver tissues, as described in Fig. 1B and 1C. (TIF)
S3 Fig. Ascites in mycAG zebrafish. MycAG fish were induced with 2 μM mifepristone from 1 mpf and sampled at 7 mpf (6 mpi) for gross observation and histological examination. (A) Gross observation of a mycAG fish with the ascites phenotype against a monotone background. (B) Gross observation of the same fish as in (A) against a dark background to show the transparent belly. (C) H&E staining of the fish presented in (A,B). (D,E) Isolated liver tumor from another mycAG fish with ascites, in GFP view (D) and against a dark background to show the transparent, foam-like structure (E). Liquid was observed in cysts in the liver. (F) H&E staining of the liver tumor presented in (D,E). (PDF)
S4 Fig. Reduction of apoptosis and acceleration of liver tumor progression in mifepristone-induced mycAG fish with the tp53 M214K mutation (AG53). (A) Comparison of liver size between mycAG and AG53 larvae. Larvae were induced with 2 μM mifepristone from 3 dpf and photographed at 5 dpi (8 dpf). Liver size was measured based on 2D GFP images as previously described [35]. ## and # indicate significant differences with p-values of 0.01 and 0.05, respectively, compared with the liver size in Driver53 (Driver fish in the tp53 M214K mutation background). §§ and § indicate significant differences with p-values of 0.01 and 0.05, respectively, compared with the liver size in the Driver. ++ indicates a significant difference (P < 0.01) compared with the liver size in AG53. (B) Induction of mycAG expression in mycAG and AG53 fish. Total RNA was extracted from 5 dpi (8 dpf) larvae treated with 2 μM mifepristone for RT-qPCR analyses. Each group had three biological replicates, and there was no significant effect of the tp53 M214K mutation on the induction of mycAG mRNA. # indicates a significant difference compared with endogenous myca expression in Driver fish (p < 0.05). (C) Apoptosis in AG53 and mycAG fish livers. Liver sections from 2 mpi (3 mpf) Driver, mycAG and AG53 fish were used for TUNEL staining. Each group had 3 to 4 biological replicates; apoptotic cells were counted and are represented in columns. ## indicates a significant difference compared with the Driver (P < 0.01). §§ indicates a significant difference between mycAG and AG53 fish (P < 0.01). (PDF)
Effects of Endothelin A Receptor Antagonist BQ123 on Femoral Artery Pressure and Pulmonary Artery Pressure in Broiler Chickens
Endothelin-1 (ET-1) is an important factor in the regulation of cardiovascular tone in humans and mammals, but the biological function of ET-1 in the avian vascular system has not been determined. The purpose of this study was to characterize the role of endogenous ET-1 in the vascular system of poultry by investigating the effect of the endothelin A receptor (ETAR) antagonist BQ123 on the femoral artery pressure (FAP) and the pulmonary artery pressure (PAP) in broiler chickens. First, we found that plasma and lung homogenate ET-1 levels both increased with age over the seven-week life cycle of broiler chickens. Second, 60 min after intravenous injection, BQ123 (0.4 μg kg⁻¹ and 2.0 μg kg⁻¹, respectively) induced a significant reduction in FAP and PAP (p<0.05). Third, chronic infusion of BQ123 (2.0 μg kg⁻¹ each time, two times a day) into the abdominal cavity led to a significant decrease in the systolic pressure of the femoral (p<0.05) and pulmonary arteries (p<0.01) in broiler chickens at 7 and 14 days after treatment. Taken together, the ETAR antagonist BQ123 led to a significant reduction of FAP and PAP, which suggests that endogenous ET-1 may be involved in the maintenance and regulation of systemic and pulmonary pressure in broiler chickens.
INTRODUCTION
Vascular endothelium plays an important role in the regulation of cardiovascular tone (Takahashi et al., 1998). It is not only an important barrier and semipermeable membrane, but also an important metabolic and endocrine organ. The vascular endothelium controls vasomotor tone and microvascular flow and regulates trafficking of nutrients and several biologically active molecules (Langouche et al., 2005). Under normal conditions, the vascular endothelium secretes a variety of vasoactive substances, including endothelin (ET) (Mukai et al., 2006). ET, a 21-amino acid peptide with potent, strong and long-lasting vasoconstrictor activity, is important in the control of systemic blood pressure and/or local blood flow (Yanagisawa et al., 1988). ETs are a family of peptide hormones with three members, endothelin-1 (ET-1), endothelin-2 (ET-2) and endothelin-3 (ET-3) (Inoue et al., 1989), with ET-1 appearing to be the most important in vascular regulation (Levin, 1995). The effects of the ET peptides are mainly mediated through two known distinct specific ET receptors, classified as type A and type B receptors (ETAR and ETBR). The ETAR has a greater affinity for ET-1 than for the other two ETs, whereas the ETBR displays similar affinity for all three ETs (Sakurai et al., 1992). ETAR is mainly located on vascular smooth muscle cells and mediates the vasoconstrictor and mitogenic effects of ET-1, while ETBR is mainly located on vascular endothelium and mediates vasodilator activity (Sakurai et al., 1992). Previous work has provided evidence of a high degree of similarity between the ET systems of birds and mammals (Kempf et al., 1998). The ETAR is conserved between birds and mammals, since the complete sequence of the chicken ETAR (cETAR) displays a high identity with the sequence of the mammalian ETAR (Kempf et al., 1998). At least two distinct types of ET receptors coexist on chick cardiac membranes: one has higher affinity for ET-1 and ET-2 than for ET-3, and the other has a preference for ET-3 (Watanabe et al., 1989). The cDNA of the chick ETAR has been cloned, sequenced and expressed, and its affinity for ET antagonists is very similar to that shown by its mammalian counterparts (Kempf et al., 1998).
The ET system has been found to be involved in multiple physiological functions related to the nervous, renal, cardiovascular, respiratory, gastrointestinal and endocrine systems (Gglie et al., 2004). Because of its vasoconstrictive and mitogenic properties, ET-1 affects cardiovascular, pulmonary and renal function, and may be involved in the development of several diseases such as atherosclerosis, myocardial infarction, renal disease and systemic and pulmonary hypertension in humans (Ferri et al., 1995). The ascites syndrome (AS) in broiler chickens, also known as pulmonary hypertension syndrome (PHS), is characterized by pulmonary hypertension and right ventricular hypertrophy, and is pathophysiologically similar to pulmonary hypertension (PH) in humans and mammals.
A recent study showed that ET-1 is associated with the development of PH in broilers (Ying et al., 2005).
Although the role of the ET system in the regulation of cardiovascular function has been well characterized in humans and mammals, the biological function of ET-1 in the avian vascular system has not been identified. The purpose of this study is to characterize the role of endogenous ET-1 in the poultry vascular system by investigating the effect of a highly selective ETAR antagonist, BQ123, on the femoral artery pressure (FAP) and the pulmonary artery pressure (PAP) in broiler chickens. This study could also provide basic data for further investigation of the relationship between PHS and the ET system.
Animals and drug preparation
One-day-old commercial AA male broiler chickens (Beijing Huadu Breeding Co. Ltd., Beijing, China) were maintained in environmental chambers at a normal temperature (23±1°C) and relative humidity (60±1%). During the brooding period, continuous lighting was used for the first 3 days. Chickens were then exposed to 23 h light and 1 h darkness from day 4 to 7, and 16 h light and 8 h darkness from day 8 to 49. Ambient temperature was set at 32±1°C on day 1 and then gradually decreased to 23±1°C by day 15. Birds were given free access to water and a commercial chick starter diet and grower diet (as shown in Table 1). Water and food were available at all times. Other standard experimental protocols included routine immunization and monitoring for overt signs of disease.
Dimethyl sulfoxide (DMSO; Sigma Chemical Co., St Louis, MO, USA) was diluted in normal saline. BQ123 (Peninsula Laboratories, Belmont, CA, USA) was dissolved in 0.3% DMSO. All other chemicals were of analytical grade. Twice-distilled water that had been de-ionized through a Milli-Q system was used in all experiments.
Measurement of ET-1
We determined the levels of plasma and lung homogenate ET-1 in broiler chickens by radioimmunoassay (Ferri et al., 1995). Blood samples for the plasma ET-1 assay were collected into pre-chilled tubes containing EDTA-Na2 (10%) and aprotinin (500 KIU/ml blood) and promptly centrifuged at 1,600×g at 4°C for 15 min. Supernatant was pipetted into polypropylene tubes and stored at −80°C until assayed. After measurement of the pressure, chickens were immediately killed by cervical dislocation. The lungs were dissected, snap-frozen in liquid nitrogen and stored at −80°C until processed for assays. Frozen lung tissue was homogenized at a ratio of 100 mg tissue/ml normal saline. The homogenate was centrifuged at 1,600×g at 4°C for 15 min to remove crude debris, and the supernatant was saved as samples for the assay. A commercial radioimmunoassay kit (Peninsula Laboratories, Belmont, CA, USA) was used to measure ET-1 concentration. Cross-reactivity of the system for endothelin-1 is 100%, but less than 7% for both endothelin-2 and endothelin-3 according to the manufacturer. Intra- and interassay coefficients of variation in our laboratory were below 10%. Recovery was 80%. Because of the high degree of similarity of the ET system in birds and mammals, the same method can be used to measure the concentrations of plasma and lung homogenate ET-1 in broiler chickens and mammals.
Measurement of FAP and PAP
In this study, FAP and PAP were used to represent the systemic and pulmonary vascular pressures, respectively. A modified method using a right cardiac catheter was adopted to determine the pulmonary artery systolic pressure (PASP) and the pulmonary artery diastolic pressure (PADP) just before killing (Guthrie et al., 1987). Chickens were restrained in a dorsal position on the operating table and locally anesthetized with 5% procaine chloride in the middle of the right neck and the inside of the left thigh. The right jugular vein was isolated and a polyethylene plastic catheter (0.9 mm external diameter) was passed into the pulmonary artery to monitor PASP and PADP continually. Meanwhile, the left femoral artery was isolated and another catheter was placed into the left femoral artery for monitoring the femoral artery systolic pressure (FASP) and the femoral artery diastolic pressure (FADP). All catheters were flushed with sodium citrate in 0.9% sterile saline to avoid clotting. Mechanical responses were digitized, displayed, analyzed, stored and graphed using the Biopac System (BIOPAC Inc., Goleta, CA, USA). Anesthesia was supplemented with 5% procaine chloride as needed. The sensors were placed at the same level as the birds' hearts.
Experimental protocols
To determine the effect of short-term ETAR blockade on FAP and PAP in normal broiler chickens, 28-day-old birds were administered 0.3% DMSO (n = 10, as the control group), 0.4 µg kg⁻¹ BQ123 (n = 10) or 2.0 µg kg⁻¹ BQ123 (n = 10). Ten minutes after baseline hemodynamics measurement, the different doses of BQ123 were infused into the wing vein. FASP, FADP, PASP and PADP were continuously recorded for 60 min after the infusion. To observe the changes of FAP and PAP after chronic administration of the ETAR antagonist, birds (n = 10) were abdominally injected with 2.0 µg kg⁻¹ BQ123 (two times a day) at 16 to 30 days of age. The same volume of 0.3% DMSO was administered into the abdominal cavity in the control group (n = 10). At 23 and 30 days of age, hemodynamic indexes were measured. The doses of BQ123 were selected following preliminary studies based on the responses to several doses (ranging from 0.1 µg kg⁻¹ to 2.0 µg kg⁻¹, data not shown).
The study was performed in accordance with local ethical guidelines.
Statistical analysis
All data are presented as mean±SD. Either Student's t-test or one-way ANOVA with multiple comparison methods, using the SPSS for Windows™ package (SPSS Inc., Chicago, IL, USA), was used for statistical analysis.
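For illustration only (this sketch is not the authors' analysis script, and the pressure values below are synthetic placeholders), a comparison of one hemodynamic index across the control and two BQ123 dose groups might look like this in Python:

```python
# Illustrative sketch of the one-way ANOVA + pairwise comparisons described above.
# The pressure values are synthetic placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(140, 8, size=10)    # FASP (mmHg), 0.3% DMSO group, n = 10
low_dose = rng.normal(132, 8, size=10)   # 0.4 ug/kg BQ123 group
high_dose = rng.normal(125, 8, size=10)  # 2.0 ug/kg BQ123 group

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, low_dose, high_dose)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Follow-up pairwise t-tests against control (a multiple-comparison
# correction, e.g. Bonferroni, would be applied in practice)
for name, grp in [("low dose", low_dose), ("high dose", high_dose)]:
    t, p = stats.ttest_ind(control, grp)
    print(f"control vs {name}: t = {t:.2f}, p = {p:.4f}")
```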
Changes of plasma and lung homogenate ET-1 levels
As shown in Table 2, plasma ET-1 (range 40.82±1.90 pg ml⁻¹ to 93.70±1.19 pg ml⁻¹) and lung homogenate ET-1 levels (range 503.84±27.40 pg g⁻¹ to 701.04±16.18 pg g⁻¹ wet weight) both increased with age over the seven-week life cycle of broiler chickens. The ET-1 concentrations in plasma were far lower than those in lung homogenate. Intravenous injection of BQ123 (0.4 µg kg⁻¹ and 2.0 µg kg⁻¹) had no significant effect on plasma and lung homogenate ET-1 levels at 60 min after administration (data not shown). However, as shown in Table 3, chronic intraperitoneal injection of BQ123 (2.0 µg kg⁻¹) led to a significant reduction of the lung homogenate ET-1 level at 7 and 14 days after treatment (p<0.05). The plasma ET-1 level did not differ significantly from that of the control group.
Effects of intravenous infusion of BQ123 on FAP and PAP
As shown in Figures 1 and 2, intravenous infusion of BQ123 (0.4 µg kg⁻¹ and 2.0 µg kg⁻¹) caused significant decreases in FASP, FADP, PASP and PADP (p<0.05). FASP and FADP reached their lowest values at 30 min after infusion of both doses of BQ123 and returned to baseline at 60 min after infusion. The high dose of BQ123 (2.0 µg kg⁻¹) caused a greater decrease in pressure than the low dose (0.4 µg kg⁻¹) at corresponding time points. The changes of PASP and PADP were similar to those of FASP and FADP, except that PASP and PADP reached their lowest values at 20 min after infusion.
Effects of chronic administration of BQ123 on FAP and PAP
As shown in Figure 3, at 7 and 14 days after chronic intraperitoneal injection of 2.0 µg kg⁻¹ BQ123, FASP (p<0.05) and PASP (p<0.01) decreased significantly, and no obvious differences between treatment and control groups were found in FADP and PADP.
DISCUSSION
Under normal physiological conditions, ET-1 is not a circulating hormone; rather, it acts as an autocrine/paracrine factor at multiple sites. The lung represents a primary target for ET-1 effects and is a special site for ET-1 metabolic pathways (Gglie et al., 2004). Since it is necessary to determine the levels of plasma and lung homogenate ET-1 for studying the function of endogenous ET-1, we measured the ET-1 levels in broiler chickens and obtained the following findings. First, plasma and lung homogenate ET-1 levels both increased with age over the seven-week life cycle of broiler chickens, consistent with the previous report of Bo et al. (2004), who also measured ET-1 concentrations in broiler chickens. In addition, an earlier study (Battistelli et al., 1996) demonstrated that plasma ET concentrations also increase with age in humans. Second, the ET-1 level of lung homogenate was far higher than that of plasma, which suggested that the lung might also be an important target for ET-1 effects in broiler chickens. Third, the plasma ET-1 levels (above 41 pg ml⁻¹) of broiler chickens were much higher than those (all below 10 pg ml⁻¹) of humans (Apostolopoulou et al., 2003; Gglie et al., 2004) and other mammals (rabbit, Gratton et al., 1997; canine, Willtte et al., 1997; rat, Ronald et al., 2000).

Figure 1. Effects of intravenous infusion of BQ123 on FASP (Figure 1A) and FADP (Figure 1B). FASP and FADP continued to decrease after intravenous infusion of BQ123 at 0.4 µg kg⁻¹ (□, n = 10) or 2.0 µg kg⁻¹ (∆, n = 10). BQ123 at 2.0 µg kg⁻¹ caused a greater decrease in pressure than at 0.4 µg kg⁻¹ at the same time point. Chickens in the control group (•, n = 10) received an infusion of 0.3% DMSO. All values are means±SD. FASP, femoral artery systolic pressure; FADP, femoral artery diastolic pressure.

Figure 2. Effects of intravenous infusion of BQ123 on PASP (Figure 2A) and PADP (Figure 2B). PASP and PADP significantly decreased after intravenous infusion of BQ123 at 0.4 µg kg⁻¹ (□, n = 10) or 2.0 µg kg⁻¹ (∆, n = 10). BQ123 at 2.0 µg kg⁻¹ caused a greater decrease in pressure than BQ123 at 0.4 µg kg⁻¹ at the same time point. Chickens in the control group (•, n = 10) received an infusion of 0.3% DMSO. All values are means±SD. PASP, pulmonary artery systolic pressure; PADP, pulmonary artery diastolic pressure.
The role of high plasma ET-1 levels in broiler chickens remains to be further investigated. Many studies have shown that endogenous ET-1 contributes to the maintenance of basal vascular tone and blood pressure in humans and mammals (Haynes et al., 1994; Haynes et al., 1996). The vasoconstrictor effect of ET-1 is induced by ETAR (Yanagisawa and Masaki, 1989). BQ123 (-D-Asp-L-Pro-D-Val-L-Leu-D-Trp-) was found to selectively inhibit the binding of ET-1 to the ETAR and to functionally antagonize ET-1-induced vasoconstriction. It is a useful tool to assess the physiological and pathophysiological roles of ET-1 and ETAR. In previous studies, BQ123 was mainly used to study the role of ET-1 in regulating cardiovascular tone in mammals. In this paper we used two experimental protocols to indirectly characterize the role of endogenous ET-1 in the avian vascular system. In the first experiment, intravenous infusion of BQ123 was conducted to observe the transient changes of FAP and PAP by directly antagonizing the pressor effect of endogenous ET-1. Our data showed that intravenous infusion of BQ123 did not cause significant changes in plasma and lung homogenate ET-1 levels, but led to significant decreases of FASP, FADP, PASP and PADP in broiler chickens. Furthermore, the high dose of BQ123 (2.0 µg kg⁻¹) induced a greater reduction in systemic and pulmonary pressure compared with the low dose (0.4 µg kg⁻¹), and a longer time was needed to return to baseline in the high-dose group. Thus BQ123 reduced blood pressure in a dose-sensitive manner. Related studies on humans and mammals obtained similar results. Bigaud and Pelron (1992) reported a decrease in femoral arterial blood pressure in anaesthetized rats following intravenous administration of BQ123. BQ123 also antagonized ET-1-induced contraction of the canine pulmonary artery (Willtte et al., 1997).
In the second experiment, chronic intraperitoneal infusion of BQ123 was used to observe the effect of BQ123 on FAP and PAP in broiler chickens with long-term adaptation to the antagonism. We found that chronic intraperitoneal administration of BQ123 led to a significant decrease of PASP and FASP, and the change of PAP was more distinct than that of FAP. Although the concentration of lung homogenate ET-1 decreased significantly in broiler chickens at two weeks after treatment, Table 2 and Figure 3 show that the degree of pressure reduction was much greater than that of the ET-1 level decrease. These results indicate that the reduction of FAP and PAP might not be completely induced by the decreased ET-1 levels, and was probably in part secondary to the antagonism by BQ123.
It is recognized that vascular endothelial dysfunction contributes to the development and perpetuation of PH by disturbing the balances between vasodilating and vasoconstrictive forces, and between proliferative and antiproliferative forces. ET-1, binding to the ETAR, has been shown to be involved in the pathogenesis of PH. In mammals, ET-1 receptor antagonists not only reduced PH, but also resulted in reversal of vascular remodeling and right ventricular hypertrophy (Langleben et al., 1999). Ascites syndrome is a condition in which the abdominal cavity is filled with serous fluid, leading to death or potential condemnation (Bo et al., 2005). It is characterized by chronically elevated pulmonary pressure, right ventricular hypertrophy and failure, and central venous congestion, which eventually results in the accumulation of fluid in the abdominal cavity (Kai et al., 2006). Ascites has been a worldwide source of concern to the poultry industry for several decades (Ipek et al., 2006). Therefore, our study can provide basic data for further investigation of the relationship between the ET system and ascites in broiler chickens, which is pathophysiologically similar to PH in humans and mammals.
In conclusion, the ETAR antagonist BQ123 leads to a significant reduction of FAP and PAP in broiler chickens, which suggests that endogenous ET-1 may be involved in the maintenance and regulation of systemic and pulmonary pressure in broiler chickens.
Table 1. Percentage of diet composition (% unless otherwise stated).
Cytokine-primed umbilical cord mesenchymal stem cells enhanced therapeutic effects of extracellular vesicles on osteoarthritic chondrocytes
In recent years, extracellular vesicles (EVs) secreted by mesenchymal stem cells (MSCs) have emerged as a potential cell-free therapy against osteoarthritis (OA). Thus, we investigated the therapeutic effects of EVs released by cytokine-primed umbilical cord-derived MSCs (UCMSCs) on osteoarthritic chondrocyte physiology. Priming UCMSCs individually with transforming growth factor beta (TGFβ), interferon alpha (IFNα), or tumor necrosis factor alpha (TNFα) significantly reduced the sorting into EVs of miR-181b-3p, but not of miR-320a-3p, two negative regulators of chondrocyte regeneration. However, the EV treatment did not show any significant effect on chondrocyte proliferation. Meanwhile, EVs from both non-priming and cytokine-primed UCMSCs induced migration at later time points of measurement. Moreover, TGFβ-primed UCMSCs secreted EVs that could upregulate the expression of chondrogenesis markers (COL2 and ACAN) and downregulate fibrotic markers (COL1 and RUNX2) in chondrocytes. Hence, priming UCMSCs with cytokines can deliver selective therapeutic effects of EV treatment in OA and chondrocyte-related disorders.
Introduction
Extracellular vesicles (EVs) are nano-sized, lipid membrane-enclosed particles that modulate the physiological conditions of the recipient cells (1). By effectively delivering a wide range of bioactive molecules involved in critical signaling pathways associated with apoptosis, proliferation, migration, extracellular matrix (ECM) synthesis, cartilage regeneration, and inflammation management, EVs have been studied for their therapeutic effects on several cartilage-related diseases (2). Recently, multiple approaches have been employed to enhance the therapeutic effect and targetability of EV delivery, including engineering the secreting cells, loading therapeutic molecules into naturally secreted vesicles (3), and conjugating the vesicles with targeting ligands (4). Additionally, the therapeutic cargo of EVs secreted by mesenchymal stem cells (MSCs) varies depending on the MSC tissue source (5). Since MSCs are very sensitive to environmental conditions, priming these cells with cytokines as supplements in the culture media can influence the bioactive molecules packed into the derived EVs, thereby affecting the biological activities of the vesicles (6, 7).
In this study, we primed MSCs originating from the umbilical cord (UCMSCs) with anti-inflammatory cytokines (transforming growth factor beta, TGFβ, and interferon alpha, IFNα) and an inflammatory cytokine (tumor necrosis factor alpha, TNFα), which are linked with osteoarthritis (OA) pathogenesis. In healthy cartilage, TGFβ stimulates chondrocyte proliferation while suppressing chondrocyte hypertrophy and maturation, as well as promoting chondrocytes to synthesize ECM components (8). Additionally, the inhibition of TGFβ signaling leads to chondrocyte terminal differentiation and the early onset of OA (9). Another stimulant in the anti-inflammatory group, IFNα, plays a vital role in autoimmunity and inflammation and can effectively protect against antigen-induced arthritis by inhibiting proinflammatory cytokine (interleukin 1β (IL-1β), IL-6, IL-17, TNF, IL-12, and IFNγ) production while inducing TGFβ synthesis (10). Moreover, injection of IFNα into the synovial fluid promotes the generation of functional antagonists, such as interleukin 1 receptor antagonist (IL-1Ra), soluble tumor necrosis factor receptors (sTNFR), and osteoprotegerin (OPG), for the known OA-inducing factors IL-1, TNF, and osteoprotegerin ligand (11). However, direct administration of these two cytokines to patients frequently harms nearby tissues, such as the synovial membrane or subchondral bone, as well as general health, causing headache, malaise, fever, and even depression (12). The inflammatory cytokine TNFα, a crucial catabolic factor for cartilage, promotes synovial fibroblasts to release matrix metalloproteinases (MMPs), resulting in cartilage destruction during OA progression (13). This cytokine is also able to signal chondrocyte apoptosis, leading to a more severe OA phenotype (13). Moreover, priming UCMSCs with cytokines enhances the anti-inflammatory and immunomodulatory potential of the secreted EVs (14)(15)(16). TNFα stimulation was shown to induce the expression of immunosuppressive factors in the parental MSCs, which produced exosomes that can modulate M2/M1 macrophage differentiation (14). The molecular content changes in EVs derived from cytokine-stimulated MSCs can interfere with inflammation via the PGE2/COX mechanism (15). Although the immunomodulatory effect of EVs from cytokine-primed cells in the context of OA has been reported before (14, 17), no publication was found indicating their influence on chondrocytes.
In OA, several miRNAs found in EVs have been demonstrated to regulate key signaling pathways involved in ECM maintenance, chondrocyte proliferation, migration, apoptosis, and inflammation (2). Thus, modulating miRNA composition might directly influence the EV therapeutic effect. In this study, we focused on two candidate miRNAs, miR-320a-3p and miR-181b-3p, which are involved in cartilage homeostasis. Previous studies showed that miR-320a plays essential roles in the secretion of matrix degradation factors (18) and chondrocyte proliferation (19) in OA models. Although not many studies have focused on miR-181b-3p, it has been described to inhibit proliferation as well as promote apoptosis of chondrocytes in OA (20).
As described, different EV sub-populations carry different sets of bioactive molecules (21), for instance their miRNA content; thus, we assumed that they would have distinct impacts on chondrocytes. We further hypothesized that priming UCMSCs with the cytokines could alter the miR-181b-3p and miR-320a-3p levels in the secreted EVs, thereby modulating the effect of EVs on chondrocyte proliferation, migration, and marker expression.
EVs isolated from UCMSC culture medium were subjected to morphology analysis by transmission electron microscopy (TEM). Three EV sub-populations, including apoptotic bodies (ABs), microvesicles (MVs), and exosomes (EXs), were observed with distinguishable shapes and sizes (Figures 1C1-C3). ABs showed variable shapes with diameters of approximately 500 nm to 2000 nm, and they are packed within a rough membrane (Figure 1C1). MVs had variable membrane-bound morphologies with uneven surfaces and diameters ranging from 100 nm to 1 µm (Figure 1C2). EXs exhibited a typical cup-shaped morphology with sizes ranging from 40 to 200 nm (Figure 1C3).
Additionally, the isolated EVs expressed standard EV protein markers (CD9 and CD63). As a control indicator, all three EV populations strongly expressed the internal reference protein of GAPDH. For general EV marker expression, CD9 was present abundantly in MVs and EXs and lightly in ABs. EXs also expressed a typical exosomal marker of CD63, which was absent in MVs and ABs ( Figure 1D). The morphology and protein marker analysis confirmed the identity of three separated EV populations from UCMSC conditioned media.
Differential levels of microRNA 320a-3p and 181b-3p in EVs secreted by cytokine-primed UCMSCs

We measured the expression of selected miRNAs associated with OA pathogenesis present in UCMSCs and the three secreted EV populations (ABs, MVs, and EXs) from normal and cytokine-primed conditions. Using qRT-PCR, we quantified the levels of the two candidate miRNAs, miR-181b-3p and miR-320a-3p. Generally, both miR-181b-3p and miR-320a-3p were detected in all UCMSCs and isolated EV sub-populations of ABs, MVs, and EXs from the different culture conditions (Figure 2 and Supplementary Figure 1).
In UCMSCs, cytokine treatments acted differentially on the expression of the two candidate miRNAs. In particular, TGFβ and TNFα significantly induced the levels of miR-320a-3p present in UCMSCs, indicated by lower delta Ct values (Supplementary Figure 1A). Besides, no significant impact was detected on the expression of miR-181b-3p (Supplementary Figure 1B).
In contrast, cytokine priming significantly modulated the levels of miR-181b-3p while producing little impact on the levels of miR-320a-3p packed into EVs. Cytokine treatment suppressed the selective sorting of miR-181b-3p in all three EV sub-populations compared to the non-priming group, indicated by higher delta Ct values when normalizing to the secreting cells, UCMSCs (Figure 2B). Comparing the effects among different cytokines in each EV sub-population, IFNα and TNFα treatments further limited the miR-181b-3p content in ABs and MVs (Figure 2B). Indeed, we detected a greater relative expression of miR-181b-3p in TGFβ-ABs compared to IFNα-ABs (p = 0.0359) and TNFα-ABs (p = 0.0101), as well as in TGFβ-MVs compared to TNFα-MVs (p = 0.028). miR-181b-3p was also expressed more strongly in MVs from IFNα-primed UCMSCs compared to TNFα-primed ones (p = 0.038) (Figure 2B). However, in the EX population, no significant difference in miR-181b-3p inhibiting effects was observed among the three different cytokine priming conditions (Figure 2B). Meanwhile, the amount of miR-320a-3p packed in ABs, MVs, and EXs was mostly stable in all non-priming and cytokine-priming cultures. Only a small change of miR-320a-3p content in the EX population was detected under the effect of TGFβ and TNFα, where this miRNA was relatively more highly expressed in CT-EXs and TNFα-EXs compared to TGFβ-EXs (p = 0.0299 and p = 0.0031, respectively) (Figure 2A). Taken together, cytokine-primed UCMSCs selectively reduced the sorting of miR-181b-3p into EVs, whereas there was no effect on miR-320a-3p.
Chondrocytes were successfully isolated from human articular cartilage

Human chondrocytes were isolated from knee articular cartilage tissue digested with collagenase and were cultured in DMEM/F12. As shown in Figure 3A, cells at P1 exhibited a flattened and polygonal shape, which is a typical morphology of chondrocytes. Additionally, isolated cells were positive with Alcian Blue staining, which is specific for chondrocytes and appears blue due to proteoglycan secretion (Figure 3B).
Cytokines affecting UCMSC-derived EV capacity in promoting chondrocyte proliferation
We performed an MTT assay on human chondrocytes to examine the effect of EVs derived from cytokine-primed UCMSCs on chondrocyte proliferation. In general, EVs generated from UCMSCs, whether primed with cytokines or not, did not have any statistically significant effect on chondrocyte proliferation compared to EV-depleted media (No-EV) or among treatment groups (Figure 4).
EVs derived from TGFβ-primed UCMSCs promoted chondrocyte migration
We performed the wound scratch assay to assess the capacity of EVs from cytokine-primed UCMSCs to regulate chondrocyte migration. In general, EVs from either non-priming or cytokine-primed UCMSCs significantly promoted chondrocyte migration starting from the 44-hour time point (later experimental time points) (CT-ABs, p = 0.0445 at 68 hours; CT-MVs, p = 0.0105 at 44 hours; compared to EV-depleted media) (Figure 5 and Supplementary Figure 2).
EVs derived from cytokine-primed UCMSCs alter the expression of chondrocyte markers by chondrocytes
To investigate the molecular alterations of chondrocytes in the different EV-treated culture conditions, we isolated total RNA from cells after one week of culture under EV treatment and subjected them to qRT-PCR. The relative expression levels of chondrocyte mRNAs, including the normal chondrocyte markers Collagen type II (COL2A1), Cartilage oligomeric matrix protein (COMP) and Aggrecan (ACAN), and the hypertrophic chondrocyte markers Collagen type I (COL1A1) and Runt-related transcription factor 2 (RUNX2), were calculated and represented as fold change.
In general, treatment with either normal EVs or EVs associated with cytokine priming acted differentially on the expression of chondrocyte markers. We observed the highest expression of COL2A1 in chondrocytes treated with TGFβ-MVs among all experimental groups (Figure 6A). Notably, non-priming EVs greatly enhanced the expression of COMP in chondrocytes, much more strongly than any studied group (Figure 6B). When analyzing the expression of ACAN, we observed that treatment with TGFβ-MVs significantly upregulated ACAN mRNA expression by chondrocytes compared to EV-depleted media (p = 0.0156) (Figure 6C).

Figure 4. The influence of EV treatment on chondrocyte proliferation at 48 h. Chondrocyte proliferation under the treatment of (A) AB populations; (B) MV populations; (C) EX populations. No statistically significant effect was observed among all EV treatment groups. CT-AB/MV/EX: chondrocytes treated with AB/MV/EX secreted from non-priming UCMSCs; TGFβ-AB/MV/EX: chondrocytes treated with AB/MV/EX secreted from TGFβ-primed UCMSCs; IFNα-AB/MV/EX: chondrocytes treated with AB/MV/EX secreted from IFNα-primed UCMSCs; TNFα-AB/MV/EX: chondrocytes treated with AB/MV/EX secreted from TNFα-primed UCMSCs; No-EV: chondrocytes cultured in DMEM/F12 with 5% EV-depleted FBS and no EV addition. The proliferation rate was acquired by normalizing absorbance measured at 48 hours after incubation to absorbance measured at seeding time (0 h), and the data were obtained from three independent biological replicates (n = 3). Statistical significance was determined by two-way ANOVA. Error bars indicate ± SD.
Taken together, treatment of chondrocytes with cytokine-primed EVs partially rescued the chondrocytes from the hypertrophic phenotype and re-established the primary, normal physical state of the cells.
Discussion
In recent years, several techniques have been developed to optimize the therapeutic efficacy of EVs. Evidently, adding cytokines, such as IFNγ, TNFα, and IL-1β, to the conventional MSC culture media affected the contents and biological activities of the derived EVs associated with OA (17, 23). Therefore, we investigated the influence of EVs derived from MSCs primed with anti-inflammatory (TGFβ and IFNα) and inflammatory (TNFα) cytokines on osteoarthritic chondrocytes. We found that cytokine priming did not affect the typical morphology and markers of UCMSCs. Additionally, the secreted EVs displayed distinguishable sizes, morphologies, and surface markers (CD9 and CD63), in accordance with ISEV guidelines (24). These characteristics were also described in previous studies of EVs from cytokine-primed UCMSCs (25, 26). This information allowed us to ensure normal EV identity and to further study the molecular profile and therapeutic effects of EVs under cytokine stimulation. Furthermore, stimulation of the secreting cells with cytokines can also increase the amount of EVs produced by UCMSCs (14), which can potentially enrich therapeutic efficacy. Hence, the assessment of EV production from cytokine-stimulated UCMSCs should be considered in our future investigations.
As mentioned, the treatment of UCMSCs with various stimuli can affect the biological contents of EVs, including miRNAs, which have been reported for their potential roles in OA treatment (27, 28). In particular, it has been emphasized that cytokine treatment can result in EVs with RNA profiles rich in inflammatory control (15), which can further reverse the OA condition. This study reported the detection of miR-320a-3p and miR-181b-3p, which are involved in healthy cartilage maintenance and OA pathogenesis (18)(19)(20), in MSCs and the three EV sub-populations released by UCMSCs. Our results indicate that miR-320a-3p was expressed more highly in non-priming UCMSCs, while the expression level of miR-181b-3p was similar among groups. However, cytokine treatment diminished the packaging of miR-181b into EVs, shown by a significantly lower relative expression of this EV miRNA associated with cytokine priming when normalized to the levels in UCMSCs. In the literature, miR-181b promoted the NF-κB pathway, which leads to cartilage destruction and synovial membrane degradation (29-31). Blocking miR-181b activity reduced MMP13 expression but increased COL2 expression in articular chondrocytes (32). The attenuation of miR-181b activity can indirectly signal the FPR2 formyl peptide receptor and induce anti-inflammatory effects (33, 34). Thus, the reduction of miR-181b observed in EVs originating from cytokine-primed cells can be a positive marker for readjusting the appropriate EV components to produce more direct effects in cartilage regeneration. This exciting information requires further studies to validate whether the two cytokines IFNα and TNFα could be the appropriate stimuli to enhance the therapeutic efficacy of UCMSC-EVs for OA treatment.

Figure 5. Priming UCMSCs with cytokines differentially affected the capacity of derived EVs to stimulate chondrocyte migration. Chondrocyte migration was analyzed using a wound-scratch assay under the treatment of (A) UCMSC-ABs, (B) UCMSC-MVs, and (C) UCMSC-EXs.
On the other hand, the level of miR-320a-3p remained stable across the experimental EV treatments. Previous studies showed diverse evidence of miR-320a function in cartilage homeostasis (18, 19, 35). Peng et al. (19) demonstrated the protective effects of miR-320a against cartilage degeneration by negatively regulating BMI-1 (19). However, miR-320a has also been proposed as a potential OA marker, as this miRNA promoted OA-induced matrix breakdown via the NF-κB pathway and interfered with osteoblast reformation (18, 35, 36). Thus, a future study is required to evaluate the roles of miR-320a-3p in OA pathogenesis and to examine alternative approaches to adjusting this miRNA content in UCMSC-derived EVs.
Next, to examine our EVs' bioactivity in vitro, we isolated human primary chondrocytes from articular cartilage tissues obtained from a patient suffering from a knee injury and performed proliferation, migration, and mRNA marker analysis assays. Our isolated cells showed typical chondrocyte morphology and were positive with the specific staining dye for proteoglycan. For the functional analysis, in general, EVs from either non-priming or cytokine-primed UCMSCs at a dose of 10 µg/mL did not significantly promote chondrocyte proliferation. An insufficient dose of EVs might be the issue, as higher doses (20, 40, 80 µg/mL) of BMMSC-EXs have been shown to increase the proliferation rate of chondrocytes (37). Additionally, the outcome may be due to the chondrocytes reported herein being obtained from patients with knee injury instead of healthy chondrocyte cell lines. Hence, further experiments with chondrocytes induced with OA characteristics and higher EV dosages will be conducted to examine these possibilities. Notably, the miRNA distribution in EVs is an essential factor that might affect chondrocyte proliferation and migration; however, the regulation of these two biological processes by miR-181b remains unclear. A member of the miR-181 family, miR-181a, exerted adverse effects on chondrocyte proliferation by upregulating the expression of caspase-3, PARP, MMP-2, and MMP-9 to induce apoptosis and cartilage destruction (20). Thus, it is predicted that miR-181b can inhibit chondrocyte proliferation, and that suppressing miR-181b expression can restore this ability. However, in the current study, the reduction in miR-181b might not have contributed to the proliferation results observed here, or other factors may have surpassed its influence. Contrary to cell proliferation, EVs from cytokine-primed UCMSCs showed a higher capacity to promote chondrocyte migration compared to chondrocytes cultured in EV-depleted media, with the most significant effect belonging to TGFβ-EVs, but only at later time points. In a previous study, TGFβ was shown to promote the PI3K-Akt signaling pathway, which was demonstrated to induce chondrocyte migration in a rat model (38, 39). Additionally, TGFβ stimulation can also regulate the integrin signaling pathway, involving changes in integrin-ECM binding and the activation of FAK, which are critical factors in cell migration (40, 41). It is noted that all results obtained herein were compared with chondrocytes cultured in EV-depleted media (DMEM/F12 supplemented with 5% EV-depleted FBS), and not, as in most studies, with PBS or a chamber consisting of low-serum media (upper) and PBS (lower) as the control group (42-44). Indeed, long-term cell storage in PBS increases cell death and thus cannot assess cell functionality efficiently. These factors may be reasons for the differences observed in this study compared to others.
In this study, we investigated the alteration in mRNA levels of chondrocyte markers under EV treatment at a high culture passage. The later culture passage exhibited an increase in COL1 and a decrease in COL2. However, higher expression of healthy chondrocyte markers, including COMP and ACAN, was also detected, which supports cartilage regeneration and ECM synthesis at the later stage. Meanwhile, the expression of hypertrophic markers such as RUNX2 diminished. Besides, we observed that EVs from cytokine-primed UCMSCs downregulated the expression of COL1 and RUNX2 and upregulated COL2 and ACAN expression, but this effect was not consistent among EV populations. MiR-320a was previously linked with low expression of the hypertrophic marker RUNX2 (19). However, the stable level of miR-320a-3p in most of the isolated EVs from both non-priming and cytokine-primed UCMSCs hinders us from revealing the association between EV contents and chondrocyte markers. Notably, the increase in COMP expression was much more substantial in chondrocytes treated with EVs derived from non-priming UCMSCs. This means that EVs contribute to chondrocyte malfunctions or to the recovery of damaged chondrocytes. However, further experiments, especially on miRNAs and target mRNAs, should be conducted to understand the mechanism of the effect of EV contents on chondrocyte mRNA expression.
In conclusion, cytokines influenced the miRNA composition of UCMSCs-derived EVs and their effects on chondrocyte physiology regarding cell proliferation and migration, as well as chondrocyte markers. However, it is noted that the results presented here are preliminary data that require more investigations on other miRNAs/proteins found in EVs in addition to the target genes and signaling pathways affecting the chondrocyte bioactivities. Additionally, for future perspectives, studies should be performed to examine the roles of different cytokines on UCMSC-derived EVs and their cargos in other aspects of OA, such as chondrocyte apoptosis and inflammation.
Experimental procedures

Ethical approval
Ethical approval for collecting and using human MSCs from the umbilical cord and human chondrocytes from articular cartilage was issued by the Vinmec International General Hospital Joint Stock Company's ethics committee (Ethical approval number: 311/2018/QĐ-VMEC). The umbilical cord tissues were collected from three healthy donors aged 20 to 40, and human cartilage tissues were acquired from three donors with knee arthroplasty. Donors signed written informed consent before donating their samples.
Umbilical cord-derived mesenchymal stem cell culture

UCMSCs were isolated from the umbilical cord as described in our previous study and stored for further experiments (45). UCMSCs at passage two (P2) were thawed and seeded at a density of 5,000 cells/cm² in DMEM/F12 (Gibco, Massachusetts, USA) with 10% (v/v) fetal bovine serum (FBS). Cells were incubated at 37°C/5% CO2 and subcultured at the same density until passage 5 (P5). The cells at P5 were cultured in EV-depleted media for three days prior to cytokine treatment (DMEM/F12 supplemented with 10% EV-depleted FBS, in which the FBS was centrifuged at 120,000 × g for 18 hours at 4°C to eliminate EVs). Cells were maintained in EV-depleted media before being exposed to the cytokines individually for 48 hours at the following concentrations: 10 ng/mL TGFβ, 20 ng/mL IFNα, or 20 ng/mL TNFα. The conditioned media were harvested when cells reached 95% confluency for EV isolation (cell culture media were not renewed throughout incubation). After conditioned media collection, UCMSCs were characterized with the Human MSC Analysis Kit (BD Biosciences) following the manufacturer's protocol, and flow cytometry data were analyzed with Navios Software 3.2.
Extracellular vesicle isolation
The conditioned media were centrifuged at 300 × g for 10 minutes at 4°C to remove cell debris. Sequential centrifugation steps were performed to separate the three EV populations as follows: 2,000 × g for 20 minutes at 4°C to collect apoptotic bodies (ABs), 16,500 × g for 30 minutes at 4°C to pellet microvesicles (MVs), and 100,000 × g for 90 minutes at 4°C to isolate exosomes (EXs) (Optima XPN-100 Ultracentrifuge, Beckman Coulter, California, USA). EV pellets were resuspended in DMEM/F12 or PBS and stored at −80°C for further usage.
Extracellular vesicle marker analysis by western blot
Protein extraction and western blot were performed as described previously (45). Total EV protein concentrations were determined with the Pierce™ BCA Protein Assay Kit (Thermo Scientific, Massachusetts, USA), as equivalent to the optical density (OD) measured at 562 nm (SpectraMax M3, Molecular Devices, California, USA). Then, 15 µg of EV proteins were electrophoretically separated on 4-12% NuPAGE gels (Invitrogen, Massachusetts, USA) and probed with primary antibodies (Abcam, Cambridge, UK) against GAPDH, CD9, and CD63 overnight at 4°C, followed by incubation with a goat anti-rabbit IgG secondary antibody (Invitrogen, Massachusetts, USA). Antibody binding was stained with ECL substrate and visualized on an ImageQuant LAS 500 (GE Healthcare Life Sciences, Illinois, USA).
Extracellular vesicle morphology analysis by transmission electron microscopy
EV samples were fixed and stained following the protocol described in our previous study (45). Imaging was performed using a JEOL 1100 Transmission Electron Microscope (JEOL Ltd., Tokyo, Japan) at 80 kV at the National Institute of Hygiene and Epidemiology (NIHE).
Chondrocyte isolation and characterization
Human cartilage tissues were collected by the surgical doctors, stored in saline at 4°C, and transferred to the laboratory. Before processing, the tissue was washed once with 70% ethanol, twice with PBS, and once with DMEM/F12; each solution was supplemented with 1% Pen/Strep (Thermo Fisher Scientific, USA) to ensure sterility and eliminate contaminants. The tissue was minced and digested in Hanks' Balanced Salt Solution (HBSS) (Thermo Fisher Scientific, USA) with 0.2% collagenase type I (10,000 U/mL solution; Gibco, Massachusetts, USA) (10 mL per 1 gram of tissue) for 20 hours at 37°C. Cell culture media (DMEM/F12 supplemented with 10% FBS (v/v)) was added at a volume ratio of 1:1 with HBSS. The harvested pellets were resuspended in DMEM/F12 supplemented with 1% Pen/Strep and 10% FBS (v/v), then seeded into a T25 cell culture flask and incubated at 37°C and 5% CO2. The media were replaced every three days during culture. After reaching 80% confluency, the cells were either stored or subcultured at a density of 10,000 cells/cm² to the next passage.
The images of chondrocytes were captured under Eclipse Ti-S Inverted Microscope (Nikon Instruments, Japan), and cells at P0 were processed to form the colony and stained with Alcian Blue to confirm cell type.
Total RNA extraction
Total RNA was extracted using Trizol™ reagent (Thermo Scientific, Massachusetts, USA) at a ratio of 9:1 Trizol versus cell/particle suspension. The lysis mixture was supplemented with MgCl2 and chloroform and incubated at RT. The aqueous phase was collected and incubated with isopropanol overnight at −20°C. Total RNA was then pelleted by centrifugation and washed twice with RNase-free 75% ethanol before air-drying and resuspension in RNase-free water (volume based on pellet size).
Quantitative reverse transcription-PCR
Total RNA with sufficient quality was subjected to qRT-PCR to confirm the presence of EV miRNAs and chondrocyte mRNAs.
For EV miRNA analysis, extracted RNAs were used as templates to prepare cDNA using the miScript II RT kit (Qiagen, Hilden, Germany), following the manufacturer's instructions. Then, cDNA-containing mixtures (10 µL) were subjected to qPCR using the miScript SYBR Green PCR kit (Qiagen, Hilden, Germany) and two specific primers, miScript Primer Assay 10X (Qiagen, Hilden, Germany), designed to target miR-320a-3p and miR-181b-3p. The reactions were run on an Applied Biosystems 7500 Block (Applied Biosystems, Massachusetts, USA). The relative expression of miRNAs in UCMSCs was normalized to the reference gene RNU6B (Qiagen, Hilden, Germany), and that of miRNAs in EVs was normalized to their secreting cells (UCMSCs) and represented by ΔCt values, with a higher ΔCt value representing less selective sorting of the miRNA into EVs, and vice versa.
For chondrocyte mRNA analysis, cells were cultured for one week under the treatments described in Table 1, and chondrocyte RNAs were isolated as above. cDNA was prepared using SuperScript™ IV Reverse Transcriptase (Thermo Scientific, Massachusetts, USA) according to the manufacturer's protocol. cDNA products were then subjected to qPCR using specifically designed primers targeting chondrocyte mRNAs, including the normal chondrocyte markers Collagen type II (COL2A1), Cartilage oligomeric matrix protein (COMP), and Aggrecan (ACAN), and the hypertrophic chondrocyte markers Collagen type I (COL1A1) and Runt-related transcription factor 2 (RUNX2), with GAPDH as an internal control (primer sequences are listed in Supplementary Table 1). The 2^-ΔΔCt method was applied to calculate the relative fold gene expression of samples.
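For illustration, the two normalizations described above can be made concrete as follows; this is a sketch with invented Ct values, not data from this study:

```python
# Minimal sketch of the qRT-PCR normalizations described above.
# All Ct values are invented placeholders, not measurements from this study.

# (1) EV miRNA sorting: delta-Ct of a miRNA in an EV fraction relative to
#     the secreting cells (UCMSCs); higher delta-Ct = less selective sorting.
ct_mir_in_ev = 28.4      # Ct of, e.g., miR-181b-3p in the MV fraction
ct_mir_in_cells = 25.1   # Ct of the same miRNA in UCMSCs
delta_ct = ct_mir_in_ev - ct_mir_in_cells
print(f"delta-Ct (EV vs cells): {delta_ct:.2f}")

# (2) Chondrocyte mRNA fold change by the 2^-ddCt method,
#     normalized to GAPDH and to the untreated control condition.
ct_target_treated, ct_gapdh_treated = 24.0, 18.0
ct_target_control, ct_gapdh_control = 26.5, 18.2

d_ct_treated = ct_target_treated - ct_gapdh_treated   # dCt, treated sample
d_ct_control = ct_target_control - ct_gapdh_control   # dCt, control sample
dd_ct = d_ct_treated - d_ct_control
fold_change = 2 ** (-dd_ct)
print(f"fold change (2^-ddCt): {fold_change:.2f}")
```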
Proliferation assay
Human articular chondrocytes were seeded (2,500 cells per well of a 96-well plate) and incubated in the media listed in Table 1. No-EV was used as the control. Cells were incubated at 37°C and 5% CO2 for 48 hours to proliferate. The cell proliferation rate was assessed by performing a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay (Abcam, Cambridge, UK) following the manufacturer's protocols. The proliferation rate was equivalent to the relative absorbance measured at 562 nm (SpectraMax M3, Molecular Devices, California, USA) at time points of 0 hours (used for normalization) and 48 hours. The proliferation rate was calculated based on the OD values obtained at the two time points.
Migration assay
Human articular chondrocytes were cultured in a 24-well plate at a density of 1.05 × 10⁵ cells/well at 37°C and 5% CO2 for attachment. After reaching 100% confluency, cells were incubated with mitomycin C (10 µg/mL) for 2 hours to inhibit cell proliferation. A physical scratch was created in the cell attachment layer, and detached cells were removed by washing with media. Treatments were established as for the proliferation assay (Table 1). Cell migration to close the wound area was captured with an inverted microscope at multiple time points. The wound area was measured using ImageJ software (version 1.48), and the closure percentage over time, which represents the rate of cell migration, was calculated.
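For illustration, the closure percentage can be computed from the measured wound areas as below; the area values are placeholders, not the study's measurements:

```python
# Sketch of the wound-closure calculation described above.
# Areas (in pixels^2, as measured in ImageJ) are placeholder values.
areas = {0: 120_000, 24: 95_000, 44: 60_000, 68: 30_000}  # hours -> wound area

a0 = areas[0]
for t, a_t in sorted(areas.items()):
    closure_pct = (a0 - a_t) / a0 * 100  # percent of the initial wound closed
    print(f"{t:>3} h: {closure_pct:5.1f}% closed")
```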
Statistical analysis
Statistical analysis was performed in GraphPad Prism 9 (GraphPad Software, California, USA) using one-way and two-way ANOVA and Tukey HSD tests. Statistical significance was defined as a p-value < 0.05. All data are shown as means ± SD of three biological replicates.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Riemann-Hilbert Problems
These lectures introduce the method of nonlinear steepest descent for Riemann-Hilbert problems. This method finds use in studying asymptotics associated to a variety of special functions such as the Painlevé equations and orthogonal polynomials, in solving the inverse scattering problem for certain integrable systems, and in proving universality for certain classes of random matrix ensembles. These lectures highlight a few such applications.
Lecture 1
These four lectures are an abridged version of 14 lectures that I gave at the Courant Institute on RHPs in 2015. These 14 lectures are freely available on the AMS website AMS Open Notes.
Basic references for RHPs are [8,12,28]. Basic references for complex function theory are [19,23,24]. Many more specific references will be given as the course proceeds.
Special functions are important because they provide explicitly solvable models for a vast array of phenomena in mathematics and physics. By "special functions" I mean Bessel functions, Airy functions, Legendre functions, and so on. If you have not yet met up with these functions, be assured, sooner or later, you surely will.
It works like this. Consider the Airy equation (see, e.g. [1,29])
$$ (1.1) \qquad y''(x) = x\,y(x). $$
Seek a solution of (1.1) in the form of a contour integral; one solution is the Airy function
$$ (1.2) \qquad \mathrm{Ai}(x) = \frac{1}{2\pi i} \int_C e^{t^3/3 - xt}\,dt, $$
where C runs from $\infty\,e^{-i\pi/3}$ to $\infty\,e^{i\pi/3}$. Other contours provide other, independent solutions of Airy's equation, such as Bi(x) (see [1]). Now the basic fact of the matter is that the integral representation (1.2) for Ai(x) enables us, using the classical method of stationary phase/steepest descent, to compute the asymptotics of Ai(x) as x → +∞ and −∞ with any desired accuracy. We find, in particular [1, p. 448], that for $\zeta = \tfrac{2}{3}x^{3/2}$,
$$ (1.4) \qquad \mathrm{Ai}(x) = \frac{e^{-\zeta}}{2\sqrt{\pi}\,x^{1/4}}\bigl(1 + O(\zeta^{-1})\bigr), \qquad x \to +\infty, $$
and that, with $\zeta = \tfrac{2}{3}(-x)^{3/2}$,
$$ (1.5) \qquad \mathrm{Ai}(x) = \frac{1}{\sqrt{\pi}\,(-x)^{1/4}}\Bigl(\sin\bigl(\zeta + \tfrac{\pi}{4}\bigr) + O(\zeta^{-1})\Bigr), \qquad x \to -\infty. $$
There are similar precise results for all the classical special functions. The diligent student should regard Abramowitz & Stegun [1] as an exercise book for the steepest descent method: verify all the asymptotic formulae! (A quick numerical check of (1.4) and (1.5) is given after the MKdV example below.)

Now in recent years it has become clear that a new and extremely broad class of problems in mathematics, engineering and physics is described by a new class of special functions, the so-called Painlevé functions. There are six Painlevé equations and we will say more about them later on. Whereas the classical special functions, such as Airy functions, Bessel functions, etc., typically arise in linear (or linearized) problems such as acoustics or electromagnetism, the Painlevé equations arise in nonlinear problems, and they are now recognized as forming the core of modern special function theory. Here are some examples of how Painlevé equations arise.

Example (MKdV). Consider the modified Korteweg-de Vries (MKdV) equation
$$ (1.9) \qquad u_t - 6u^2 u_x + u_{xxx} = 0, \qquad u(x, t=0) = u_0(x). $$
Then [16], as t → ∞ in the region $|x| \le c\,t^{1/3}$, $c < \infty$,
$$ (1.10) \qquad u(x,t) \approx \frac{1}{(3t)^{1/3}}\; p\!\left(\frac{x}{(3t)^{1/3}}\right), $$
where p(s) is a particular solution of the Painlevé II (PII) equation
$$ p''(s) = s\,p(s) + 2\,p^3(s). $$
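As promised above, here is a quick numerical check of (1.4) and (1.5); this snippet is an illustration added to these notes, not part of the original text. Already at moderate |x| the leading-order formulas match Ai(x) closely.

```python
# Compare Ai(x) with the leading terms of the asymptotics (1.4) and (1.5).
import numpy as np
from scipy.special import airy  # airy(x) returns (Ai, Ai', Bi, Bi')

def ai_asym_plus(x):
    """Leading term of (1.4), valid as x -> +infinity."""
    zeta = (2.0 / 3.0) * x ** 1.5
    return np.exp(-zeta) / (2.0 * np.sqrt(np.pi) * x ** 0.25)

def ai_asym_minus(x):
    """Leading term of (1.5), valid as x -> -infinity."""
    zeta = (2.0 / 3.0) * (-x) ** 1.5
    return np.sin(zeta + np.pi / 4.0) / (np.sqrt(np.pi) * (-x) ** 0.25)

for x in (5.0, 10.0):
    print(f"Ai({x:+.0f}) = {airy(x)[0]: .6e}   asymptotic = {ai_asym_plus(x): .6e}")
for x in (-5.0, -10.0):
    print(f"Ai({x:+.0f}) = {airy(x)[0]: .6e}   asymptotic = {ai_asym_minus(x): .6e}")
```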
Example 1.11. Let $\pi = (\pi_1 \pi_2 \cdots \pi_N) \in S_N$ be a permutation of the numbers 1, 2, . . . , N. We say that $\pi_{i_1}, \pi_{i_2}, \ldots, \pi_{i_k}$ is an increasing subsequence of π of length k if $i_1 < i_2 < \cdots < i_k$ and $\pi_{i_1} < \pi_{i_2} < \cdots < \pi_{i_k}$. Let $\ell_N = \ell_N(\pi)$ denote the length of the longest increasing subsequence of π, where π is taken uniformly at random from $S_N$.

Question. How does $\ell_N$ behave statistically as N → ∞?

Theorem 1.12 ([2]). Center and scale $\ell_N$ as follows:
$$ \chi_N = \frac{\ell_N - 2\sqrt{N}}{N^{1/6}}. $$
Then for all s,
$$ \lim_{N \to \infty} \mathrm{Prob}(\chi_N \le s) = F(s) = \exp\Bigl( -\int_s^{\infty} (x - s)\, u^2(x)\, dx \Bigr), $$
where u(s) is the (unique) solution of Painlevé II (the so-called Hastings-McLeod solution) normalized such that u(s) ∼ Ai(s) as s → +∞.
The distribution on the right in Theorem 1.12 is the famous Tracy-Widom distribution for the largest eigenvalue of a GUE matrix in the edge scaling limit. Theorem 1.12 is one of a very large number of probabilistic problems in combinatorics and related areas whose solution is expressed in terms of Random Matrix Theory (RMT) via Painlevé functions (see, e.g., [3]).
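For the curious reader, $\ell_N$ is cheap to simulate: patience sorting computes it in O(N log N), and a small Monte Carlo run (an illustration added here, not from the original notes) already shows $\chi_N$ concentrating near the Tracy-Widom values.

```python
# Monte Carlo illustration of Theorem 1.12: compute the longest increasing
# subsequence length l_N by patience sorting and rescale as in the theorem.
import bisect
import numpy as np

def lis_length(perm):
    """Length of the longest increasing subsequence (patience sorting)."""
    piles = []
    for x in perm:
        j = bisect.bisect_left(piles, x)
        if j == len(piles):
            piles.append(x)
        else:
            piles[j] = x
    return len(piles)

rng = np.random.default_rng(1)
N, trials = 2000, 500
chi = np.empty(trials)
for k in range(trials):
    chi[k] = (lis_length(rng.permutation(N)) - 2 * np.sqrt(N)) / N ** (1 / 6)

# For comparison, the limiting Tracy-Widom GUE law has mean ~ -1.77 and
# standard deviation ~ 0.90 (convergence in N is slow).
print(f"mean(chi) = {chi.mean():.2f}, std(chi) = {chi.std():.2f}")
```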
The key question is the following: can we describe the solutions of the Painlevé equations as precisely as we can describe the solutions of the classical special functions such as Airy, Bessel, ...? In particular, can we describe the solutions of the Painlevé equations asymptotically with arbitrary precision and solve the connection/scattering problem, as in (1.4) and (1.5) for the Airy equation (or any other of the classical special functions): known behavior as x → +∞ ⇒ known behavior as x → −∞, and vice versa? As we have indicated, at the technical level, connection formulae such as (1.4) and (1.5) can be obtained because of the existence of an integral representation such as (1.2) for the solution. Once we have such a representation, the asymptotic behavior is obtained by applying the (classical) steepest descent method to the integral. There are, however, no known integral representations for solutions of the Painlevé equations, and we are led to the following questions. Question 1: Is there an analog of an integral representation for solutions of the Painlevé equations? Question 2: Is there an analog of the classical steepest descent method which will enable us to extract precise asymptotic information about solutions of the Painlevé equations from this analog representation?
The answer to both questions is yes: In place of an integral representation such as (1.2), we have a Riemann-Hilbert Problem (RHP), and in place of the classical steepest descent method we have the nonlinear (or non-commutative) steepest descent method for RHPs (introduced by P. Deift and X. Zhou [16]).
So what is a RHP? Let Σ be an oriented contour in the plane. By convention, if we move along an arc in Σ in the direction of the orientation, the (+)-side lies on the left and the (−)-side on the right. Let v : Σ → GL(k, ℂ), the jump matrix, be an invertible k × k matrix function defined on Σ with $v, v^{-1} \in L^{\infty}(\Sigma)$.
We say that an n × k matrix function m(z) is a solution of the RHP (Σ, v) if m(z) is analytic in ℂ ∖ Σ and
$$ m_+(z) = m_-(z)\, v(z), \qquad z \in \Sigma, $$
where $m_\pm(z)$ denote the limits of m(z′) as z′ → z from the (±)-sides of Σ. If, in addition, n = k and
$$ m(z) \to I_k \quad \text{as } z \to \infty, $$
we say that m(z) solves the normalized RHP (Σ, v).
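For orientation, here is an illustration added to these notes (it is not in the original text): in the scalar case n = k = 1, the normalized RHP can be solved in closed form. If Σ is, say, a smooth bounded contour and log v(s) is well defined and smooth on Σ, set
$$ m(z) = \exp\Bigl( \frac{1}{2\pi i} \int_{\Sigma} \frac{\log v(s)}{s - z}\, ds \Bigr). $$
Then m is analytic and non-vanishing off Σ, m(z) → 1 as z → ∞, and the jump relation for the Cauchy integral ($C_+g - C_-g = g$, proved in Lecture 2) gives $m_+(z) = m_-(z)\,v(z)$ on Σ. The matrix case is genuinely harder precisely because the jump matrices at different points need not commute, so this exponential trick fails.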
RHPs involve a lot of technical issues. In particular:
• How smooth should Σ be?
• What measure theory/function spaces are suitable for RHPs?
• What happens at points of self intersection (see Figure 1.14)? • In what sense are the limits m ± (z) achieved?
• In the case n = k, in what sense is the limit m(z) → I_k achieved?
• Does an n × k solution exist?
• In the normalized case, is the solution unique?
And most importantly:
• At the analytical level, what kind of problem is a RHP? As we will see, the problem reduces to the analysis of singular integral equations on Σ.
There is not enough time in these 4 lectures to address all these issues systematically. Rather we will address specific issues as they arise.
As an example of how things work, we now show how PII is related to a RHP (see, e.g. [22]). Let Σ denote the union of six rays
$$ \Sigma_k = \{\rho\, e^{i(k-1)\pi/3} : \rho > 0\}, \qquad 1 \le k \le 6, $$
oriented outwards. Let p, q, r be complex numbers satisfying the relation
$$ (1.15) \qquad p + q + r + pqr = 0. $$
Let v(z), z ∈ Σ, be constant on each ray, as indicated in Figure 1.16: for fixed x, set $\theta(z) = \tfrac{4}{3}z^3 + xz$, and take on each ray a triangular jump matrix built from p, q, r and the factors $e^{\pm 2i\theta}$, for instance
$$ v = \begin{pmatrix} 1 & 0 \\ p\, e^{2i\theta} & 1 \end{pmatrix} $$
on one of the rays, and so on. For fixed x, let $m_x(z)$ be the 2 × 2 matrix solution of the normalized RHP (Σ, v_x). Then
$$ u(x) = 2 \lim_{z \to \infty} \bigl( z\, m_x(z) \bigr)_{12} $$
solves PII, $u''(x) = x\,u(x) + 2u^3(x)$. (This result is due to Jimbo and Miwa [27], and independently to Flaschka and Newell [20].) The asymptotic behavior of u(x) as x → ∞ is then obtained from the RHP (Σ, v_x) by the nonlinear steepest descent method.
In the classical steepest descent method for integrals such as (1.2) above, the contour Σ is deformed so that the integral passes through a stationary phase point where the integrand is maximal, and the main contribution to the integral then comes from a neighborhood of this point. The nonlinear (or non-commutative) steepest descent method for RHPs involves the same basic ideas as in the classical scalar case, in that one deforms the RHP, Σ → Σ′, in such a way that the exponential terms (see e.g. $e^{2i\theta}$ above) in the RHP have maximal modulus at points of the deformed contour Σ′. The situation is far more complicated than the scalar integral case, however, as the problem involves matrices that do not commute. In addition, terms of the form $e^{-2i\theta}$ also appear in the problem and must be separated algebraically from terms involving $e^{2i\theta}$, so that in the end the terms involving $e^{2i\theta}$ and $e^{-2i\theta}$ both have maximal modulus along Σ′ (see [16][17][18]). A simple example of the nonlinear steepest descent method is given at the end of Lecture 4.
One finds, in particular ([18], and also [22,25]), the following: let −1 < q < 1, p = −q, r = 0. Then as x → −∞ the solution u(x) is oscillatory and decays like (−x)^{−1/4} (1.17), with an amplitude determined by a parameter ν = ν(q) > 0 and a phase shift φ = φ(q) given explicitly in (1.18) and (1.19); in particular q² = 1 − e^{−2πν} (1.18). As x → +∞,

(1.20) u(x) ∼ q Ai(x).

These asymptotics should be compared with (1.4), (1.5) for the Airy function. Note from (1.4) that as x → +∞, Ai(x) decays rapidly; PII, u_xx = xu + 2u³, is clearly a nonlinearization of the Airy equation u_xx = xu, and so we expect Airy-like solutions when the nonlinear term 2u³(x) is small.
Also note that (1.17) and (1.18) solve the connection problem for PII. If we know the behavior of the solution u(x) of PII as x → +∞, then we certainly know q from (1.20). But then we know ν = ν(q) and φ = φ(q) from (1.18) and (1.19), and hence we know the asymptotics of u(x) as x → −∞ from (1.17). Conversely, if we know the asymptotics of u(x) as x → −∞, we certainly know ν > 0 from (1.17), and hence we know q² from (1.18), q² = 1 − e^{−2πν}. But then again from (1.17) we know φ, and hence sgn(q) from (1.19). Thus we know q, and hence the asymptotics of the solution u(x) as x → +∞ from (1.20). Finally, note the similarity of the multiplier e^{2iθ}, θ = (4/3)z³ + xz, in the RHP for PII to the exponent in the integral representation (1.2) for the Airy function: setting z → iz in (1.21) produces a cubic exponent of the same form, which agrees with (1.22) up to appropriate scalings. Also note from (1.15) that PII is parameterized by parameters lying on a 2-dimensional variety: this corresponds to the fact that PII is second order.
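The connection argument above can be probed numerically. The sketch below (my construction; the normalization assumes the standard Ablowitz-Segur form of (1.17), in which the oscillation amplitude is √(2ν)(−x)^{−1/4}, consistent with q² = 1 − e^{−2πν} in (1.18)) integrates PII, u″ = xu + 2u³, from Airy-like data at x = +8 and compares the measured amplitude at large negative x with √(2ν).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

q = 0.5
x0 = 8.0                                     # start on the Airy tail, u ~ q Ai(x)
y0 = [q * airy(x0)[0], q * airy(x0)[1]]
sol = solve_ivp(lambda x, y: [y[1], x * y[0] + 2.0 * y[0]**3],
                (x0, -60.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

xs = np.linspace(-60.0, -40.0, 4001)
amplitude = np.max(np.abs(sol.sol(xs)[0]) * (-xs)**0.25)   # envelope * (-x)^{1/4}
nu = -np.log(1.0 - q**2) / (2.0 * np.pi)                   # from (1.18)
print(amplitude, np.sqrt(2.0 * nu))                        # should nearly agree
```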
The fortunate and remarkable fact is that the class of problems in physics, mathematics, and engineering expressible in terms of a RHP is very broad and growing. Here is one more, with more to come! The RHP for the MKdV equation (1.9) is as follows (see, e.g., [16]): Σ = R, oriented from left to right, with jump matrix

(1.23) v_{x,t}(z) = ( 1 − |r(z)|² , −r̄(z) e^{−2iθ} ; r(z) e^{2iθ} , 1 ), θ = θ(z; x, t) = 4tz³ + xz,

where r : R → C with sup_z |r(z)| < 1. There is a bijection from the initial data u(x, t = 0) = u₀(x) for MKdV onto such functions r(z); see later. The function r(z) is called the reflection coefficient for u₀, see (4.13). Let m = m_{x,t}(z) be the solution of the normalized RHP (Σ, v_{x,t}). Then

(1.24) u(x, t) = 2 lim_{z→∞} (z m_{x,t}(z))₁₂

is the solution of MKdV with initial condition u(x, t = 0) = u₀(x) corresponding to r(z).
The asymptotic result (1.10) is obtained by applying the nonlinear steepest descent method to the RHP (Σ, v_{x,t}) in the region |x| ≤ c t^{1/3}. In this case PII emerges as the RHP (Σ, v_{x,t}) is "deformed" into the RHP (Σ, v_x) in Figure 1.16.
As we will see, RHPs are useful not only for asymptotics, but also they can be used to determine symmetries and formulae/identities/equations, and also for analytical purposes.
Lecture 2
We now consider some of the technical issues that arise for RHPs, which were listed with bullet points above.
A key role in RH theory is played by the Cauchy operator. We first consider the case when Σ = R. Here the Cauchy operator C = C_R is given by

Cf(z) = (1/2πi) ∫_R f(s)/(s − z) ds, z ∈ C\R,

for suitable functions f on R. (General references for the case Σ = R, and also when Σ = {|z| = 1}, are [19] and [23].) Assume first that f ∈ S(R), the Schwartz space of functions on R. Let z = x + iǫ, x ∈ R, ǫ > 0. Then

Cf(x + iǫ) = (1/2πi) ∫_R f(s) (s − x + iǫ)/((s − x)² + ǫ²) ds.

Then, by dominated convergence, the term containing iǫ (a Poisson kernel) converges:

(1/2π) ∫_R f(s) ǫ/((s − x)² + ǫ²) ds → (1/2) f(x) as ǫ ↓ 0.

Write the remaining term as

(1/2πi) ∫_R f(s) (s − x)/((s − x)² + ǫ²) ds = I_{≥ǫ} + II_{<ǫ},

where I_{≥ǫ} and II_{<ǫ} denote the contributions from {|s − x| ≥ ǫ} and {|s − x| < ǫ} respectively. As (s − x)/((s − x)² + ǫ²) is an odd function about s = x, II_{<ǫ} can be written as

II_{<ǫ} = (1/2πi) ∫_{|s−x|<ǫ} (f(s) − f(x)) (s − x)/((s − x)² + ǫ²) ds,

which goes to 0 as ǫ ↓ 0. Finally, we have

I_{≥ǫ} − (1/2πi) ∫_{|s−x|≥ǫ} f(s)/(s − x) ds = (1/2πi) ∫_{|s−x|≥ǫ} (f(s) − f(x)) [ (s − x)/((s − x)² + ǫ²) − 1/(s − x) ] ds,

and so, as ǫ ↓ 0, this difference goes to 0, again by dominated convergence, as the final integrand (before inserting f(s) − f(x)) is odd about s = x.
Thus we see that for Σ = R and f ∈ S(R),

C₊f(x) ≡ lim_{ǫ↓0} Cf(x + iǫ) = (1/2) f(x) + (1/2) Hf(x), Hf(x) = p.v. (1/πi) ∫_R f(s)/(s − x) ds,

where the principal value integral indeed exists pointwise for f ∈ S. Similarly one finds

C₋f(x) ≡ lim_{ǫ↓0} Cf(x − iǫ) = −(1/2) f(x) + (1/2) Hf(x),

and we obtain the fundamental relations, for f ∈ S,

C₊ − C₋ = 1, C₊ + C₋ = H.

Moreover the limits are attained non-tangentially: C±f(x) = lim Cf(z′) where z′ → x with z′ lying in a cone of arbitrary opening angle α < π (see Figure 2.3), and similarly for C₋f(x) (see refs. [5,6,8]). A critical property of the singular integral operator H, and hence the operators C±, is that, as we now show, H is a bounded operator from L^p(R) → L^p(R) for all 1 < p < ∞. To prove the result for L², recall that the Fourier transform f ↦ f̂ and the inverse Fourier transform are unitary maps of L²(R) onto itself. For f ∈ S(R), fix ǫ > 0. Writing 1/(s − z) as a Fourier integral and interchanging the order of integration by Fubini's theorem (this is (2.4)), one evaluates the s-integral by contour deformation: for s fixed and R large the contribution of the large semicircle is negligible. It follows that we may take the limit R → ∞ in (2.4) in the s-integral, and so for ǫ > 0

Cf(x + iǫ) = (1/2π) ∫_0^∞ e^{i(x+iǫ)ξ} f̂(ξ) dξ.

Exercise 2.7. Show, by a similar argument, that

Cf(x − iǫ) = −(1/2π) ∫_{−∞}^0 e^{i(x−iǫ)ξ} f̂(ξ) dξ.

Thus ∥Cf(· + iǫ) − (f̂ χ_{(0,∞)})ˇ∥₂ → 0 as ǫ ↓ 0, again by dominated convergence. In other words, for f ∈ L²,

(2.8) Cf(· + iǫ) → (f̂ χ_{(0,∞)})ˇ in L²(R) as ǫ ↓ 0.

In particular, it follows by general measure theory that for some sequence ǫ_n ↓ 0 the convergence holds pointwise a.e. In particular (2.8) holds for f ∈ S(R). But then, by our previous calculations, Cf(x + iǫ_n) converges pointwise for all x, and we conclude that for f ∈ S and a.e. x

(C₊f)ˆ(ξ) = (1/2)(1 + sgn(ξ)) f̂(ξ), (C₋f)ˆ(ξ) = −(1/2)(1 − sgn(ξ)) f̂(ξ), (Hf)ˆ(ξ) = sgn(ξ) f̂(ξ),

where sgn(ξ) = +1 if ξ > 0 and sgn(ξ) = −1 if ξ < 0. Thus C±f and, hence, Hf extend to bounded operators on L²(R). We have shown the following: for f ∈ L²,

∥C±f∥₂ ≤ ∥f∥₂, and similarly ∥Hf∥₂ = ∥f∥₂.

The following argument of Riesz shows that in fact C±, and hence H, are bounded in L^p(R) for all 1 < p < ∞. Consider first the case p = 4. Suppose f ∈ C₀^∞(R), the infinitely differentiable functions with compact support, is real valued, and write 2C₊f = u + iv with u = f and v real. Then Cf(z) = O(1/z) as z → ∞ and Cf(z) is continuous down to the axis. By Cauchy's theorem,

∮_{C_R} (2Cf(z))⁴ dz = 0,

where C_R is given in Figure 2.9; as R → ∞ the semicircle contributes nothing, and letting the contour descend to the axis we obtain ∫_R (u + iv)⁴ dx = 0. Taking the real part yields

∫ v⁴ dx ≤ 6 ∫ u² v² dx ≤ (1/2) ∫ v⁴ dx + 18 ∫ u⁴ dx,

by the elementary inequality 6u²v² ≤ (1/2)v⁴ + 18u⁴. Hence ∥v∥₄ ≤ c ∥u∥₄ = c ∥f∥₄ and ∥C₊f∥₄ ≤ c′ ∥f∥₄. The case when f is complex valued is handled by taking real and imaginary parts. Thus, by density, H maps L⁴ boundedly to L⁴.
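A quick discrete check (mine, not from the text) of the multiplier identity just proved. In the classical convention H̃f(x) = (1/π) p.v. ∫ f(s)/(x − s) ds one has (H̃f)ˆ(ξ) = −i sgn(ξ) f̂(ξ), and the operator H = C₊ + C₋ above is i H̃, matching (Hf)ˆ = sgn · f̂. A known exact pair is H̃[1/(1 + x²)] = x/(1 + x²).

```python
import numpy as np

N, L = 2**16, 500.0
x = (np.arange(N) - N // 2) * (2 * L / N)           # uniform grid on [-L, L)
f = 1.0 / (1.0 + x**2)

xi = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)      # angular frequencies
Hf = np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(f)).real   # multiplier -i*sgn(xi)

exact = x / (1.0 + x**2)
err = np.max(np.abs(Hf - exact)[np.abs(x) < L / 2])
print(err)   # small; limited only by the periodization of the slow 1/x tails
```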
Exercise 2.11. Show that H maps L p → L p for all 1 < p < ∞. Hints: (1) Show that the above argument works for all even integers p.
(2) Show that the result follows for all p ≥ 2 by interpolation.
(3) Show that the result for 1 < p < 2 now follows by duality.
Exercise 2.13. Show that H is not bounded from L¹ → L¹. (However H maps L¹ → weak-L¹.) As indicated in Lecture 1, RHPs take place on contours which self-intersect (see Figure 2.12).
We will need to know, for example, that if f is supported on Σ₁, say, and we consider the Cauchy transform Cf restricted to another arc Σ₂, then this restriction is bounded in L^p. Here is a prototype result, which one can prove using the Mellin transform, which we recall is the Fourier transform for the multiplicative group {x > 0}. We have [5, p. 88] the following: for f ∈ L²(0, ∞), 0 < θ < 2π, and r > 0, set

(C_θ f)(r) = (1/2πi) ∫_0^∞ f(s)/(s − r e^{iθ}) ds.

Then

(2.14) ∥C_θ f∥_{L²(0,∞)} ≤ c_θ ∥f∥_{L²(0,∞)}.

One can also show that for any 1 < p < ∞, ∥C_θ f∥_{L^p(0,∞)} ≤ c_{θ,p} ∥f∥_{L^p(0,∞)} for some c_{θ,p} < ∞. Results such as (2.14) are useful in many ways. For example, we have the following result.
Theorem 2.15. Suppose f is sufficiently smooth (e.g. Hölder continuous) with sufficient decay. Then Cf is uniformly Hölder-½ in C₊ and in C₋. In particular, Cf is continuous down to the axis in C₊ and in C₋, and Cf(z) → 0 as R = |z| → ∞.

We now consider general contours Σ ⊂ C̄ = C ∪ {∞}, which are composed curves: by definition a composed curve Σ is a finite union of arcs {Σᵢ}, i = 1, . . . , n, which can intersect only at their end points. Each arc Σᵢ is homeomorphic to an interval. Here C̄ has the natural topology generated by the open sets of C together with the complements of the compact sets of C. Thus R̄ = R ∪ {∞} is a composed curve, on the understanding that it is a union of (at least) two arcs.
Although it is possible, and sometimes useful, to consider other function spaces (e.g. Hölder continuous functions), we will only consider RHPs in the sense of L p (Σ) for 1 < p < ∞.
So the first question is "What is L^p(Σ)?". The natural measure theory for each arc Σᵢ is generated by arc length measure µ as follows. If z₀ = ϕ(t₀) and z_n = ϕ(t_n) are the end-points of some arc Σ ⊂ C, and z₀, z₁, . . . , z_n is any partition of [z₀, z_n] = {ϕ(t) : t₀ ≤ t ≤ t_n} (we assume z_{i+1} succeeds z_i in the ordering induced on Σ by ϕ, symbolically z_i < z_{i+1}, etc.), then set

L_{[z₀,z_n]} = sup Σᵢ |z_{i+1} − z_i|,

the supremum over all such partitions. If L < ∞ we say that the arc Σ = [z₀, z_n] is rectifiable and L_{[z₀,z_n]} is its arc length. We will only consider composed curves Σ that are locally rectifiable, i.e. for any R > 0, Σ ∩ {|z| < R} is rectifiable (note that the latter set is an at most countable union of simple arcs, and rectifiability of the set means that the sum of the arc lengths of these arcs is finite; in particular, the unit circle T, as a union of 2 rectifiable subarcs, is rectifiable, and R is locally rectifiable). For any interval [α, β) ⊂ Σᵢ set µᵢ([α, β)) = L_{[α,β)}. Now the sets {[α, β) : α < β on Σᵢ} form a semi-algebra (see [30]) and hence µᵢ can be extended to a complete measure on a σ-algebra A containing the Borel sets on Σᵢ. The restriction of the measure to the Borel sets is unique. For 1 ≤ p < ∞, L^p(Σ, dµ) is then defined in the usual way, and all the "usual" properties go through. One usually writes dµ = |dz|.

Exercise 2.17. |dz| is also equal to Hausdorff-1 measure on Σ.
Note, however, that a union Σ = Σ₁ ∪ Σ₂ of two arcs which intersect at infinitely many points is not a composed curve, although Σ₁ and Σ₂ are both locally rectifiable.
For Σ as above we define the Cauchy operator, for h ∈ L^p(Σ, |dz|), 1 ≤ p < ∞, by

C_Σ h(z) = (1/2πi) ∫_Σ h(s)/(s − z) ds, z ∈ C\Σ;

given the homeomorphisms parametrizing each arc, the integrand (clearly) lies in L^p(ds : [0, sᵢ)). Now the fact of the matter is that many of the properties that were true for C_Σ when Σ = R go through for C_Σ in the general situation. (See, in particular, [24].) In particular, for f ∈ L^p(Σ, dµ), the non-tangential limits

(2.19) C±f(z) = lim C_Σ f(z′), z′ → z non-tangentially from the (±)-side,

exist pointwise a.e. on Σ; Figure 2.20 demonstrates non-tangential limits. Moreover,

C±f = ±(1/2) f + (1/2) Hf a.e. on Σ,

where the Hilbert transform is now given by

(2.22) Hf(z) = lim_{ǫ↓0} (1/πi) ∫_{Σ\{|s−z|<ǫ}} f(s)/(s − z) ds,

and the points z ∈ Σ for which the non-tangential limits (2.19) exist are precisely the points for which the limit in (2.22) exists.
Again, for f ∈ L^p(Σ, dµ) with 1 ≤ p < ∞, C₊f − C₋f = f and C₊f + C₋f = Hf. The following issue is crucial for the analysis of RHPs. Question: for which locally rectifiable contours Σ are the operators C± and H bounded in L^p, 1 < p < ∞?
Quite remarkably, it turns out that there are necessary and sufficient conditions on a simple rectifiable curve for C ± , H to be bounded in L p (Σ), 1 < p < ∞. The result is due to many authors, starting with Calderón [7], and then Coifman, Meyer and McIntosh [9], with Guy David [10] (see [6] for details and historical references) making the final decisive contribution.
Let Σ be a simple, rectifiable curve in C. For any z ∈ Σ and any r > 0, let ℓ_r(z) denote the arc length of Σ ∩ D_r(z), where D_r(z) is the ball of radius r centered at z (see Figure 2.23), and set

λ_Σ = sup_{z∈Σ, r>0} ℓ_r(z)/r.
Theorem 2.24. Suppose λ_Σ < ∞. Then for any 1 < p < ∞, the limit in (2.22) exists for a.e. z ∈ Σ and defines a bounded operator in L^p(Σ), with

∥H∥_{L^p(Σ)→L^p(Σ)} ≤ φ_p(λ_Σ)

for some function φ_p depending only on p and λ_Σ. Conversely, if the limit in (2.22) exists a.e. and defines a bounded operator H in L^p(Σ) for some 1 < p < ∞, then H gives rise to a bounded operator for all p, 1 < p < ∞, and λ_Σ < ∞.
An excellent reference for the above Theorem, and more, is [6].
The fact that φ_p is independent of Σ is very important for the nonlinear steepest descent method, where one deforms curves in a similar way to the classical steepest descent method for integrals.
Carleson curves are sometimes called AD-regular curves: the A and D denote Ahlfors and David. To get some sense of the subtlety of the above result, consider a curve Σ with a cusp at the origin (see Figure 2.26). Clearly λ_Σ < ∞, so that the Hilbert transform H_Σ is bounded in L^p, 1 < p < ∞.
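The Carleson condition is easy to probe numerically. The crude sketch below (mine; the sampling of centers and radii is arbitrary) estimates λ_Σ for a polygonal discretization of the cusp curve y = |x|^{3/2} and confirms that the supremum stays bounded.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)
pts = np.column_stack([t, np.abs(t)**1.5])             # the cusp curve y = |x|^{3/2}
seg = np.diff(pts, axis=0)
slen = np.hypot(seg[:, 0], seg[:, 1])                  # segment arc lengths
mid = 0.5 * (pts[1:] + pts[:-1])                       # segment midpoints

lam = 0.0
for z in pts[::400]:                                   # sample centers z on Sigma
    d = np.hypot(mid[:, 0] - z[0], mid[:, 1] - z[1])   # distances of segments to z
    for r in np.geomspace(1e-3, 2.0, 60):
        ell = slen[d < r].sum()                        # approx arc length in D_r(z)
        lam = max(lam, ell / r)
print(lam)   # stays bounded (of order 2, much as for a straight line)
```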
Lecture 3
We now make the notion of a RHP precise (see [8,17,28]). Let Σ be a composed, oriented Carleson contour in C, let v : Σ → GL(n, C) be a jump matrix on Σ with v, v⁻¹ ∈ L∞(Σ), and let C_Σ h, H_Σ h be the associated Cauchy and Hilbert operators.
We say that a pair of L^p-functions f± lies in ∂C(L^p) if f± = C±h for some h ∈ L^p(Σ). In turn we call f(z) ≡ Ch(z), z ∈ C\Σ, the extension of f± = C±h ∈ ∂C(L^p) off Σ.
Definition 3.2. Fix 1 < p < ∞. Given Σ, v and a function F ∈ L^p(Σ), we say that M± ∈ ∂C(L^p) solves an inhomogeneous RHP of the second kind (IRHP2_p) if the pair M± satisfies the corresponding jump relation across Σ with inhomogeneous term F. Recall that m solves the normalized RHP (Σ, v) if, at least formally,
• m(z) is an n × n analytic function in C\Σ,
• m₊(z) = m₋(z) v(z) for z ∈ Σ,
• m(z) → I as z → ∞.
Theorem 3.6. If f and v are such that f(v − I) ∈ L^p(Σ) for some 1 < p < ∞, then solutions of IRHP1_p with data f correspond to solutions of IRHP2_p with data F = f(v − I), and conversely.
The first part of this result is straightforward; the converse is more subtle and is left as an exercise. We now show that the RHPs IRHP1_p and IRHP2_p, and, in particular, the normalized RHP (Σ, v)_p, are intimately connected with the singular integral operator 1 − C_ω, where ω = (ω₊, ω₋) comes from a pointwise factorization v = v₋⁻¹ v₊ with ω± = ±(v± − I), and C_ω h ≡ C₊(h ω₋) + C₋(h ω₊).
We summarize the above calculations as follows:

1 − C_ω is invertible in L^p(Σ)
⇐⇒ IRHP1_p has a unique solution for all f ∈ L^p(Σ)
⇐⇒ IRHP2_p has a unique solution for all F ∈ L^p(Σ).
Moreover, if one, and hence all three, of the above conditions is satisfied, then for all f ∈ L^p(Σ) the solutions correspond explicitly: if m± solves IRHP1_p with the given f, then M± solves IRHP2_p with F = f(v − I) (∈ L^p!), and if M± solves IRHP2_p with F ∈ L^p(Σ), then the associated m± solves IRHP1_p. Finally, if f ∈ L∞(Σ) and v± − I ∈ L^p(Σ), then (3.11) remains valid provided we interpret the relevant Cauchy integrals appropriately. This is true, in particular, for the normalized RHP (Σ, v)_p where f ≡ I.
Note that if we take v₊ = v, v₋ = I, in particular, then ω₊ = v − I, ω₋ = 0, and C_ω h = C₋(h(v − I)). The above Proposition implies, in particular, that if µ ∈ I + L^p solves

(3.13) µ = I + C_ω µ

in the sense of (3.12), i.e. µ = I + ν, ν ∈ L^p,

(3.14) ν = C_ω I + C_ω ν,

then m₊ = µv and m₋ = µ solve the normalized RHP, with extension m(z) = I + C(µ(v − I))(z) off Σ. It is in this precise sense that the solution of the normalized RHP is equivalent to the solution of a singular integral equation (3.13), (3.14) on Σ.
One very important consequence of the proof of Proposition 3.10 is the following. Let m± solve IRHP1_p with the given f and let M± solve IRHP2_p with F = f(v − I). Then the norms ∥m±∥_p, ∥M±∥_p and ∥(1 − C_ω)⁻¹∥_{L^p} control each other: for some constants c = c_p, c′ = c′_p,

(3.16) ∥(1 − C_ω)⁻¹∥_{L^p} ≤ c sup{ ∥m±∥_p : ∥f∥_p ≤ 1 }, (3.17) ∥(1 − C_ω)⁻¹∥_{L^p} ≤ c′ sup{ ∥M±∥_p : ∥f∥_p ≤ 1 }.

In particular, if we know, or can show, that ∥m±∥_p ≤ const ∥f∥_p, or ∥M±∥_p ≤ const ∥f∥_p, then we can conclude from (3.16) or (3.17) that (1 − C_ω)⁻¹ is bounded in L^p with a corresponding bound. Conversely, if we know that (1 − C_ω)⁻¹ exists, then the above calculations show that ∥m±∥_p ≤ c̃ ∥f∥_p and ∥M±∥_p ≤ c̃′ ∥f∥_p for corresponding constants c̃, c̃′.
Proof. Suppose m̂± = I + C± ĥ, ĥ ∈ L^p(Σ), is a second solution of the normalized RHP. We have, by assumption, m±⁻¹ = I + C± k for some k ∈ L^q(Σ). (It is an exercise to show that I + (Ck)(z), the extension of m±⁻¹ to C\Σ, is in fact m(z)⁻¹.) Then, arguing as above, m̂± = m±. These results immediately imply that the normalized RHP (Σ = R, v_{x,t}) for MKdV, with v_{x,t} given by (1.23), has a unique solution. Indeed, for Σ = R we have:

Exercise 3.22. Both C₊ and −C₋ are orthogonal projections in L²(R), and so ∥C±∥_{L²} = 1.
It follows that for each x, t ∈ R, (1 − C_{ω_{x,t}})⁻¹ exists in L²(R) (one checks that ∥C_{ω_{x,t}}∥_{L²} < 1 when sup_z |r(z)| < 1), and the proof of the existence and uniqueness for (Σ, v_{x,t}) follows from Proposition 3.10. On the other hand, uniqueness alone already follows from Theorem 3.21, as det v(z) ≡ 1 on R. Now it turns out that a key role in the theory of RHPs is played by Fredholm operators. Recall that a bounded linear operator T from a Banach space X to a Banach space Y is Fredholm if dim ker T < ∞ and dim coker T < ∞, i.e. Y/ran T is a finite dimensional space.
If T is Fredholm, we define index T ≡ dim ker T − dim coker T .
Exercise 3.24. T : X → Y is Fredholm iff it has a pseudo-inverse S ∈ L(Y, X) such that ST = 1 X + K and T S = 1 Y + L where K is a compact operator in L(X) and L is a compact operator in L(Y).
We know that a normalized RHP (Σ, v)_p, say, has a (unique) solution if (1 − C_ω)⁻¹ exists. The situation where we know, for example, that ∥C_ω∥_{L²} < 1, as in the example (Σ = R, v_{x,t}) above, so that (1 − C_ω)⁻¹ exists, is very rare. For example, for the KdV equation on R the associated RHP is exactly the same as (R, v_{x,t}) for MKdV, except that now, generically, |r(0)| = 1. Thus ∥r∥_∞ = 1 and the above proof of the existence and uniqueness for the RHP breaks down. A more general approach to proving the existence and uniqueness of solutions to normalized RHPs is to attempt the following:
(i) show that 1 − C_ω is Fredholm;
(ii) show that ind(1 − C_ω) = 0;
(iii) show that ker(1 − C_ω) = {0}.
Then it follows that 1 − C_ω is a bijection, and hence the normalized RHP (Σ, v) has a unique solution.
Let's see how this goes for KdV with the normalized RHP (Σ = R, v_{x,t}), but now with r satisfying (3.25), (3.26). By our previous comments (see the Remark above), it is enough to consider the special case v₊ = v, v₋ = I, so that ω₊ = v − I and ω₋ = 0. Thus C_ω h = C₋(h(v − I)). We assume r(z) is continuous and r(z) → 0 as |z| → ∞. Let S be the operator h ↦ C₋(h(v − I)) = C_ω h. One shows that 1 − S = 1 − C_ω has a pseudo-inverse modulo compact operators, and we see that 1 − C_ω is Fredholm (cf. Exercise 3.24). Exercise: carry this out. Hint: v − I is a continuous function which → 0 as |z| → ∞ and hence can be approximated in L∞(R) by finite linear combinations of functions of the form a/(z − z′) for suitable constants a and points z′ ∈ C\R, for which the corresponding operators can be handled explicitly. Then use the following fact:

Exercise 3.28. If T_n, n ≥ 1, are compact operators in L(X, Y) and ∥T_n − T∥ → 0 as n → ∞ for some operator T ∈ L(X, Y), then T is compact.
Now deform r ⇝ γr, 0 ≤ γ ≤ 1, and let ω(γ) denote the corresponding data. The proof above shows that C_{ω(γ)} is a norm continuous family of Fredholm operators, and so

ind(1 − C_ω) = ind(1 − C_{ω(γ=1)}) = ind(1 − C_{ω(γ=0)}) = 0,

as C_{ω(γ=0)} = 0 and the index of the identity operator is clearly 0. Finally, suppose µ ∈ L²(R) solves (1 − C_ω)µ = 0. Then, using (3.9), m₊ = µv and m₋ = µ solve m₊ = m₋v, m± ∈ ∂C(L²). Consider P(z) = m(z)(m(z̄))* for z ∈ C₊, where m(z) is the extension of m± off R, i.e. if m± = C±h, h ∈ L², then m(z) = (Ch)(z). Then for a contour Γ_{R,ǫ} in C₊, pictured in Figure 3.30, ∮_{Γ_{R,ǫ}} P(z) dz = 0, as P(z) is analytic. Letting ǫ ↓ 0 and R → ∞, we obtain (exercise)

∫_R m₊(s) (m₋(s))* ds = 0.

Taking adjoints and adding, we find

∫_R m₋(s) [v(s) + v(s)*] (m₋(s))* ds = 0.

But a direct calculation shows that v + v* is diagonal, with

v(s) + v(s)* = diag( 2(1 − |r(s)|²), 2 ).

Now since |r(z)| < 1 a.e. (in fact everywhere except z = 0), we conclude that m₋(z) = 0. But µ = m₋, and so we see that ker(1 − C_ω) = {0}. The result of the above chain of arguments is that the solution of the normalized RHP (Σ, v_{x,t}) for KdV exists and is unique. Such Fredholm arguments have wide applicability in Riemann-Hilbert theory [22].
One last general remark. The scalar case n = 1 is special. This is because the RHP can be solved explicitly by formula. Indeed, if m₊ = m₋v, then it follows that (log m)₊ = (log m)₋ + log v, and hence log m(z) is given by Plemelj's formula, which provides the general solution of additive RHPs, via the formula

(3.31) m(z) = exp( (1/2πi) ∫_Σ log v(s)/(s − z) ds ),

which is easily checked directly. However, there is a hidden subtlety in the business: on R, say, although v(s) − 1 may go rapidly to 0 as s → ±∞, v(s) may wind around 0, and so log v(s) may not be integrable at both ±∞. Thus there is a topological obstacle to the existence of a solution of the RHP. If n > 1, there are many more such "hidden" obstacles.
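The scalar case can be tested end to end on the circle. In the sketch below (my construction; the symbol v = exp(az + b/z) with small a, b is an arbitrary winding-number-zero choice), the normalized RHP m₊ = m₋v on T is solved two ways: by Plemelj's formula (3.31), which gives µ = m₋ = e^{−b/z} on the boundary, and by iterating the singular integral equation µ = 1 + C₋(µ(v − 1)) of (3.13), using that on T the operator C₊ keeps the Fourier modes k ≥ 0 while C₋f = −Σ_{k<0} f̂_k z^k.

```python
import numpy as np

N = 256
z = np.exp(2j * np.pi * np.arange(N) / N)      # grid on the unit circle
a, b = 0.2, 0.15                               # small, so the iteration contracts
v = np.exp(a * z + b / z)

def C_minus(f):
    c = np.fft.fft(f) / N                      # Fourier coefficients (wrapped)
    c[: N // 2] = 0.0                          # kill the modes k >= 0
    return -np.fft.ifft(c * N)                 # C_- f = -sum_{k<0} c_k z^k

mu = np.ones(N, dtype=complex)
for _ in range(200):                           # fixed-point iteration for (3.13)
    mu = 1.0 + C_minus(mu * (v - 1.0))

print(np.max(np.abs(mu - np.exp(-b / z))))     # matches Plemelj's answer
```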
Lecture 4

Let dµ be a measure on R with all moments finite, and let π_n(x) = xⁿ + · · ·, n ≥ 0, denote the associated monic orthogonal polynomials. (Here we assume that dµ has infinite support: otherwise there are only a finite number of such polynomials.) Associated with the π_n's are the orthonormal polynomials

(4.1) P_n(x) = γ_n π_n(x), γ_n > 0, n ≥ 0,

such that ∫ P_n P_m dµ = δ_{nm}. Orthogonal polynomials are of great historical and continuing importance in many different areas of mathematics, from algebra, through combinatorics, to analysis. The classical orthogonal polynomials, such as the Hermite polynomials, the Legendre polynomials, the Krawtchouk polynomials, are well known and much is known about their properties. In view of our earlier comments it should come as no surprise that much of this knowledge, particularly asymptotic properties, follows from the fact that these polynomials have integral representations analogous to the integral representation for the Airy function in the first lecture. For example, for the Hermite polynomials one has the integral representation

H_n(x) = (n!/2πi) ∮_C e^{2xt − t²} t^{−n−1} dt,

where C is a (small) circle enclosing the origin (note: the H_n's are not monic, but are proportional to the π_n's, H_n(x) = c_n π_n(x), where the c_n's are explicit), and the asymptotic behavior of the H_n's follows from the classical steepest descent method. For general weights, however, no such integral representations are known. The Hermite polynomials play a key role in random matrix theory, in the so-called Gaussian Unitary, Orthogonal and Symplectic Ensembles. However, it was long surmised that local properties of random matrix ensembles are universal, i.e., independent of the underlying weights. Verifying this for general weights such as e^{−x⁴} dx, e^{−(x⁶+x⁴)} dx, etc., instead of the weight e^{−x²} dx for the Hermite polynomials, boils down, at the technical level, to analyzing the asymptotics of the polynomials orthogonal with respect to the weights e^{−x⁴} dx, e^{−(x⁶+x⁴)} dx, etc., for which no integral representations are known. What to do?
It turns out, however, that orthogonal polynomials with respect to an arbitrary weight can be expressed in terms of a RHP. Suppose dµ(x) = ω(x) dx for some ω(x) ≥ 0 such that ∫_R |x|^m ω(x) dx < ∞, m = 0, 1, 2, . . . , and suppose for simplicity that ω is normalized so that (4.2) holds. Let Y^{(n)}(z) be analytic in C\R, with boundary values satisfying

Y₊^{(n)}(z) = Y₋^{(n)}(z) ( 1 , ω(z) ; 0 , 1 ), z ∈ R,

normalized so that Y^{(n)}(z) z^{−nσ₃} → I as z → ∞, σ₃ = diag(1, −1).

Exercise 4.3. Show that we then have (see, e.g., [12])

Y^{(n)}(z) = ( π_n(z) , C(π_n ω)(z) ; −2πi γ²_{n−1} π_{n−1}(z) , −2πi γ²_{n−1} C(π_{n−1} ω)(z) ),

where C = C_R is the Cauchy operator on R, π_n, π_{n−1} are the monic orthogonal polynomials with respect to ω(x) dx, and γ_{n−1} is the normalization coefficient for π_{n−1} as in (4.1). (Note that by (4.2) and Theorem 2.15, Y^{(n)}(z) is continuous down to the axis for all z.) This discovery is due to Fokas, Its and Kitaev [21]. Moreover, this is just exactly the kind of problem to which the nonlinear steepest descent method can be applied to obtain ([14,15]) the asymptotics of the π_n's with precision comparable to the classical cases, Hermite, Legendre, . . . , and so prove universality for unitary ensembles (and later, for Orthogonal and Symplectic Ensembles of random matrices, Deift and Gioev, and Shcherbina; see [13] and the references therein).
As mentioned earlier, RHPs are useful not only for asymptotic analysis, but also to analyze analytical and algebraic issues. Here we show how RHPs give rise to difference equations, or differential equations, in other situations.
Consider the solution Y^{(n)} of the orthogonal polynomial RHP (R, v), v = ( 1 , ω ; 0 , 1 ).
The key fact is that the jump matrix ( 1 , ω ; 0 , 1 ) is independent of n: the dependence on n enters only through the boundary condition Y^{(n)}(z) z^{−nσ₃} → I as z → ∞. Set R(z) ≡ Y^{(n+1)}(z) (Y^{(n)}(z))⁻¹. Hence R(z) has no jump across R, and so, by an application of Morera's Theorem, R(z) is in fact entire. But as z → ∞,

R(z) = [Y^{(n+1)}(z) z^{−(n+1)σ₃}] z^{σ₃} [Y^{(n)}(z) z^{−nσ₃}]⁻¹

grows at most linearly. Thus R(z) must be a polynomial of order 1, R(z) = Az + B for suitable A and B, or Y^{(n+1)}(z) = (Az + B) Y^{(n)}(z), which is a difference equation for orthogonal polynomials with respect to a fixed weight. Writing out the entries yields the classical three-term recurrence for the orthonormal polynomials:

b_n p_{n+1}(z) + (a_n − z) p_n(z) + b_{n−1} p_{n−1}(z) = 0, n ≥ 0, a_n ∈ R, b_n > 0, b_{−1} ≡ 0.
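The recurrence just derived can be generated numerically for a weight with no classical integral representation. The sketch below (mine; the grid and truncation are arbitrary choices) runs the standard Stieltjes procedure for w(x) = e^{−x⁴} and prints the recurrence coefficients; by the symmetry of the weight, the a_n should vanish up to quadrature error.

```python
import numpy as np

x, dx = np.linspace(-6, 6, 200001, retstep=True)
w = np.exp(-x**4) * dx                          # quadrature weights for the measure

def ip(f, g):
    return np.sum(f * g * w)                    # inner product <f, g>

p_prev = np.zeros_like(x)
p = np.ones_like(x) / np.sqrt(ip(np.ones_like(x), np.ones_like(x)))
b_prev = 0.0
for n in range(6):
    a = ip(x * p, p)                            # a_n; ~0 for an even weight
    q = x * p - a * p - b_prev * p_prev         # project out p_n and p_{n-1}
    b = np.sqrt(ip(q, q))                       # b_n
    p_prev, p, b_prev = p, q / b, b
    print(n, a, b)
```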
Whereas the RHP for orthogonal polynomials comes "out of the blue", there are some systematic methods to produce RHP representations for certain problems of interest. This is true in particular for RHPs associated with ordinary differential equations. For example, consider the ZS-AKNS equation (Zakharov-Shabat, Ablowitz-Kaup-Newell-Segur; see, e.g., [17])

(4.7) ψ_x(x, z) = i z σ ψ(x, z) + Q(x) ψ(x, z), σ = (1/2) diag(1, −1).

Here z ∈ C, Q(x) is off-diagonal with entries built from a potential q(x), and q(x) → 0 at some sufficiently fast rate as |x| → ∞. Equation (4.7) is intimately connected with the defocusing Nonlinear Schrödinger Equation (NLS)

(4.9) i q_t + q_xx − 2|q|² q = 0,

by virtue of the fact that the operator L = L(q) in (4.8), obtained by rewriting (4.7) as an eigenvalue problem Lψ = zψ, undergoes an isospectral deformation. In other words, if q = q(t) solves NLS, then the spectrum of L(t) is constant. Thus the spectrum of L(t) provides constants of the motion for (4.9), and so NLS is "integrable". The key fact is that there is a RHP naturally associated with L which expresses the integrability of NLS in a form that is useful for analysis. Here we follow Beals and Coifman, see [4]. Let q(x) in (4.8) be given with q(x) → 0 as |x| → ∞ sufficiently rapidly. Then for any z ∈ C\R there is a unique bounded solution ψ(x, z) of (4.7), normalized as in (4.12), such that:
(1) For fixed x, ψ(x, z) is analytic in C\R, and is continuous down to the axis.
Now clearly ψ₊(x, z) and ψ₋(x, z), z ∈ R, the boundary values from C±, are two fundamental solutions of (L − z)ψ = 0, and so for z ∈ R,

ψ₊(x, z) = ψ₋(x, z) v(z) for all x ∈ R,

where v(z) is independent of x. In other words, by (1) of Remark 4.11, ψ(x, ·) solves a RHP (Σ = R, v), normalized as in (4.12). In this way differential equations give rise to RHPs in a systematic way.
One can calculate (exercise) the precise form of v(z): one finds, again (cf. (1.23) for MKdV), a jump matrix built out of a single scalar function r(z) with sup_z |r(z)| < 1, the reflection coefficient. The map q ↦ r = R(q) is a bijection between suitable spaces: r = R(q), the direct map, is constructed from q via the solutions ψ(x, z) as above; the inverse map r ↦ R⁻¹(r) = q is constructed by solving the RHP (Σ, v), normalized by (4.12), for any fixed x, and reading off q from the large-z behavior of the solution (cf. (1.24) for MKdV). Now if q = q(t) = q(x, t) solves NLS, then r(t) = R(q(t)) evolves simply: the map t ↦ q(t) ↦ r(t) with log r(t) = log r(t = 0) − itz² linearizes NLS. This leads to the following formula for the solution of NLS with initial data q₀:

(4.14) q(t) = R⁻¹( e^{−itz²} R(q₀) ).

The effectiveness of this representation, which one should view as the RHP analog for NLS of the integral representation (1.2) for the Airy equation, depends on the effectiveness of the nonlinear steepest descent method for RHPs.
Question.
Where in the representation (4.14) is the information encoded that q(t) solves NLS?
The answer is as follows. Let ψ(x, z, t) be the solution of the RHP with jump matrix corresponding to r(t), normalized as in (4.12). Set H(x, z, t) = ψ(x, z, t) e^{−itz²σ} and observe that

(4.15) H₊(x, z, t) = H₋(x, z, t) v(z),

for which the jump matrix is independent of x and t. This means that we can differentiate: H_x H⁻¹ and H_t H⁻¹ have no jump across R, hence are entire, and by their behavior as z → ∞ they are polynomials in z; the resulting compatible pair of equations reduces directly to NLS. In this way RHPs lead to difference and differential equations. Another systematic way that RHPs arise is through the distinguished class of so-called integrable operators. Let Σ be an oriented contour in C and let f₁, . . . , f_n and g₁, . . . , g_n be bounded measurable functions on Σ. We say that an operator K acting on L^p(Σ), 1 < p < ∞, is integrable if it has a kernel of the form

K(z, z′) = Σᵢ₌₁ⁿ fᵢ(z) gᵢ(z′) / (z − z′), z, z′ ∈ Σ.

Integrable operators were first singled out as a distinguished class of operators by Sakhnovich [31] in the late 1960's, and their theory was developed fully by Its, Izergin, Korepin and Slavnov [26] in the early 1990's (see [11] for a full discussion). The famous sine kernel of random matrix theory,

K(z, z′) = sin(z − z′)/(π(z − z′)) = ( e^{iz} e^{−iz′} − e^{−iz} e^{iz′} ) / ( 2iπ(z − z′) ),

is a prime example of such an operator, as is likewise the well-known Airy kernel operator.
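For the sine kernel the associated determinant can be computed directly, giving something to compare any asymptotic analysis against. The sketch below (mine; it uses the standard Gauss-Legendre Nystrom discretization of Fredholm determinants, not anything from the text) evaluates det(1 − K) on [−s, s], the gap probability of random matrix theory; since tr K|_{[−s,s]} = 2s/π, the small-s behavior is 1 − 2s/π + O(s⁴).

```python
import numpy as np

def gap_probability(s, m=80):
    # Gauss-Legendre nodes/weights scaled to [-s, s]
    t, w = np.polynomial.legendre.leggauss(m)
    t, w = s * t, s * w
    d = np.subtract.outer(t, t)
    # sine kernel sin(x-y)/(pi*(x-y)), with diagonal limit 1/pi
    K = np.where(d == 0.0, 1.0 / np.pi,
                 np.sin(d) / (np.pi * np.where(d == 0.0, 1.0, d)))
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

for s in (0.1, 0.5, 1.0):
    print(s, gap_probability(s), 1.0 - 2.0 * s / np.pi)  # last column: small-s law
```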
Integrable operators form an algebra, but their most remarkable property is that their inverses can be expressed in terms of the solution of a naturally associated RHP. Indeed, let m(z) be the solution of the normalized RHP (Σ, v), where

v(z) = I_n − 2πi f(z) g(z)^T, f = (f₁, . . . , f_n)^T, g = (g₁, . . . , g_n)^T.
(Here we assume for simplicity that Σᵢ₌₁ⁿ fᵢ(z) gᵢ(z) = 0 for all z ∈ Σ, as in the sine kernel: otherwise (4.16) must be slightly modified.)
Then (1 − K)⁻¹ has the form 1 + L, where L is an integrable operator with kernel

L(z, z′) = Σᵢ₌₁ⁿ Fᵢ(z) Gᵢ(z′) / (z − z′), F = m± f, G = ((m±)⁻¹)^T g

(either choice of sign gives the same result). This means that if, for example, K depends on parameters, as in the case of the sine kernel, asymptotic problems involving K as the parameters become large are converted into asymptotic problems for a RHP, to which the nonlinear steepest descent method can be applied.
This framework applies, in particular, to Toeplitz determinants D_n = det(ϕ_{j−k})_{0≤j,k≤n}, built from the Fourier coefficients ϕ_k of a symbol ϕ on T. Sketch of proof. Let e_k, 0 ≤ k ≤ n, be the standard basis in C^{n+1}. Then the map U_n : e_k → z^k, 0 ≤ k ≤ n, z ∈ T, takes C^{n+1} onto the trigonometric polynomials P_n = {Σ_{j=0}^n a_j z^j} of degree n, and induces a map τ_n : P_n → P_n which is conjugate to X(ϕ).
Hence (exercise) one obtains the representations (4.26), (4.27) of D_n in terms of the solution of a normalized RHP. So we see that in order to evaluate D_n as n → ∞ we must evaluate the asymptotics of the solution m_t of the normalized RHP (T, v_t) as n → ∞, for each 0 ≤ t ≤ 1, and substitute this information into (4.27) using (4.26). This is precisely what can be accomplished [11] using the nonlinear steepest descent method.
Here we present the nonlinear steepest descent analysis in the case when ϕ(z) is analytic in an annulus A_ǫ = {z : 1 − ǫ < |z| < 1 + ǫ}, ǫ > 0, around T. The idea of the proof, which is a common feature of all applications of the nonlinear steepest descent method, is to move the z^{n+1} term (or its analog in the general situation) in v_t into {|z| < 1} and the z^{−n−1} term into {|z| > 1}: then as n → ∞ these terms are exponentially small and can be neglected. But first we must separate the z^{n+1} and z^{−n−1} terms of v_t algebraically. This is done using a lower-upper pointwise factorization of v_t, after which the problem is deformed onto circles Σ_ρ and Σ_{ρ⁻¹} of radii ρ < 1 and ρ⁻¹ > 1, with a new jump matrix ṽ. Now as n → ∞, ṽ(z) → I on Σ_ρ and on Σ_{ρ⁻¹}. This means that m̃ → m^∞, where m^∞ solves the normalized RHP with the limiting jump. But this RHP is a direct sum of scalar RHPs and hence can be solved explicitly, as noted earlier (cf. (3.31)). In this way we obtain the asymptotics of m as n → ∞, and hence the asymptotics of the Toeplitz determinant D_n.
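The asymptotics just described can be sanity-checked for an analytic symbol with explicitly known Fourier coefficients. In the sketch below (my choice of symbol, not from the text), ϕ(z) = exp(a(z + 1/z)) has ϕ_k = I_k(2a) (modified Bessel), log ϕ has c_{±1} = a, and the strong Szegő limit theorem predicts D_n → exp(Σ_{k≥1} k c_k c_{−k}) = e^{a²}.

```python
import numpy as np
from scipy.special import iv
from scipy.linalg import toeplitz, det

a = 0.5
for n in (2, 5, 10, 20):
    # phi_k = I_k(2a); the symbol is even, so I_{-k} = I_k and T_n is symmetric
    col = iv(np.arange(n + 1), 2 * a)
    print(n, det(toeplitz(col)), np.exp(a**2))   # D_n converges to e^{a^2}
```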
Here is what, alas, I have not done and what I had hoped to do in these lectures (see AMS Open Notes):
• Show that in addition to the usefulness of RHPs for algebraic and asymptotic purposes, RHPs are also useful for analytic purposes. In particular, RHPs can be used to show that the Painlevé equations indeed have the Painlevé property.
• Show that in addition to RHPs arising "out of the blue", as in the case of orthogonal polynomials, and systematically in the case of ODEs and also integrable operators, RHPs also arise in a systematic fashion in Wiener-Hopf theory.
• Describe what happens to a RHP when the operator 1 − C_ω is Fredholm, but not bijective.
• Finally, I have not succeeded in showing you how the nonlinear steepest descent method works in general. All I have shown is one simple case.
Calcium-Sensing Receptor Antagonist NPS 2143 Restores Amyloid Precursor Protein Physiological Non-Amyloidogenic Processing in Aβ-Exposed Adult Human Astrocytes
Physiological non-amyloidogenic processing (NAP) of amyloid precursor holoprotein (hAPP) by α-secretases (e.g., ADAM10) extracellularly sheds neurotrophic/neuroprotective soluble (s)APPα and precludes amyloid-β peptides (Aβs) production via β-secretase amyloidogenic processing (AP). Evidence exists that Aβs interact with calcium-sensing receptors (CaSRs) in human astrocytes and neurons, driving the overrelease of toxic Aβ42/Aβ42-os (oligomers), which is completely blocked by CaSR antagonist (calcilytic) NPS 2143. Here, we investigated the mechanisms underlying NPS 2143 beneficial effects in human astrocytes. Moreover, because Alzheimer’s disease (AD) involves neuroinflammation, we examined whether NPS 2143 remained beneficial when both fibrillary (f)Aβ25–35 and a microglial cytokine mixture (CMT) were present. Thus, hAPP NAP prevailed over AP in untreated astrocytes, which extracellularly shed all synthesized sAPPα while secreting basal Aβ40/42 amounts. Conversely, fAβ25–35 alone dramatically reduced sAPPα extracellular shedding while driving Aβ42/Aβ42-os oversecretion that CMT accelerated but not increased, despite a concurring hAPP overexpression. NPS 2143 promoted hAPP and ADAM10 translocation to the plasma membrane, thereby restoring sAPPα extracellular shedding and fully suppressing any Aβ42/Aβ42-os oversecretion, but left hAPP expression unaffected. Therefore, as anti-AD therapeutics calcilytics support neuronal viability by safeguarding astrocytes neurotrophic/neuroprotective sAPPα shedding, suppressing neurons and astrocytes Aβ42/Aβ42-os build-up/secretion, and remaining effective even under AD-typical neuroinflammatory conditions.
Results
The specific involvement of CaSRs in the following studies was shown by the inhibitory effects of a highly selective antagonist, NPS 2143, on the NO release and cAMP levels elicited by various treatments (see Supplementary Information and Fig. S1). The reversemer peptide Aβ 35-25 was always ineffective (not shown).
NPS 2143 rescues the fAβ25–35 ± CMT-induced block in the physiological shedding of sAPPα. Several studies have highlighted the key physiological roles hAPP NAP plays through the shedding of neurotrophic and neuroprotective sAPPα via α-secretase activity 18,25,65. Therefore, we investigated how sAPPα shedding is altered in human astrocytes exposed to fAβ25–35 ± CMT ± NPS 2143.
NPS 2143 drives plasma membrane translocation of hAPP in fAβ25–35 ± CMT-exposed astrocytes
Recent evidence indicates that promoting the delivery of hAPP to the plasma membrane or inhibiting the internalization of hAPP favours hAPP NAP 66. Hence, we investigated the distribution of hAPP in cortical adult human astrocytes following fAβ25–35 ± CMT ± NPS 2143 treatment. We biotinylated proteins on the astrocyte cell surface and then assessed the amount of biotinylated hAPP at the plasma membrane, with the remaining non-biotinylated hAPP regarded as "intracellular" hAPP (see Methods for details).
Thus, antagonizing Aβ•CaSR signalling intensified hAPP trafficking to the plasma membrane, an effect that was remarkably intensified by CMT.
Discussion
Currently, scant information is available about the mechanisms modulating hAPP proteolysis in cortical untransformed human neural cells. Our previous findings revealed that Aβ•CaSR signalling promotes the AP of hAPP, eliciting substantial increases in endogenous Aβ42 accumulation and secretion from adult human astrocytes and postnatal neurons [22][23][24]. Our present results show for the first time that a highly selective CaSR antagonist (calcilytic) rescues the physiological NAP of hAPP, maintaining neurotrophic and neuroprotective sAPPα shedding while fully suppressing pathological AP in Aβ-exposed human astrocytes and neurons, even in the presence of microglial CMT (Fig. 6). Although our results were obtained only in astrocytes, we previously demonstrated that the effects of Aβ•CaSR signalling and its antagonism by a calcilytic on Aβ42 metabolism are very similar in human neurons and astrocytes [22][23][24]. Hence, an extension of the present findings to neurons appears feasible, pending direct experimental confirmation. The concurrent suppression of NO overproduction and adenylate cyclase inhibition (Fig. S1) strengthens the view that NPS 2143 specifically antagonized Aβ•CaSR signalling in astrocytes.
Therefore, the present results prove for the first time that a calcilytic agent can effectively correct the balance of hAPP processing altered by Aβ•CaSR-signalling-mediated mechanisms in cortical human astrocytes (and likely neurons) (Fig. 6). The calcilytic rescues the α-secretase-mediated extracellular shedding of neurotrophic and neuroprotective sAPPα at the expense of the BACE1/β-secretase-mediated neurotoxic Aβ 42 overrelease. The former would safeguard neuronal trophism, viability, and synaptic connections. The latter is inherently dangerous because overproduced and oversecreted soluble Aβ 42 /Aβ 42 -os and their insoluble fibrillar derivatives are endowed with a pernicious self-propagating potential. These peptides can react with CaSRs in adjacent and farther neurons and astrocytes, triggering self-spreading and self-perpetuating vicious waves of Aβ•CaSR signalling, and their consequent accumulation releases further Aβ 42 /Aβ 42 -os surpluses, which likely sustain LOAD progression [22][23][24] . Most remarkably, calcilytics can break such vicious cycles and hence stop Aβ 42 oversecretion and intra-brain diffusion, thus safeguarding neuronal viability and function [22][23][24] . In addition, the calcilytic NPS 2143 elicits a robust downregulation of total CaSR levels in astrocytes, thereby inducing a lasting cell desensitization to exogenous Aβs•CaSR-driven noxious effects 22 .
Regarding the mechanisms of calcilytics, NPS 2143 notably (i) drives a substantial translocation of hAPP, as well as both the precursor (85 kDa; not shown) and active (55 kDa) forms of ADAM10 α-secretase, to the astrocyte plasma membrane; (ii) greatly increases the total ADAM10 α-secretase specific activity, particularly in the presence of CMT; (iii) maintains the intracellular levels of active ADAM10 (55 kDa) at or above basal values; and (iv) restores neurotrophic and neuroprotective sAPPα extracellular shedding close to untreated control levels while hindering the intracellular accumulation of sAPPα. Consequently, by concomitantly also suppressing any surplus Aβ42 release, NPS 2143 maintains the secreted Aβ42/sAPPα and Aβ42/Aβ40 ratios near control levels.
As with other type-I transmembrane proteins, hAPP is synthesized in the endoplasmic reticulum (ER). Then, hAPP undergoes maturation (glycosylation) while migrating to the Golgi/TGN compartment, where it is mainly found in neurons 66. Finally, hAPP reaches the plasma membrane via the constitutive secretory pathway, where it is inserted to be cleaved by ADAM10 (mainly) α-secretase, shedding the sAPPα ectodomain, which also occurs in a post-Golgi compartment 67. In addition, through the recognition of its YENPTY motif and clathrin-coated pits, hAPP can be quickly endocytosed from the plasma membrane and trafficked back to the membrane, delivered through endosomes to the lysosomal system for proteolysis 66, or alternatively cleaved by BACE1/β-secretase, shedding Aβs (the amyloidogenic pathway), particularly if retained in acidic late endosomes, the TGN, or the ER 67. Thus, favouring the plasma membrane trafficking or retention of hAPP blocks Aβ production while enhancing sAPPα extracellular shedding via ADAM10 cleavage. hAPP trafficking is regulated by factors that promote Aβ generation, such as the SNX family (SNX17 and SNX33), dynamin I, and the RAB GTPase family (RAB1B, RAB6, RAB8, and RAB11) 68. In addition, factors that regulate α-, β-, and γ-secretase trafficking are able to alter hAPP processing and, hence, impact the production of sAPPα or Aβs 68. Further investigations will clarify the roles played by such factors in human astrocytes.
The activation of a number of cell surface receptors, e.g., muscarinic acetylcholine receptors, platelet-derived growth factor (PDGF) receptors, serotonin/5-hydroxytryptamine (5-HT 4 ) receptors, and metabotropic glutamate receptors, reportedly exerts differential effects on hAPP AP or NAP 69 . Our findings add CaSRs to this group of receptors. These receptors activate various signalling pathways that regulate extracellular Aβ secretion and sAPPα shedding via changes in cytosolic [Ca 2+ ] i , cAMP, inositol 1,4,5-triphosphate, small Rac GTPases, and in the activity of a number of protein kinases, including PKA, PKC, mitogen activated protein kinase kinase (MAPKK), extracellular signal-regulated kinase (ERK), phosphatidylinositol-3-kinase (PI3K), and Src tyrosine kinase 69 . Reduced cholesterol levels also heighten ADAM10 activity and hinder hAPP endocytosis, thus enhancing sAPPα shedding from cultured cells 69 . Similar effects can be obtained via ADAM10 overexpression 70 , pharmacological muscarinic activation 32 or phorbol myristate acetate treatment in hAPP-transfected CHO cells 27 . Conversely, the AP of hAPP was favoured at the expense of sAPPα extracellular shedding following overexpression of BACE1/β-secretase 40 or the Swedish mutant form of hAPP (SweAPP), which is linked to a familial EOFAD and is more effectively cleaved by BACE1/β-secretase within the TGN 69 . In addition, knocking down ADAM10 25 and expressing a dominant-negative ADAM10 mutant in mice 70 both increased hAPP AP.
ADAM family members belong to the metzincin superfamily and are typically synthesized as inactive precursors (zymogens) 71 . The proteolytic removal of a conserved cysteine switch in the prodomain is necessary to activate these zymogens 71 . Our findings indicate that cleavage by proprotein convertases (e.g., furin and PC7 in HEK293 cells 72 ) into the 55-kDa ADAM10 active form occurs at the cell surface of human astrocytes rather than in late compartments of the secretory pathway. However, the complex mechanisms modulating α-secretase cleavage activity are not fully elucidated. ADAM10 is not the sole constitutive α-secretase in neurons 25,73 . The present findings indicate that antagonizing Aβ·CaSR signalling with a calcilytic agent, in the absence but more effectively in the presence of CMT, increases the regulated ADAM10 α-secretase specific activity in adult human astrocytes. Treatment with NPS 2143 drives the plasma membrane translocation of both ADAM10 and hAPP in fAβ 25-35 + CMT-exposed astrocytes. This finding reveals that Aβ·CaSR signalling alone restrains the vesicular transport of hAPP and ADAM10 to the plasma membrane, while raising hAPP intracellular levels and AP.
Although it increased ADAM10 α-secretase specific activity, CMT addition had little to no impact on daily and cumulative (i.e., over 72 h) extracellular sAPPα secretion. Only NPS 2143 addition restored sAPPα secretion to the levels of untreated astrocytes, showing how important blocking Aβ•CaSR signalling is in restoring hAPP NAP. Regarding the intracellular storage of sAPPα, which did not occur in the untreated astrocytes, CMT addition altered the kinetics but not the total amount stored in fAβ25–35-exposed astrocytes. As expected, NPS 2143 reduced most (but not all) of the sAPPα storage caused by fAβ25–35 treatment, even in the presence of CMT. The reasons why these minor sAPPα fractions were retained regardless of NPS 2143 and CMT treatment are not currently understood. Intracellular sAPPα accumulation has also been observed in other cellular models, including cultured human thyroid cells 50.
Increased cleavage of hAPP by α-secretase was previously suggested as a therapeutic approach to AD 32. Our present results strengthen the role of calcilytics as prospective drugs for AD therapy (Fig. 6). In this regard, the benefits of calcilytics largely outweigh the mild hyperparathyroidism they induce in humans, given that AD "inexorably kills the patient cognitively several years before his/her actual physical demise" [22][23][24]. Therefore, the negative consequences of calcilytics should prove negligible if clinical trials prove that they can halt AD development.
Methods
Cell cultures. Untransformed human adult astrocytes were isolated from anonymized surgical fragments of normal adult human temporal cortex (brain trauma leftovers) provided by several Neurosurgery Units after obtaining written informed consent from all the patients and/or their next-of-kin. Experimental use of isolated astrocytes was approved by the Ethical Committee of Verona University-Hospital Integrated Company. All human cells experiments were performed in accordance with the relevant guidelines and regulations of Verona University-Hospital Integrated Company. Cultures of astrocytes were set up as previously described 22.

Experimental protocol. Since astrocytes do not actively divide in the adult human brain, we employed them once they had reached mitotic quiescence. At experimental "0-h", culture flasks served partly as untreated controls receiving a change of fresh medium and partly received fresh medium with 20 µM of either fibrillar (f)Aβ25–35 or reversemer Aβ35–25 added. This dose of the fAβs had been found to be ideal in previous studies 22-24. Part of the treated cultures received 20 µM of fAβ25–35 once (at 0-h) plus a cytokine mixture trio (CMT), that is, IL-1β (20 ng mL−1), TNF-α (20 ng mL−1), and IFN-γ (70 ng mL−1) (all from PeproTech, London, England). A second and a third CMT bolus were added at 24-h and 48-h. The CaSR allosteric antagonist (calcilytic) NPS 2143 HCl (2-chloro-6-[(2R)-3-1,1-dimethyl-2-(2-naphtyl)-ethylamino-2-hydroxy-propoxy]-benzonitrile HCl; Tocris Bioscience, UK) 54 was dissolved in DMSO and next diluted in the growth medium at a final concentration of 100 nM. At experimental "0-h", "24-h", and "48-h", part of the astrocyte cultures were exposed for 30 min to NPS 2143 dissolved in fresh medium. Next, the NPS 2143-containing medium was removed and fresh (at 0.5-h) medium or the previously astrocyte-conditioned (at 24.5-h and 48.5-h) media were added again to the cultures. Cultures and cell-conditioned media were sampled at 24-h intervals. Phosphoramidon (10 μM; Sigma), an inhibitor of thermolysin and other proteases, was added to the media at the "0-h" experimental time.
Western immunoblotting (WB). At selected time points, control and treated astrocytes were scraped into cold PBS, sedimented at 200 × g for 10 min, and homogenized in T-PER™ tissue protein extraction reagent (Thermo Scientific, Rockford, USA) containing complete EDTA-free protease inhibitor cocktail (Roche, Milan). Equal amounts (10-30 µg) of protein from the samples were loaded on NuPAGE Novex 4-12% Bis-Tris polyacrylamide gels (Life Technologies Italia) and next blotted onto nitrocellulose membranes.
Biotinylation and isolation of astrocytes' plasmalemmal proteins. The Pierce™ Cell Surface Protein Isolation Kit (Thermo Scientific) served to biotinylate and isolate cell surface proteins. According to the supplier's procedure, the cell culture media were removed and astrocytes were washed twice with ice-cold PBS, followed by incubation with 0.25 mg mL−1 Sulfo-NHS-SS-Biotin in ice-cold PBS on a rocking platform for 30 minutes at 4 °C. The biotinylation reaction was quenched by adding 500 μl of the provided Quenching Solution (Pierce). Astrocytes were harvested by gentle scraping and pelleted by centrifugation at 500 × g for 5 minutes at 4 °C. After washing with TBS, astrocyte pellets were lysed using the provided Lysis Buffer (Pierce) containing a protease inhibitor cocktail (Roche) for 30 minutes on ice with intermittent vortexing. To get rid of cell remnants, the lysates were centrifuged at 10,000 × g for 2 minutes at 4 °C. To purify biotinylated proteins on Immobilized NeutrAvidin Gel, the clarified supernatant was incubated for 1 h at room temperature (RT) to allow the biotinylated proteins to bind to the NeutrAvidin Gel. The unbound proteins, representing the intracellular fraction, were collected by centrifugation of the column at 1,000 × g for 2 minutes. Any remaining unbound proteins were removed by washing thrice with Wash Buffer (Pierce). Finally, the biotinylated surface proteins were eluted from the biotin-NeutrAvidin Gel by incubation with 400 µL of SDS-PAGE Sample Buffer containing 50 mM DTT for 1 h at RT in an end-over-end tumbler, and were collected by column centrifugation at 1,000 × g for 2 minutes.
Assays of α-secretase specific activities. The ADAM10 and ADAM17 enzymatic activities were assayed by means of fluorescent methods using EnSens™ ADAM10 and EnSens™ ADAM17 activity detection kits (Enzium, Inc., Philadelphia, USA) in the cell lysates. Despite the highly overlapping substrate specificities of ADAM10 and ADAM17, EnSens™ substrates are able to differentiate between the two enzymes. Astrocyte lysates (20 μg) were incubated with the fluorogenic EnSens™ ADAM10 and EnSens™ ADAM17 substrates, respectively, for 1 h at RT, protected from light, according to the supplier's protocol. The fluorescence was recorded at excitation and emission wavelengths of 625-635 nm and 655-665 nm, respectively. The results were expressed as specific activity (means ± SEMs of ΔF µg−1 protein pertaining to each experimental group).

Assays of Aβ42, Aβ40, and sAPPα released into cell-conditioned growth media. Quantifications of Aβ42, Aβ40, and sAPPα were carried out by means of specific Aβ42 and Aβ40 Human/Rat High-Sensitive ELISA Kits (both from Wako, Japan), as previously described 22, and by means of a specific Human sAPPα High-Sensitive ELISA Kit (from IBL International). Briefly, the astrocyte-conditioned media samples were added with a protease inhibitor cocktail (Roche) and centrifuged for 10 minutes at 13,000 rpm to remove any cellular debris. Supernatants were tested in triplicate according to the manufacturer's protocol.
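For readers unfamiliar with how ELISA readouts become concentrations: kits of this type rely on a standard curve, typically fitted with a four-parameter logistic (4PL) model and then inverted for the samples. The sketch below is generic (the study followed the manufacturers' protocols; all numbers here are made-up placeholders).

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # standard 4PL dose-response model
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

std_conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # pg/mL standards
std_od = np.array([0.08, 0.15, 0.38, 0.85, 1.60, 2.10])     # hypothetical OD450

p, _ = curve_fit(four_pl, std_conc, std_od,
                 p0=[0.05, 2.3, 30.0, -1.0], maxfev=10000)

def od_to_conc(od):
    # invert the fitted 4PL curve to read sample concentrations off it
    bottom, top, ec50, hill = p
    return ec50 * (((top - bottom) / (od - bottom)) - 1.0) ** (1.0 / hill)

print(od_to_conc(np.array([0.30, 0.95, 1.40])))             # triplicate sample ODs
```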
Basal pharmacokinetic parameters of topically applied diacerein in pediatric patients with generalized severe epidermolysis bullosa simplex
Abstract Generalized severe epidermolysis bullosa simplex (EBS-gen sev) is caused by mutations within either the KRT5 or KRT14 gene, phenotypically resulting in blistering and wounding of the skin and mucous membranes after minor mechanical friction. In a clinical phase 2/3 trial, diacerein has recently been shown to significantly reduce blister numbers upon topical application. In this study we addressed basic pharmacokinetic parameters of locally applied diacerein in vitro and in vivo. Ex vivo experiments using a Franz diffusion cell confirmed the uptake and bio-transformation of diacerein to rhein in a porcine skin model. Rhein, the active metabolite of diacerein, was also detected in both urine and serum samples of two EBS-gen sev patients who topically applied a 1% diacerein ointment over a period of 4 weeks. The accumulated systemic levels of rhein in EBS-gen sev patients were lower than reported levels after oral application. These preliminary findings point towards the uptake and prolonged persistance of diacerein / rhein within the intended target organ - the skin. Further, they imply an acceptable safety profile at the systemic level. Trial registration DRKS. DRKS00005412. Registered 6 November 2013.
Generalized severe epidermolysis bullosa simplex (EBS-gen sev) is caused by mutations within either the keratin 14 (KRT14) or keratin 5 (KRT5) gene, resulting in a susceptibility of the skin towards mechanical trauma. Due to the autosomal dominant mode of inheritance, conventional therapeutic approaches require high efficiency not only in generating sufficient amounts of a wild type allele, but also in replacing or down-regulating the disease-causing copy. Although ex vivo gene therapy showed promising results in dystrophic and junctional subtypes of EB [1][2][3], these approaches are currently not applicable for dominantly inherited EBS. In addition to a small number of early-stage clinical trials or case reports on small molecule based treatment approaches for EBS [4], topically applied diacerein showed promising results in reducing blister numbers in two recent clinical studies [5,6]. In vitro studies addressing the mode of action showed that diacerein, an antagonist of IL-1ß, reduced the aggregation of mutated keratin 14 (K14) and 5 (K5) protein upon heat shock, which ultimately leads to a disruption of the intermediate filament (IF) network, a characteristic observed for most EBS-gen sev underlying mutations in vitro [7]. This IF fragility not only leads to an increased expression and maturation of IL-1ß but also to an activation of the c-jun N-terminal kinase (JNK) stress pathway, which, in a positive feedback loop, promotes KRT14 expression at increased levels [8]. In a pilot study, treatment of five EBS-gen sev patients demonstrated a positive effect of a 1% diacerein-containing ointment on blister reduction. Blister numbers were reduced by more than 70% in treated skin areas and the reduction remained stable for 6 weeks [6]. In a phase 2/3 clinical trial, 17 patients topically applied a 1% diacerein cream or placebo once daily during a 4-week period onto 3% of their total body surface area (BSA), presenting with blisters at the start of the treatment. The outcome of this trial was a significant reduction of blister numbers in 60% of patients upon diacerein treatment within 4 weeks of application. At the end of a 3-month follow-up, 87% of diacerein-treated patients achieved this positive outcome, further substantiating the observation of a long-term effect of the treatment [5]. Despite the availability of pharmacokinetic data on orally administered diacerein, no such data regarding a topical application are currently available [9]. We therefore analyzed the metabolism of a 1% diacerein ointment both in vitro and in vivo in a volunteer extension of the phase 2/3 trial [5], in order to verify the activation of the prodrug diacerein within the skin and to support our understanding of rhein mediating the reduction in blister formation. In addition, we performed in vitro experiments using a Franz diffusion cell system with porcine skin as a surrogate for human skin to investigate whether or not deacetylation of the prodrug diacerein occurs within the skin.
For that, skin samples (n = 5) were mounted on the 1 cm² Franz cell and treated with a 1% diacerein ointment [10]. During a 72-h time course, the 1% diacerein ointment was reapplied every 24 h and receptor medium was sampled for liquid chromatography tandem-mass spectrometry (LC-MS/MS) analysis after 6, 24, 48 and 72 h to evaluate the trans-epidermal permeation of diacerein/rhein [11]. In addition, 8 mm biopsies were taken from treated porcine skin at the end of the experiment, i.e.
after 72 h, after thorough removal of any ointment remains, in order to determine rhein levels within the skin (Fig. 1a). After 6 h, rhein was clearly detectable in the receptor medium in three out of five individual experiments (c_max_6h = 0.35 μg•mL−1). Continued drug application further increased rhein levels (at time points 24, 48, 72 h), with a c_max_72h of 6.39 μg•mL−1 and a mean concentration c_mean_72h of 3.41 μg•mL−1, proving the transformation of diacerein into its active metabolite during skin permeation. In addition, we were also interested in the amount of rhein present within the skin after 72 h. On average, 368 μg (SD = 85.7 μg) rhein was detected in the skin, meaning that 37.4% of the totally applied rhein, under the assumption of 100% conversion of diacerein to rhein, was retained within the skin after 72 h. Taking into account that 26 μg (SD = 17.1 μg, 2.7%) passed the skin, 589 μg (SD = 257.4 μg), representing 61.2% of the totally applied rhein (983 μg, SD = 276.6 μg), remained within the donor compartment (Fig. 1b, c). As only rhein, but not diacerein, was detected in both receptor medium and skin biopsies, we conclude that diacerein is rapidly metabolized within the skin into its active form rhein, which is relevant for the therapeutic strategy of treating EBS-gen sev patients.
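For transparency, the mass balance reported above can be restated in a few lines (a simple arithmetic restatement added here; note that the reported 61.2% is the study's per-experiment summary statistic, so it need not equal the ratio of the mean masses exactly).

```python
total = 983.0        # ug rhein applied, assuming full conversion of diacerein
in_skin = 368.0      # ug recovered in the skin at 72 h
permeated = 26.0     # ug recovered in the receptor medium
remaining = total - in_skin - permeated           # material left in the donor
print(in_skin / total, permeated / total, remaining / total)
# -> ~0.374, ~0.026, ~0.599; remaining equals the reported 589 ug
```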
In addition to the ability of skin to convert diacerein, we were interested in the pharmacokinetics in vivo, to assess systemic rhein levels. EBS-gen sev patients who had participated in the clinical phase 2/3 diacerein trial topically applied the 1% ointment over a period of four weeks onto 3% of their body surface areas (BSA) in a volunteer pharmacokinetic extension study of the clinical trial [5] (Fig. 2a). Given the burden of children with EBS-gen sev, only 2 patients were willing to participate in this pharmacokinetic (PK) trial. The treated BSA for patient 1 was a 310 cm² area on the right thigh, and for patient 2 a 210 cm² area stretching from the left thigh into the left groin, both presenting with blisters at the start of the treatment. In total, 123.4 g and 69.9 g of 1% diacerein cream, respectively, were applied, amounting to a calculated average daily dose of 34 mg rhein for patient 1 and 19 mg rhein for patient 2, under the assumption of complete conversion of diacerein. To evaluate systemic absorption upon topical application, blood and urine samples were obtained when starting the treatment and after 14 and 28 days. Rhein was detected in all samples from both patients. In patient 1, maximum serum levels of c_max_serum = 20.1 ng•mL−1 and creatinine-normalized maximum urine levels of c_max_urine = 39.9 ng•mL−1 were measured. In patient 2, 15.4 ng•mL−1 in serum and c_max_urine = 25.0 ng•mL−1 in urine were detected at maximum (Fig. 2b, c, Table 1). While serum levels remained rather stable, rhein levels differed significantly between patients after 4 weeks of treatment, potentially pointing towards differences in renal clearance, which will need to be taken into account in future studies.
In conclusion, comparing our results to the already published data on oral administration by Nicolas et al., treatment of 3% of the body surface for 4 weeks resulted in systemic rhein levels that were approximately 150-fold lower than the levels detected 24 h after single-dose oral intake. A maximum of 10.23 mg total rhein in the plasma was determined upon oral administration of a 50 mg single dose of diacerein [9]. Even when extrapolating our data from 3% BSA (rhein levels in serum: 20.1 ng•mL−1) up to a treatment of 90% BSA (603 ng•mL−1), which corresponds to covering the whole body except the head and genitals, the reported levels measured upon oral administration (9,100 ng•mL−1) would not be reached. As an anthraquinone derivative, orally administered diacerein has been reported to cause major side effects affecting the gastro-intestinal tract, so that the European Medicines Agency (EMA) no longer recommends its use in patients aged 65 years and older. However, topical application of diacerein renders the probability of such side effects highly unlikely.
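The extrapolation above is a simple linear scaling; the lines below (added for transparency) make the assumption explicit, namely dose-proportional absorption from 3% up to 90% BSA.

```python
c_max_3pct = 20.1            # ng/mL, patient 1 serum c_max at 3% BSA
scale = 90.0 / 3.0           # BSA ratio for the hypothetical whole-body case
oral_level = 9100.0          # ng/mL, reported after oral dosing
print(c_max_3pct * scale, oral_level)   # ~603 ng/mL, still well below oral levels
```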
Despite several attempts using both RNA and genome editing techniques to restore wild type KRT14 and KRT5, no causal therapy for EBS-gen sev is currently available to treat patients [12][13][14]. Therefore, treatments that reduce the characteristic skin manifestations, thereby increasing patients' quality of life, are urgently needed, and small molecules could provide a remedy. A few such approaches for different EB subtypes have been published during the last years, most of them being small clinical trials or case reports [15][16][17][18][19][20][21][22]. For EBS, however, none of these studies has reached the level of late phase clinical trials yet [23][24][25][26][27]. In order to reduce blister numbers and increase EBS-gen sev patients' quality of life, the anti-inflammatory effect of diacerein was investigated in a recent phase 2/3 clinical trial, which showed promising results that provided the basis for a worldwide phase III clinical trial (NCT03154333) [5]. Knowledge about basal pharmacokinetics will provide important information regarding the safety of the ointment.
In summary, our results demonstrate that the prodrug diacerein is metabolized to its active form rhein within the skin, thereby allowing the exertion of its anti-inflammatory effect in EBS-gen sev patient skin. In vivo, patients showed no side effects or complications related to the ointment over the time course of the treatment, matching the results of two clinical trials on EBS-gen sev including 22 patients in total, in which no treatment-related side effects were reported [5,6]. However, there are some major limitations of this study, especially as the in vivo data are limited to only two young test subjects. Given that the patient cohort included in this study comprises children who suffer from skin lesions and impaired wound healing, blood sampling was not compulsory as part of the previous phase 2/3 clinical trial. Making it compulsory would have drastically reduced patients' willingness to participate in the study, which could have caused recruitment failure in this particularly rare disease. Indeed, this is a major problem we face in many EB trials and in rare (pediatric) diseases in general. Nevertheless, we believe that preliminary data on PK are important in order to provide the basis for the more extensive PK studies that are necessary for drug development. Notably, based on such results, patient numbers for PK sampling can be properly calculated, potentially reducing the number of patients to be included.
Finally, we propose that 1% diacerein ointment is a safe and well-tolerated targeted therapy for the treatment of epidermolysis bullosa.
Revisiting the baby schema by a geometric morphometric analysis of infant facial characteristics across great apes
Infants across species are thought to exhibit specific facial features (termed the "baby schema"), such as a relatively bigger forehead and eyes and protruding cheeks, with the adaptive function of inducing caretaking behaviour from adults. There is abundant empirical evidence for this in humans but, surprisingly, the existence of a baby schema in non-human animals has not been scientifically demonstrated. We investigated which facial characteristics are shared across infants in five species of great apes: humans, chimpanzees, bonobos, mountain gorillas, and Bornean orangutans. We analysed eight adult and eight infant faces for each species (80 images in total) using geometric morphometric analysis and machine learning. We found two principal components characterizing infant faces consistently observed across species. These included (1) relatively bigger eyes located lower in the face, (2) a rounder and vertically shorter face shape, and (3) an inverted triangular face shape. While these features are shared, human infant faces are unique in that the second characteristic (round face shape) is more pronounced, whereas the third (inverted triangular face shape) is less pronounced than in other species. We also found some infantile features present only in some species. We discuss future directions for investigating the baby schema using an evolutionary approach.
All mammals, and humans in particular, have a long period of vulnerability in early development during which extensive care from adults is critical for survival. The mechanisms underlying how adults interact with infants are therefore important to understand. In humans, infant faces convey rich information including health, age, and sex 1 , and are highly effective at capturing visual attention 2,3 . They are also generally perceived as highly attractive, induce positive emotions and parenting motivation [4][5][6][7] , and activate the medial orbitofrontal cortex of the perceiver, which is implicated in reward behaviour 4,5 . Therefore, they play a key role in human parental investment 1,6 . Infant faces, however, are not simply miniature adult faces. Almost 80 years ago, the prominent ethologist Konrad Lorenz proposed the "baby schema 8 ", a set of physical features of infant faces, including a prominent forehead, bigger eyes located lower in the face, protruding cheeks, and bodily features such as a relatively large head. The famous illustrations of the baby schema by Lorenz depict not only humans, but also hares, dogs, and birds. Since then, this baby schema has been assumed to be shared across species and induce caretaking behaviour from conspecific adults 8,9 . A number of empirical studies in humans have supported this idea. For example, exaggeration of the baby schema increases the attractiveness of faces and parental motivation [10][11][12][13] . Neurological studies have also found that in nulliparous women the baby schema activates the nucleus accumbens, which mediates reward processing 14 . This evidence suggests that, at least in humans, the baby schema is a salient positive stimulus with a robust perceptual impact.
There is some evidence that an infantile appearance in non-human animals may be similarly related to caretaking behaviour 9,15-18, but this has been tested exclusively in human perceivers. There are also some studies manipulating the extent of babyness in animal faces and testing the effect of the manipulation on cuteness evaluation by humans 16,19, but the manipulations were based on developmental changes in human faces and were not species-specific. Thus, the faces were made to look more babylike from a human perspective, which may or may not reflect facial immaturity in each species. The first step is to identify whether a common baby schema exists across species in terms of appearance. Craniofacial development has been well studied among primate species including humans. For example, chimpanzees develop elongated braincases while humans develop globular ones, and do so much faster and for longer. Chimpanzee faces grow more projecting, while human faces grow mainly vertically 20. Nevertheless, if one is interested in faces, which are seen by others and potentially function as social cues or signals (as the baby schema is supposed to), studying faces with muscle and soft tissue is necessary. As far as we know, there is no systematic study quantifying and comparing shared infant face characteristics among species. This is surprising, as it has been claimed that the baby schema "seems to be a universal stimulus 21". It is possible that the existence and function of the baby schema (i.e. inducing parenting) evolved only in humans, as human infants are costly to raise and are born at a very early stage of development. Alternatively, the baby schema could be shared and function similarly across evolutionarily related species. Therefore, it is important to first study whether the baby schema exists in our closest relatives, in order to understand the evolution of the mechanisms underlying human parenting.
The present study investigated which facial characteristics are shared across five species of great apes (including humans) using geometric morphometric analysis. Our first and main goal was to examine and update the classic description of the baby schema. We targeted great apes in order to compare the similar facial morphology of these phylogenetically close species. A second goal of the study was to examine any variation in the characteristics of infant faces across the different species. A previous study revealed that people perceive infants of non-mammals requiring parental care (i.e. semiprecocial species) as cuter than those exhibiting no parental care (i.e. superprecocial species) 9. Thus, it is possible that species may also show different kinds and/or degrees of infantile face features according to socio-ecological factors such as the extent of alloparenting. We therefore aimed to develop a method to capture any variation in infantile facial characteristics across great ape species for future analysis by using geometric morphometrics. Geometric morphometrics is a data-driven approach. It does not require any predetermined assumptions (about adult/infant differences in our case) and instead measures the relative shape differences between the two categories. In previous face studies, geometric morphometrics has been used to quantify skeletal craniofacial morphology e.g. 22, but it has also been used for living faces with full soft tissue (humans e.g. 23 and non-human animals e.g. 24).
Methods
Data analysis. We analysed frontal facial images of five species of great ape: humans (Homo sapiens), chimpanzees (Pan troglodytes), bonobos (Pan paniscus), mountain gorillas (Gorilla beringei beringei), and Bornean orangutans (Pongo pygmaeus). Although gorillas and orangutans comprise two and three species, respectively, and express morphological differences 25,26, we analysed only one species of each due to the availability of pictures. The chimpanzees were western chimpanzees (Pan troglodytes verus), with one infant who was a hybrid between a western chimpanzee and a central chimpanzee (Pan troglodytes troglodytes). The images were taken either in the wild or in captivity by the authors, other researchers, or photographers (see Table A.1 for more details). The images of humans were from open-access databases 27,28, and the reported ethnicity of all of them is white. Sixteen frontal images for each species (eight adults and eight infants), totalling 80 images, were analysed. Although a larger sample size would be ideal, eight images were chosen as the maximum to ensure parity across species, as taking fully frontal photos of infant faces is extremely difficult due to their lack of independence from the mother. In a previous study 29, which analysed shape differences between adult and infant chimpanzee faces in a similar way, the number of images was the same. Four out of eight images were male for each species and age category, with one infant gorilla whose sex was unknown. The average age of adults was 19.6 years (bonobos (mean ± SD): 19.9 ± 4.7, chimpanzees: 18.4 ± 2.7, humans: 20.4 ± 1.8, gorillas: 18.6 ± 5.2, orangutans: 20.8 ± 3.15), while that of infants was 6.6 months (bonobos: 6.9 ± 2.5, chimpanzees: 6.8 ± 2.6, humans: 6.6 ± 2.3, gorillas: 6.6 ± 2.3, orangutans: 6.1 ± 2.7). Although there are slight differences in developmental speed among species (e.g. age of weaning or sexual maturation), infants of this age are at least no longer neonates but are much younger than weaning age, and adults of the targeted age are all sexually mature in each species. The criteria for choosing the facial images were: (1) the face does not exhibit clear facial movement, (2) the mouth is closed, (3) both eyes are open, and (4) the chin line is visible.
Ninety-seven landmarks were manually placed on each face by one of the authors (Y. K.) with the tpsDig2 software (version 2.31) 30. Since we did not aim to analyse colour (and all human infant images were black and white), we changed the brightness and contrast of the images to achieve maximum visibility. Landmarks delineated the supraorbital torus, eye outlines, pupils, nose edges, mouth, and chin based on human face morphological studies 23,31, with some modification to apply to non-human primate faces (i.e. landmarks on the supraorbital torus and oral commissure instead of eyebrows and lips) (Fig. 1). Out of the 97 landmarks, 76 were designated as semilandmarks. Semilandmarks denote curves and outlines, while the other landmarks are represented as points that are geometrically homologous (e.g. the mouth corner) among specimens. We did not include face outlines besides the chin because it is difficult to consistently identify the face outline due to facial hair in non-human primates. The landmarks and the slider file used for the definition of semilandmarks are available in the Supplemental Materials. In order to confirm the reliability of the landmark annotation, the same person annotated 10 images (12.5%) randomly chosen from the dataset, and the intra-annotator reliability for all 97 landmarks (x-y coordinates) was calculated. The intraclass correlation coefficient was excellent (x-coordinates: mean 0.994 (0.975-0.998), y-coordinates: mean 0.997 (0.986-0.999)).
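The superimposition itself was performed with the tps software suite (see below); for readers who prefer scripting, a generalized Procrustes alignment can be sketched with plain numpy as follows. This is a minimal illustration, not the authors' code; the array shapes, the convergence tolerance, and the decision to allow reflections are our assumptions.

```python
import numpy as np

def align_to(ref, shape):
    """Procrustes-align one (k, 2) landmark configuration to a reference:
    remove translation, normalize scale, then rotate via SVD."""
    ref_c = ref - ref.mean(axis=0)
    shp_c = shape - shape.mean(axis=0)
    ref_c = ref_c / np.linalg.norm(ref_c)
    shp_c = shp_c / np.linalg.norm(shp_c)
    u, _, vt = np.linalg.svd(shp_c.T @ ref_c)
    return shp_c @ (u @ vt)  # optimal (possibly reflecting) rotation

def gpa(shapes, n_iter=50, tol=1e-10):
    """Generalized Procrustes analysis of an (n, k, 2) stack of shapes:
    iteratively align every shape to the current consensus."""
    aligned = np.stack([align_to(shapes[0], s) for s in shapes])
    for _ in range(n_iter):
        mean = aligned.mean(axis=0)
        mean = mean / np.linalg.norm(mean)  # fix the consensus scale
        new = np.stack([align_to(mean, s) for s in aligned])
        if np.linalg.norm(new - aligned) < tol:
            return new, mean
        aligned = new
    return aligned, mean
```

A principal component analysis of the flattened, aligned coordinates then broadly corresponds to the relative warp analysis described below.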
Average adult and infant faces, across all species and specific to each species, were generated with tpsSuper (version 2.06) 30 for the purpose of visualizing shared morphological traits. All the landmarks of the images were superimposed by a generalized Procrustes analysis and analysed by a relative warp analysis with the tpsRelw software (version 1.75) 32. By using the Procrustes analysis, all the images were aligned with regard to orientation and size to achieve maximum fit. Then, a principal component analysis (PCA) (i.e. a relative warp analysis) was performed, which detects the main components contributing to the variation of the landmark configuration among images. The relative warp analysis yielded 79 principal components, and among them the first 11 PCs accounted for more than 95% of the morphological variation of all 80 images (Table A.2). In order to examine whether these characteristics are reliable features for differentiating the two age categories from faces, we tested whether the age category (infant vs. adult) is correctly classified based on PC scores using classification algorithms. Our approach is analogous to a previous study that used the classification performance of machine learning to evaluate whether there is potential information in face traits to determine certain attributes (e.g. sex or age) among twelve species of guenons (Cercopithecini) 33. There are several methods used to make classifications in morphology studies 34. Thus, we fitted four different classification models with the scores of all PCs 1-11 and chose the best one based on classification performance 35. The four models are: (1) linear discriminant analysis (LDA, or discriminant function analysis), (2) linear support vector machine (SVM) without hyperparameter tuning, (3) linear SVM with hyperparameter tuning, and (4) non-linear SVM (with a Radial Basis Function kernel) with hyperparameter tuning. LDA and SVM are both algorithms that build a decision boundary between two classes (adults and infants in our case) in the dimensional space of the data (the scores of the target PCs in our case). The decision boundary is determined by the distribution of the data in LDA, while it is determined by the points that are closest to the other class in SVM. For the SVM models with hyperparameter tuning, the hyperparameters (the optimal cost and gamma) were determined by a grid search from 2⁻¹⁰ to 2¹⁰ with four-fold cross-validation. As indices of classifier performance, we used the accuracy and the area under the curve (AUC). The accuracy is the percentage of correct predictions, and the area under the receiver operating characteristic curve (hereafter just AUC) is a parameter that takes into account both the false-positive rate and the true-positive rate. The AUC can take values from 0.5 (i.e. a random classifier) to 1 (i.e. a perfect classifier). All the modelling was conducted with the open-source Scikit-Learn packages for Python (version 3.8.10).
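Concretely, the tuned non-linear SVM (model 4) with the grid search described above can be reproduced with a few lines of scikit-learn. This is a minimal sketch on placeholder data, not the authors' script; the random PC-score matrix merely stands in for the real scores of PC 1-11.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder for the real data: scores of PC 1-11 for the 80 faces (rows)
# and the age category (0 = adult, 1 = infant).
X = rng.normal(size=(80, 11))
y = np.repeat([0, 1], 40)

# Grid search over cost and gamma from 2^-10 to 2^10 with 4-fold CV,
# as described in the text.
grid = {"C": 2.0 ** np.arange(-10, 11), "gamma": 2.0 ** np.arange(-10, 11)}
svm = GridSearchCV(SVC(kernel="rbf"), grid, cv=4, scoring="roc_auc")
svm.fit(X, y)
print("best params:", svm.best_params_)

# Accuracy and AUC of the tuned model, estimated by cross-validation.
acc = cross_val_score(svm.best_estimator_, X, y, cv=4, scoring="accuracy")
auc = cross_val_score(svm.best_estimator_, X, y, cv=4, scoring="roc_auc")
print(f"accuracy = {acc.mean():.2f}, AUC = {auc.mean():.2f}")
```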
Then, we remodelled the age category classification with the best model based on each of PC 1-11, for all five species pooled together and for each species separately, to see which PCs are key features differentiating adult and infant faces. In order to compare the model performance with random classification, we conducted 1000 permutation tests. Based on the p-values of the permutation tests, we decided whether each PC is a reliable cue for differentiating the age categories. Next, we conducted the age classification for each species separately based on each of PC 1-11 in order to check for any species differences.

Results

First, we found that PC 1 is a reliable age-class predictor and that scores are higher in infants than in adults across all species except chimpanzees (p = 0.07, others p < 0.01; Figure 3 and Table 1, see also Fig. A.1 and Table A.3 for details). By visually inspecting the exaggerated facial shape that each PC represents, PC 1 seemingly represents facial roundness and relative eye size; a high score indicates that the face has a round shape, short on the vertical axis, with relatively bigger eyes. PC 2 is also a reliable predictor, and the scores are higher in infants than in adults across all species except humans (p = 0.11, others p < 0.01). PC 2 represents the holistic configuration of the face, and a high score indicates a top-heavy (i.e. inverted triangle) face shape with relatively bigger eyes. PC 3 is a reliable predictor only in bonobos, orangutans, and humans (all p < 0.05). However, the direction is not consistent: the scores of PC 3 are higher for infants in bonobos and orangutans, but higher for adults in humans. PC 3 seemingly represents the distance between the eyes and between the eyes and nose; a high score indicates a centripetal face (i.e. the distance between those features is small). We also found that PC 7 is a reliable predictor in gorillas and humans (p < 0.05). The direction again differs: in gorillas, adults score higher, while in humans, infants score higher. Based on visual inspection, PC 7 likely represents chin shape; high scores indicate a horizontally wider and vertically shorter chin. Moreover, PC 10 is a reliable predictor only in orangutans; infants score higher than adults (p < 0.05). However, this component seemingly represents just an artifact, namely lateral asymmetry. Lastly, PC 11 is a reliable predictor only in two species; the scores are significantly higher for infants than adults in chimpanzees and gorillas (p < 0.05). PC 11 likely represents the shape of the supraorbital torus; a high score indicates a curved instead of a straight one. The other PCs 4, 5, 6, 8, and 9 did not contribute to the age classification in any species. All the visualizations and the scores of PC 1-11 are shown in Fig. A.2 and Figs. A.3-A.8. PC 1 and PC 2 can be described as shared infant face features because they are reliable predictors of the age category in most of the species, and the tendency between adults and infants is consistent among all species. That is, infants in general have a vertically short (PC 1) and top-heavy (PC 2) face shape with relatively bigger eyes (PC 1 and 2). Nevertheless, there are also species differences in the robustness of the shared infant features described by PC 1 and PC 2, although we did not compare the PC scores themselves across species with statistical analysis. For example, the scores of PC 1 in humans (both adults and infants) stood out among the five species (Fig. 3), indicating that human faces are generally rounder compared with the other species. For PC 2, human infants have lower scores compared with infants of other species (Fig. 3), which means human infant faces have a more bottom-heavy (i.e. triangular) configuration compared with infants of other species.
Discussion
As far as we know, this is the first attempt to examine and update the classic work on the baby schema by Lorenz 8 using a data-driven approach. The facial shape analysis showed that there are shared infantile face features across species. Based on our results, infant faces are defined by three face characteristics in all five species: (1) relatively bigger eyes located lower in the face (PC 1 and 2), (2) a rounder and shorter (on the vertical axis) face shape (PC 1), and (3) an inverted triangular face shape (PC 2). These characteristics are consistent and robust among the five species. When we compare these characteristics with those originally mentioned as the baby schema by Lorenz 8, the first characteristic, about the eyes, was clearly mentioned as one of the features of the baby schema. Not a particularly rounder face shape (PC 1), but rounder body shapes in general, was also listed as a characteristic of the baby schema. The third characteristic, an inverted triangular face shape, was not itself listed by Lorenz, but it may, at least partly, correspond to a protruding forehead, one of the characteristics of the baby schema. Besides these features, the assumption that infants have relatively smaller noses and mouths was not clearly listed in the original baby schema by Lorenz 8, but is often mentioned as part of the baby schema in later literature e.g. 19. However, we did not find clear evidence supporting this, at least in the present study.

These infantile face features seen in all species might reflect physical constraints such as the differential timing of development of each face part. For example, the development of eye growth ceases much earlier than that of other parts of the face, resulting in relatively larger eye sizes in young faces 20. The protruding forehead is seen in infants because it accommodates the relatively large size of the brain, and faces experience vertical growth later 20. The infants' short and top-heavy faces in our findings could probably be explained by these processes. Although PC 1 and PC 2 are both related to bigger eyes, each PC is orthogonal to the other, meaning PC 1 and PC 2 reflect independent characteristics. Bigger eyes are not expressed alone but are accompanied by other features (a rounder or top-heavy face shape). Our findings may explain why the bigger eyes of infants alone contribute poorly to cuteness perception in humans 36.
Although these face features are shared by infants among species, there seem to be species differences in the extent to which they manifest. Human infant faces are especially unique compared to similar-aged infants of other species, which is consistent with the findings from a previous cranial study 20,37. First, human infants look immature with respect to one component (PC 1), facial roundness. This may be related to Lorenz's argument that, compared with human infants, the baby schema is less embodied in young non-human primates, who have "long legs, long snout, and sunken cheeks and they appear cute to very few people" 8. Roundness in human faces, even in adults, may reflect neoteny, whereby humans retain immature features including feminized or juvenilized morphology 38. Conversely, human infant faces, compared with the infant faces of other species, score lower in PC 2, which means that they tend to be bottom-heavy rather than top-heavy. One could say that in this regard human infant faces look more mature than those of other species. Nevertheless, it is also possible that the bottom-heavy characteristic of human infants reflects chubby cheeks, which are probably uniquely present in humans, as Lorenz pointed out 8, due to greater adipose tissue in the face 39.
While it is obvious that PC 1 and 2 are related to developmental face change in great apes in general, three other PCs indicate species-specific infant facial features, because the relationship between infants and adults varied across species. First, regarding PC 3, infant faces score higher than adult faces in bonobos and orangutans (i.e. infant faces are more centripetal than adult ones). On the other hand, the scores of PC 3 are lower in human infants (i.e. infant faces are more centrifugal than adult ones). The results are consistent with a previous human study, which found that human faces with wide eyes are perceived as young 40. However, this is specific to humans (at least among great apes). Second, human infants have horizontally wider and vertically shorter chins than adults, while gorilla infants have narrower and longer chins (PC 7). Human infants have a wide posterior dental arcade compared with infants of Pan species 41. Moreover, the chin develops prominently in humans 20 and can be considered a uniquely human characteristic. These factors may be related to the results of PC 7. Lastly, only chimpanzee and gorilla infants have a curved supraorbital torus (PC 11). The characteristics defined by those PCs are infantile characteristics only in certain species. Thus, we should be cautious about assuming that infant faces have the same characteristics across species.
An important next question is: what is the function of infantile face features in non-human primates, if any? In humans, a large body of literature supports the hypothesis that infantile face features induce cuteness perception and parenting motivation 4,5,7,11,42. Such features are, therefore, likely to contribute to infant survival. These features may function similarly in non-human primates 8,9. Moreover, paedomorphic appearance in infants and caregiving behaviour may have coevolved in primates or other orders 9. It is beyond the scope of the current study to address this question, but future studies should. One way to test this may be to use an index that evaluates the facial immatureness of the infants of a species and to examine the relationship between facial immatureness and other socio-ecological factors. Although the present study does not make fully clear how much difference exists in the extent of facial immatureness among infants of the non-human primate species, further investigation may reveal more. One prediction is that, if facial immatureness encourages conspecifics to take care of infants, as has been suggested, the degree of (allo)parental care will be positively correlated with the facial immatureness of the species. A related prediction is that, if the baby schema encourages adults' protective behaviour toward infants at risk, as has been suggested for other infantile features 43, infanticidal risk and facial immatureness will also be positively correlated.
It should be noted that specific face morphology is not the only visual characteristic of infants. For example, some primate species have conspicuous coloration during infancy 43-46. In orangutans, for example, the skin colour around the eyes and mouth is bright during infancy, while adults have darker skin 47. Similarly, in chimpanzees, infants have a paler face skin colour compared with adults, and this infantile face colour is perceived by chimpanzees as a more salient cue than infant face morphology 29. The potential functions of such infantile coloration in general (e.g. encouraging alloparenting 44,45) have been suggested but are still under debate. At least for chimpanzees and orangutans among the species we analysed, other facial cues signalling "babyness" seemingly exist, so how much facial morphology alone plays a role is unclear. Future work could test whether species with infantile coloration show more (or less) morphological immatureness in their faces, to see whether these features are functionally related. It is also informative to ascertain if and how individual differences in infant faces are related to other factors (e.g. health or the amount of care they receive from other individuals), although a larger sample size is necessary to test this. It is possible that infant facial cues are "relatively honest signs of fitness and health of infants 38". Indeed, humans, especially females, are very sensitive to subtle differences in infant faces 48,49, and the perceived cuteness of infant faces is correlated with perceived health 50,51 and the quality of maternal care toward the infant 52.

Table 1. Significant PCs and the directions (***p < 0.005, **p < 0.01, *p < 0.05, † p < 0.10).
There are several limitations to this study. First, we analysed only specific morphological features of 2D faces. There is also a trade-off between including various landmarks for morphologically different species in the analysis and setting corresponding landmarks consistently among them. Thus, it should be noted that our analysis does not cover all features of facial morphology. Second, we did not control for the living environment (wild versus captive) of the individuals we analysed. It is possible that environmental factors, including food availability, affect face morphology during development. Third, the number of samples we analysed was small due to the difficulty of getting full-frontal face photos of non-human primate infants. Thus, we cannot fully rule out random variation in the face photographs caused by artifacts such as the lateral asymmetry captured by PC 10. Nevertheless, at least the shared infant face features we report here, namely PC 1 and 2, are robust and do not seem to be artefactual. Lastly, this study focused on only five species of great apes, although the concept of a baby schema has been applied to various species beyond primates. Our method using geometric morphometrics should be applicable to other species, especially primates, so future studies should include more species in order to ascertain how broadly infantile face features are shared. Regardless of these limitations, our study provides new insight regarding the evolution of paedomorphic appearance in infants. This is the first quantitative evidence that there is a "baby schema" shared across our closest relative species. In conclusion, some face features are indeed shared among great ape species, but there are also significant species differences. The current study should be a good starting point to reveal how infantile visual features have played a role in social interaction over the course of mammalian evolution.
Data availability
Some of the analysed images owned by one of the authors, all the landmark information, and the slider file are available on Mendeley Data (http://dx.doi.org/10.17632/8hs593cyc2.1). The data associated with this research are available as supplementary materials.
Sinusoidal Wave Estimation Using Photogrammetry and Short Video Sequences
The objective of this work is to model the sinusoidal shape of regular water waves generated in a laboratory flume. The waves travel in time and render a smooth surface, with no white caps or foam. Two methods are proposed, treating the water as a diffuse and a specular surface, respectively. In either case, the water is presumed to take the shape of a traveling sine wave, reducing the task of the 3D reconstruction to resolving the wave parameters. The first conceived method performs the modeling part purely in 3D space. Having triangulated the points in a separate phase via bundle adjustment, a sine wave is fitted to the data in a least squares manner. The second method presents a more complete approach for the entire calculation workflow, beginning in the image space. The water is perceived as a specular surface, and the traveling specularities are the only observations visible to the cameras, observations that are notably single-image. The depth ambiguity is removed given additional constraints encoded within the law of reflection and the modeled parametric surface. The observation and constraint equations compose a single system of equations that is solved with the method of least squares adjustment. The devised approaches are validated against data coming from a capacitive level sensor and from physical targets floating on the surface. The outcomes agree to a high degree.
Introduction
Attempts to characterize the water surface with optical methods date back to the beginning of the 20th century [1,2]. The interest in a quantitative description of the surface with light came from the field of oceanography and the use of photography to map the coastlines. This prompted further applications, namely the use of photography to quantify ocean waves and to exploit these parameters in, e.g., shipbuilding, to engineer structures of the appropriate strength [3][4][5].
The same drivers disseminated optical methods among other applications, in river engineering and the oceanographic domain. Understanding river flow allows for a better riverbed management and mitigation of floods through combined fluid dynamics modeling and experimental testing. Additionally, the knowledge of the dispersive processes gives an insight into the way pollution and sediments are transported [6][7][8][9].
In the coastal zones, optical methods became a good alternative to in situ measurements, which require substantial logistical commitments and offer low spatial, as well as temporal resolution. The dynamics of the water, hence the energy it carries, influences the nearshore morphology, which is of significance for both coastal communities and marine infrastructure, e.g., wharfs and mooring systems [10][11][12][13].
To render the water surface diffuse, artificial targeting is employed. The most common targeting techniques use physical material, like powder, Styropor, or oil, or optical projections in the form of a laser sheet, a grid of points, or sinusoidal patterns [9,10,21-23]. Specular reflections are inevitable and are often the source of errors in the estimated depths. To avoid corrupted measurements, specular highlights can be: (1) removed in the image preprocessing step; (2) eliminated in multi-view setups during the processing (the appearance of glints in images is view dependent; with the third or n-th view, every identified feature can be verified, and glints can be eliminated; the method is apt for scenes with single or few glints) [24]; (3) filtered with the help of either polarized or chromatic filters (the filters are mounted in front of the camera lens; hence, there is no restriction on the number of glints present) [25]; and (4) in industrial photogrammetry of rigid objects, attenuated through the use of special targets, e.g., doped with fluorescing dye, that respond to a wavelength other than the wavelength of the specular reflections [26].
Alternatively, the water being itself a source of infrared radiation can be observed with thermal cameras. Because the heat distribution is heterogeneous across the surface, it provides a good base for the correspondence search. A surface treated in this way can be measured with classical stereo-or multi-view photogrammetric approaches. If qualitative results are expected, it is sufficient to acquire single images and to proceed with data evaluation in the image space only [27].
Water as a Specular Surface
Sometimes, it is advantageous to exploit the inherent optical characteristics of water, i.e., the total reflection and refraction, for measurement purposes. Contaminating the liquid with physical material is cumbersome, because it becomes (1) unfeasible for large areas and field surveys, (2) difficult to keep a homogeneous point distribution, and (3) it may influence the response of the water by interacting with it. In such situations, and depending on the working environment, whether in a lab or out in the field, it is possible to derive the surface shape by mere observation of a reflection of a source light, or of a pattern whose distortion corresponds to the surface slope and height. Reference [28] pioneered the characterization of water surface slopes with a reflection-based method. Their motive was to analyze slope distributions under different wind speeds through observing the Sun's glitter on the sea surface from an aerial platform. Variations of the Cox and Munk method include: replacing the natural illumination with one or more artificial light sources, also known as the reflective (stereo) slope gauge (RSSG) [3,29], and using the entire clear or overcast sky to derive surface slope information for every point in the image, also known as Stilwell photography [30].
A combination of stereo- and reflection-based techniques (RSSG) has proven to be a sound way to characterize not only the surface slopes, but also the water heights. In a typical stereo setting, one is faced with a bias in the corresponding features seen by the left and the right cameras, the reason being that the cameras record a response from the spots on the water whose normals are in the line of sight of the given cameras. Naturally, the steeper the observed wave, the smaller the systematic error. Using the Helmholtz reciprocity principle, i.e., upgrading the method to employ light sources placed right next to the cameras, eliminates the correspondence ambiguity. The identified features in the image spaces are then bound to correspond to a unique feature in the object space [3,4].
In the field of computer vision, two principal classes of algorithms are shape from distortion and shape from specularity, which, e.g., inspect single or multiple highlights with a static or a moving observer and emitter [31][32][33], observe known or unknown intensity patterns reflected from mirror-like surfaces [34][35][36][37][38][39], directly measure the incident rays [40], exploit light polarization [41,42] and make assumptions on the surface's bidirectional reflectance distribution [43][44][45]. Further interesting approaches that fall outside the scope of the adopted categories exploit other (than the visible) parts of the electromagnetic spectrum, such as infra-red [46], thermal [47] or UV. The principle resembles that of [48,49], i.e., one searches for a wave spectrum, in which the information/signal received from the problematic surfaces, either doped or hit by the energy portion, is maximized, while minimizing the close-by, disturbing signals. For a good overview of the techniques, refer to [50,51].
Water as a Refractive Medium
Refraction-based techniques are more complex due to the fact that the light path is dependent not only on the surface normal, but also the medium's refraction index. The development of shape from refraction goes hand in hand with the shape from reflection methods and, therefore, has an equally long history. The work in [52], again, first experimented with light refraction to derive surface slopes from an intensity gradient emerging from beneath the water surface. Today, a successor of this technique, called imaging slope gauge, alternatively shape from refractive irradiance, is considered a highly reliable instrument for measuring wind-induced waves in laboratory environments. In contrast to the reflection-based techniques, refraction of a pattern through the water maintains a quasi-linear relationship between the slope and the image irradiance [14,[53][54][55].
White light can also be replaced with active laser lighting. The laser offers a greater temporal resolution at the expense of a lower spatial resolution. The first examples of laser imaging gauges employed a single beam, thereby delivering information about a single slope [56,57]. Over time, they evolved to scanning systems that could capture spatial information over an area [58]. A laser slope gauge is used in the laboratory and field environment, however being most apt for the characterization of longer waves, as the literature reports.
In the field of computer vision, dynamic transparent surfaces have been analyzed with single- and multi-image approaches. The work in [59] first introduced a single-view shape-from-motion method for a refractive body. By assuming that the water's average slope is equal to zero, i.e., it regularly oscillates around a flat plane, the author proposed a method for reconstructing the shape of an undulating water surface by inverse ray tracing. The work in [60] developed a method that recovers complete information on the 3D position and orientation of dynamic, transparent surfaces using stereo observations. The authors formulate an optimization procedure that minimizes a so-called refractive disparity (RD). In short, for every pixel in an image that refracts from an underwater pattern, the algorithm finds a 3D point on the water surface that minimizes the conceived RD measure. This measure expresses Snell's law of refraction by enforcing that the normal vector of the computed 3D position must be equal when computed from the left and the right images of the stereo pair. The two flagship examples of computer vision approaches are limited to use in laboratory conditions due to (1) the need to locate a reference pattern under the water and (2) the demand for clear water for reliable pattern-to-image correspondence retrieval. For more information, refer to [50,51].
Preliminaries
The traveling 3D sine wave is deemed (note that the shift term H_0 in Equation (2) is omitted; this is possible if the translation from GCS → LCS already compensates for that shift):

z(y, t) = A · sin(2π · (y/λ − t/T) + φ), t ∈ [0, N],

where φ is the phase of the wave-front and N is the duration of the measurement in seconds. The traveling sine wave is a special surface and, as such, is advantageously modeled in a local coordinate system (LCS) that is parallel to the wave propagation direction and shifted to the mean level of oscillations (cf. Figures 1 and 6). In order to link the LCS with the global coordinate system (GCS) in which the cameras are defined, a 3D spatial similarity transformation (the scale factor is unity; thus, the transformation reduces to a 3D rigid transformation) is formulated. The parameters of this transformation (i.e., the three components of the translation vector and the three rotation angles) could likewise be included within the adjustment. The devised methods fit the above-defined model in a least squares approach that tries to minimize the residuals between the nominal observations and the observations predicted by the model. To solve the non-linear least squares problem, one must know the starting values of all parameters involved. Presented below is the strategy for retrieving the wave amplitude A, the wavelength λ, the period T, the phase shift φ, and the rigid transformation from the wave LCS to the camera GCS. Experience has shown that, unless the amplitude is infinitesimally small and the wave infinitely long, even very rough estimates of the unknown wave parameters assure convergence of the system of equations.
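For concreteness, the reconstructed wave model can be written as a small Python function, which later sketches in this section reuse; the sign of the traveling term (a wave moving towards positive y) and the parameter names are our assumptions.

```python
import numpy as np

def wave_z(y, t, A, lam, T, phi):
    """Height of the traveling sine wave in the local coordinate system (LCS).

    y   : coordinate along the wave propagation direction
    t   : time in seconds
    A   : amplitude, lam : wavelength, T : period, phi : phase shift
    """
    return A * np.sin(2.0 * np.pi * (y / lam - t / T) + phi)
```

Note that wave_z(0.0, 0.0, A, lam, T, phi) = A · sin(phi), which is the relation used below to initialize the phase shift.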
Throughout the text, the notion of real and virtual points appears. The points are real if their projections in images come directly from the points themselves (e.g., physical targets), or virtual (i.e., specular highlights) if the camera sees merely the reflection of a real point. Virtual points are always single-image observations, as their positions in space depend on the shape of the surface from which they are reflected and on the view angle of the camera (vide the law of reflection).
Figure 1. A simplistic view of the model basin. The platform is placed at a distance from the wave generator (on the left) and parallel to the wave propagation direction.
Derivation of Approximate Wave Parameters
The wave amplitude can be: (i) estimated visually, on-site, while the measurement takes place, (ii) recovered from the wave probes, as these are commonplace in any ship model basin, or (iii) derived from image-based measurements, provided there are observed real points on the water surface (adopted by the authors). When the image-based approach is undertaken, the triangulation step must follow to obtain the 3D coordinates of the real points (cf. Section 3.1.2). The amplitude can be recovered with the complex amplitude demodulation method (AMD) or merely by removing the trend from a point's response and taking the halved maximum bound as the starting A value. The collateral benefit of the AMD is that, were there a varying-amplitude signal, a slope instead of a horizontal line would be observed. Accordingly, A would be replaced with a (linear) time-varying function, the parameters of which may be engaged in the total adjustment as well.
Similarly, the value of the period T can be approximated either from on-site visual impressions, using the image data in post-processing (adopted by the authors), or taken directly from the wave probe data. The dominant period is then restored with the help of the spectral analyses, i.e., the periodogram power spectral density estimate.
The wavelength might be inspected visually or with the help of the method presented in Section 3.1 (adopted by the authors). The wave probe devices, being single-point-based measurements, do not deliver enough data to indicate the length of the traveling wave.
The remaining wave parameter is the phase shift. Its computation requires a real point floating on the water surface; otherwise, the wave probe data can become useful if its position is known in the reference coordinate system (CS) (adopted by the authors). If one moves the CS to that point, the term y_j/λ in Equation (2) cancels out. If one further takes the starting frame t_1 = 0, the term t_1/T is gone as well, and the phase shift can be computed from φ = arcsin(z_j/A), where j denotes the point at the origin of the translated CS. A slightly more elegant way to solve the equation for the initial phase shift is to use, again, the demodulation technique.
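The amplitude, period, and phase-shift initialization described above can be sketched in a few lines of scipy; this is a simplified stand-in (detrending, periodogram, arcsin) rather than the authors' demodulation code, and the sampling-rate argument is our assumption.

```python
import numpy as np
from scipy.signal import detrend, periodogram

def approx_wave_params(z, fs):
    """Rough starting values for amplitude, period, and phase shift
    from the vertical response z(t) of one point sampled at fs Hz."""
    z = detrend(z)                        # remove the mean level / drift
    A0 = 0.5 * (z.max() - z.min())        # halved peak-to-peak bound
    f, pxx = periodogram(z, fs=fs)        # power spectral density estimate
    T0 = 1.0 / f[np.argmax(pxx[1:]) + 1]  # dominant period (skip f = 0)
    phi0 = np.arcsin(np.clip(z[0] / A0, -1.0, 1.0))  # phase at t = 0
    return A0, T0, phi0
```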
As for the wave transformation parameters (ω, Φ, κ; T)_GCS→LCS, one usually tries to define the global system so that it is quasi-parallel to the wave propagation direction or, in the best case, aligns with it. If so, the rigid transformation parameters can be set to zero at the beginning of the adjustment and receive corrections that compensate for the inaccurate alignment. The image data and the scene context must be exploited to find the transformation relating the two coordinate systems, by identifying, e.g., a minimum of three common points or a 3D line and a point.
Optimization Technique
In the computational part, the Gauss-Markov least squares adjustment with conditions and constraints was adopted. The adjustment workflow proceeds in repetitive cycles of five steps, i.e., (i) the generation of current approximate values of all unknown parameters, (ii) the calculation of the reduced observation vector, (iii) the calculation of the partial derivatives of the functional model with respect to all current parameters, (iv) the construction of the normal equations and (v) the solution of the system of equations.
The partial derivatives that make up the Jacobian matrix are always evaluated at the current values of the parameters. If the system of equations includes both condition and constraint equations, it does not fulfil the positive-definiteness required by, e.g., the Cholesky decomposition. Indeed, zero entries are present on the diagonal of the equation matrix in Method 2. The vector of solutions is then retrieved with the Gauss elimination algorithm.
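The five-step cycle maps onto a generic Gauss-Newton update. The sketch below assumes user-supplied model and Jacobian functions and a diagonal weight matrix of the form σ₀²/σᵢ²; it uses a general solver in place of Cholesky, mirroring the positive-definiteness caveat, and none of the names come from the authors' implementation.

```python
import numpy as np

def gauss_markov(model, jacobian, l_obs, x0, W, n_iter=20, tol=1e-10):
    """Iterative least squares: minimize (l - f(x))^T W (l - f(x)).

    model(x)    -> predicted observations for parameters x
    jacobian(x) -> partial derivatives of model w.r.t. x, at current x
    l_obs       -> nominal observation vector
    W           -> weight matrix, e.g. diag(sigma_0**2 / sigma_i**2)
    """
    x = x0.copy()
    for _ in range(n_iter):
        dl = l_obs - model(x)            # reduced observation vector
        J = jacobian(x)                  # evaluated at current parameters
        N = J.T @ W @ J                  # normal equations
        dx = np.linalg.solve(N, J.T @ W @ dl)  # Gauss elimination step
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```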
Water as a Diffuse Surface
The following method sees the water as a diffuse surface. It is converted to such owing to artificial targeting. A set of retro-reflective targets floating on the water surface was tracked during the measurements (cf. Figure 2). The targets were produced in-house using diamond-grade reflective sheeting (3M™, Minneapolis, Minnesota, USA). Thanks to its composition of cube-corner reflectors, the sheet allowed for efficient light return at very wide entrance angles, thereby assuring good visibility in the obliquely looking camera views. The targets were interconnected with a string so as to avoid their collision and dispersion; their spacing equaled ca. 30 cm.
Reconstruction of a single point provides all but one wave parameter: the wavelength λ. Combining the responses of a minimum of two such points makes it possible to recover the remaining λ as well. It is presumed that the 3D data have already been transformed to the LCS and, to shorten the discussion, this is excluded from the mathematical model. A complete description of how to include this information in the model is given in Section 3.2.
Since the parameters are found in a least squares approach, the discussion commences with the initial parameter retrieval. In the next step, the adjustment mathematical model is outlined. Evaluation of the results continues in Section 5.
Mathematical Model
The functional model describes analytically the prior knowledge of the wave shape expressed in Equation (2). The modeling part, unlike in Method 2 in Section 3.2, is formulated purely in 3D space. The point triangulation is treated as a separate and unrelated phase, even though the image measurements indirectly contribute to the eventual outcome and a joint treatment might be suggested. As a matter of fact, the 3D space is reduced to a 2D space, building on the fact that the transformation from GCS to LCS has taken place in advance and the x-coordinate can take arbitrary values. All parameters are present in the adjustment as observations and unknowns; see the adjustment workflow in Figure 3. The stochastic model is formalized within the weight matrix and conveys the observation uncertainty. The matrix holds non-zero elements on its diagonal, which may take the form w_i = σ_0²/σ_i², where σ_i signifies the a priori standard deviation of an observation and σ_0 is the a priori standard deviation of unit weight. Within the experiments, σ_0 was set to unity, whereas σ_A = 2 mm, σ_T = 0.05 s, σ_λ = 100 mm, σ_φ = 0.25 rad. These values are rough and rather pessimistic estimates of the uncertainty of the approximate parameters.
The condition equations are formed by all parameters that are regarded as observed, i.e., y_i, z_i, A, T, λ, φ, and follow Equation (6). Every observed point i provides three condition equations: ŷ_i = y_i, ẑ_i = z_i, and Equation (2).
Image-Based Approximate Wave Retrieval
The starting point is to measure the targets in the images, for instance with centroiding methods, ellipse fitting, or cross-correlation [61-63]. The authors adopted the intensity centroiding method to detect points in the initial frame and sub-pixel cross-correlation to track them in time [64]. The 2D image measurements are then transferred to 3D space in a regular bundle adjustment and exploited to recover the initial wave parameters. The developed pipeline is fully automatic and summarized in the following order (steps 2 and 4 are sketched in code after Figure 4 below): 1. clustering of the 3D points; 2. coupling of neighboring clusters; 3. calculation of the mean A, T, and φ from the clusters; 4. calculation of the mean λ from the couples of clusters.
The reconstructed targets in time are considered an unorganized point cloud; thus, their clustering is carried out up front (cf. Figure 2). The retrieved clusters are equivalent to the responses of a single target floating on the water surface. The clustering per se is not required, as this piece of information is already carried within the point naming convention. Nonetheless, because this may not always be the case, the clustering is a default operation. It creates boundaries in 3D space based on some measure of proximity; in our case, the Euclidean measure was chosen. The algorithm was invented by [65] and implemented in [66]. The coupling then establishes a neighborhood relationship between the closest clusters (cf. Figure 2). Given a kd-tree representation of all clusters' centroids, the algorithm searches for the neighbors within a desired radius and ascertains that the selected pair has an offset along the wave propagation direction. This later permits the computation of the wavelength λ. The selected radius should not be too small, but cannot be greater than the wavelength, in order to be able to resolve its length.
The mean A, T, and φ are found within each single cluster, as pointed out in Section 2.1. To find the mean λ, the requirements are to know: (i) the period T, (ii) the direction of the wave propagation, and (iii) the relation between the GCS and the LCS, where the wave is defined. For every couple of clusters, one counts how much time ∆t a wave crest takes to travel between the clusters (cf. Figure 4). The distance ∆d (in the LCS) between them is known and so is the period T; hence, the wavelength estimate results from the trivial proportion λ = ∆d · T / ∆t.

Figure 4. Image-based approximate wave retrieval. For this cluster pair, the wave crest takes eight frames (∆t) to travel between the neighbors. If the distance between the clusters is ∆y = 10 cm and the period T = 12 frames, then the wavelength λ = 10 · 12 / 8 = 15 cm.
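Steps 2 and 4 of the pipeline can be sketched with scipy's kd-tree and a cross-correlation estimate of the travel time ∆t. The coupling radius, the assumption that the y-axis is the propagation direction, and a non-zero lag are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def couple_clusters(centroids, radius):
    """Pair each cluster centroid in an (n, 3) array with neighbors within
    `radius` that have an offset along the propagation direction (y-axis)."""
    tree = cKDTree(centroids)
    pairs = []
    for i, j in tree.query_pairs(r=radius):
        if abs(centroids[i, 1] - centroids[j, 1]) > 1e-6:
            pairs.append((i, j))
    return pairs

def wavelength(z_i, z_j, dy, T):
    """lambda = dy * T / dt, where dt (in frames) is the lag maximizing
    the cross-correlation between the two clusters' height responses.
    A zero lag would mean the pair carries no wavelength information."""
    z_i = z_i - z_i.mean()
    z_j = z_j - z_j.mean()
    xcorr = np.correlate(z_j, z_i, mode="full")
    dt = abs(np.argmax(xcorr) - (len(z_i) - 1))  # lag in frames
    return dy * T / dt
```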
Water as a Specular Surface
The second developed method exploits the fact that water under some conditions can be perceived as specularly reflecting, i.e., manifesting no scattering on its surface. Two parallel arrays of linear lamps hung above the measurement space (cf. Figure 8). Their reflections on the water surface could be observed as static in the steady-wave condition, and as dynamic when the water was excited. In the latter case, the reflections project to distorted shapes (cf. Figure 5). Such deformations implicitly carry information about the instantaneous shape of the water surface and are investigated in the successive paragraphs.
Because the specularities (also known as highlights) travel with the observer and depend on the surface geometry, no corresponding features in different cameras are present [33,51,67]. As a result, no stereo or multi-view measurements are made possible. Unless one is able to directly identify 3D rays that would intersect at the interface of a surface [40], the alternative solution to the depth-normal ambiguity is to add control information and/or impose appropriate constraints in 3D object space.
In the developed approach, the images of specular highlights and a number of parameter constraints are combined together to recover the water's instantaneous state. This method solves a least squares problem, simultaneously determining all parameters of interest. The discussion opens with the condition equations and imposed constraints, which constitute the functional model of the LS problem. Next, the adjustment procedure is explicitly given, including: (i) the derivation of approximate values for all unknowns, (ii) the stochastic model, (iii) the system equation forming, as well as (iv) the collection of control information. Lastly, the experimental section presents the results followed by a compact conclusive paragraph.
Mathematical Model
The functional model comprises the mathematical description of two observed phenomena, that is the perspective imaging associated with the camera system, as well as the shape of the induced waves, which in turn associates with a wave maker.
The camera to object points relation was modeled with the collinearity equations. The shape of the induced waves was modeled with Equation (2). The defined wave model is accompanied by three constraint equations. They impose that: (i) virtual points lie on the wave surface (f_surf) (their distance from the surface = 0), (ii) for all virtual points, the incident and reflected rays make equal angles with respect to the tangent/normal at that point (f_refl) (compliance with the law of reflection), and (iii) the vector from the camera to a virtual point, the normal at the point, and the vector from that point towards its 3D real position are coplanar (f_copl) (compliance with the law of reflection). The real points are always considered as ground control information; therefore, the developed method belongs to the class of calibrated-environment methods. See the adjustment workflow in Figure 7.
Apart from what has so far been discussed in Section 3.1.1, the stochastic model avoids having the solution driven by the observations that are most abundant. It limits the influence of a particular group of observations with the help of a second weighting matrix N_max. Here, every group of observations was assigned a value n_max that limits its participation in the adjustment to below n_max observations. The diminishing effect is realized by the expression in Equation (4) and found on the diagonal of the matrix, where n_obs is the cardinality of the observations within a group:

n_obs,i^max = (n_obs · n_max) / (n_obs + n_max).    (4)

The ultimate weight matrix W is the multiplication W = W · N_max. The bespoke weighting strategy is implemented within MicMac, an open-source bundle adjustment software [68,69]. Within the experiments, the σ_0 value was always set to unity, whereas σ_A = 1 mm, σ_λ = 10 mm, σ_T = 0.05 s, σ_φ = 0.25 rad, σ_xy = 0.5 pix, σ_XYZ^real = 5 mm, σ_XYZ^virtual = 10 mm, σ_T^GCS→LCS = 25 mm, and σ_ω,φ,κ^GCS→LCS = 0.01 rad. In analogy to Method 1, these values are rough estimates of the approximate parameters' uncertainty. If the parameter setting is unclear, a means of assessing the correctness of the a priori values must be employed, e.g., variance component analysis. Condition equations are functions of observations and parameters. The collinearity equations are self-evidently observations as a function of parameters: the 3D coordinates of the real or virtual point (the IOR and EOR in the developed implementation were treated as constants). Equation (5) renders the collinearity equations expanded into a Taylor series around the N initial estimates XYZ_i^0. Optionally, one may define originally free parameters as observed unknowns. This trick helps to include any available knowledge of the unknowns into the pipeline, as well as to avoid surplus parameter updates, i.e., to steer the rate of parameter change along the iterations. The parameters controlling the rate are the entries of the weight matrix W. Our implementation allows all parameters to be regarded as observed; therefore, any parameter in Figure 7 can be replaced with the param in Equation (6). For instance, if an X-coordinate is observed, the condition equation X̂ = X_obs + v_x and the correction equation X̂ = X_obs + 1 · dX are written down, where v_x and dX are the observation and the parameter corrections.
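One plausible reading of the capping in Equation (4) is that a group's total weight saturates near n_max, which the following sketch illustrates; the normalization by n_obs (so that the diagonal of N_max holds a per-observation factor) is our interpretation, not a statement from the authors.

```python
def group_weight(n_obs, n_max):
    """Per-observation down-weighting factor derived from Equation (4),
    normalized so that the group's total weight n_obs * factor <= n_max."""
    capped = n_obs * n_max / (n_obs + n_max)   # Equation (4)
    return capped / n_obs                      # diagonal entry of N_max

print(group_weight(10, 100))     # ~0.91: small group, barely reduced
print(group_weight(10000, 100))  # ~0.0099: huge group capped near n_max
```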
Constraint equations do not involve observations, but pure parameters. The conceived wave model renders three constraint equations: (i) f_surf, (ii) f_refl, and (iii) f_copl. The constraints are defined locally; thus, coordinate quantities are annexed with the symbol *. The values determined in the LCS are not considered in the adjustment, but are obtained after a 3D rigid transformation with the parameters (ω, Φ, κ; T)_GCS→LCS.
The linearized forms of the above equations, expanded into Taylor series, are presented in Appendix A. Note that the local coordinate quantities x*, y*, z* are functions of their positions X, Y, Z in the GCS, as well as of the parameters of the 3D rigid transformation. As a result, the derivatives are calculated for a composition of functions and must obey the chain rule.
Derivation of Control Information
The control information was not acquired physically prior to nor during the measurements, nor were posterior efforts undertaken to collect ground truth. The position of the linear lamps (cf. Figure 8), which served as the ground truth information, was recovered solely from the image data, under the condition that the reflecting water is globally a plane. As each measurement started in the calm-water condition, the planarity condition was valid at numerous times. The imaging situation is depicted in Figure 9. The calculation of the XYZ coordinates of the control information in their real locations divides into: (i) the water-plane derivation (from real points), (ii) the identification of homologous points across views and triangulation (virtual points), and lastly (iii) the flipping of the virtual points to their real positions.
The plane π of the water was recovered thanks to well-distributed dust particles present on its surface. Their appearance was sufficiently discriminative for identification across views. Alternatively, one could place artificial targets on top of the water to avoid potential identification problems. Given a few (≥3) pairs or triples of 2D image points corresponding to real features, their 3D positions are found by intersection. The sought plane, defined analytically as Ax + By + Cz + D = 0, is then estimated by singular value decomposition (SVD).
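The SVD-based plane fit can be sketched as follows; the function and its NumPy formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_plane_svd(points):
    """Fit a plane Ax + By + Cz + D = 0 to >= 3 intersected 3D points.

    The plane normal is the right-singular vector belonging to the
    smallest singular value of the centered point cloud.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    A, B, C = Vt[-1]                 # direction of least variance
    D = -Vt[-1] @ centroid           # plane passes through the centroid
    return A, B, C, D
```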
The end points of the linear lamp reflections were identified and measured manually in the images only in the initial frame. The subsequent tracking in time was realized with the flagship cross-correlation technique. Having found and measured the reflections, their 3D locations are triangulated (R in Figure 9), ignoring the fact that the observed features are not real. The 3D points emerge on the wrong side of the water plane; thus, they must be flipped to their real positions. The flipping is done with respect to an arbitrary plane, here the water plane determined by the coefficients A, B, C and D. The transformation performing that operation works by: (i) roto-translating the global coordinates to a local coordinate system that aligns with the flipping plane, with the rotation R₁ built from the unit plane normal N = [n_x n_y n_z] and λ = √(n_y² + n_z²), and the translation T built from the 3D coordinates [X_i Y_i Z_i] of any point lying within the flipping plane; (ii) performing the actual flipping over the local XY-plane with F = diag(1, 1, −1); and (iii) bringing the point back to the global coordinate system with R₁⁻¹ and T⁻¹. The entire procedure committed to a single formula renders:

X_real = T⁻¹ · R₁⁻¹ · F · R₁ · T · X_virtual
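For illustration, the chain T⁻¹ · R₁⁻¹ · F · R₁ · T is algebraically equivalent to the closed-form point reflection below; the helper is a hypothetical sketch, not the authors' code:

```python
import numpy as np

def flip_across_plane(P, A, B, C, D):
    """Mirror a 3D point across the plane Ax + By + Cz + D = 0.

    Equivalent in effect to roto-translating into a plane-aligned LCS,
    negating the local z, and transforming back: the point is shifted
    by twice its signed distance along the plane normal.
    """
    n = np.array([A, B, C], dtype=float)
    P = np.asarray(P, dtype=float)
    signed = (n @ P + D) / (n @ n)   # signed distance over |n|^2
    return P - 2.0 * signed * n
```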
Derivation of the Approximate Highlight Position
The highlights' coordinates (P in Figure 6) result from the intersection of the approximate wave model with the vectors anchored in the image measurements, passing through the camera perspective center and extending into object space. The highlights are single-image observations, so, unlike for the real points, no correspondences across images exist. The intersection points are found first by intersecting with the mean water plane and then by iteratively improving the results with Newton's method. The points are first found in the LCS and subsequently transferred to the GCS given the approximate parameters of the rigid transformation. The algorithm is presented below.
Given the 3D vector defined by points (x*₁, y*₁, z*₁) and (x*₂, y*₂, z*₂) at the camera center and observed in image space, respectively, the 3D line parametric equation takes the form:

(x*, y*, z*) = (x*₁, y*₁, z*₁) + t · (x*₂ − x*₁, y*₂ − y*₁, z*₂ − z*₁)

The sought y*-coordinate of the intersection is then y* = y*₁ + t · (y*₂ − y*₁). A unique solution is obtained when z* in the preceding equation is replaced with the mean water level, e.g., H₀ = 0 in the LCS. A better approximation can be accomplished if the intersection is performed with a more realistic model than the plane: the observed sine wave. Combining Equation (2) with the last row of Equation (13), such that the z* terms are equal, brings about a relationship g(y*) = 0 between the wave elevation and the line elevation. The function g has one parameter, y*; together with its derivative g′, both evaluated at the current parameter value y*₀, it enters Newton's method, which finds the ultimate root. The Newton step is the ratio of g(y*₀) and g′(y*₀) (cf. Equation (15)). The loop continues until the difference between old and new parameter estimates falls below a defined threshold.
Once the y* value is known, the z*-coordinate is computed from Equation (2), and lastly x* can be retrieved from the 3D line equation as x* = x*₁ + ((y* − y*₁)/(y*₂ − y*₁)) · (x*₂ − x*₁). The final step brings the locally determined coordinates to the global ones with (ω, Φ, κ; T)_GCS→LCS.
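A compact sketch of the whole intersection, assuming the wave form z* = A sin(2π(y*/λ − t/T) + φ) and a ray not parallel to the y-axis; the names and the phase convention are assumptions:

```python
import numpy as np

def intersect_ray_with_wave(p1, p2, A, lam, T, phi, t, tol=1e-9, max_iter=50):
    """Intersect the ray through p1 (camera center) and p2 (image ray point),
    both in the LCS, with the surface z = A*sin(2*pi*(y/lam - t/T) + phi).

    Newton's method on g(y) = z_wave(y) - z_line(y), started from the
    intersection with the mean water plane z = H0 = 0.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1                          # ray direction; assumes d[1] != 0

    def z_line(y):                       # z on the ray as a function of y
        return p1[2] + (y - p1[1]) * d[2] / d[1]

    def g(y):
        return A * np.sin(2 * np.pi * (y / lam - t / T) + phi) - z_line(y)

    def dg(y):
        return (A * (2 * np.pi / lam)
                * np.cos(2 * np.pi * (y / lam - t / T) + phi) - d[2] / d[1])

    y = p1[1] - p1[2] * d[1] / d[2]      # start: intersection with z = 0
    for _ in range(max_iter):
        step = g(y) / dg(y)              # Newton step g / g'
        y -= step
        if abs(step) < tol:
            break
    z = A * np.sin(2 * np.pi * (y / lam - t / T) + phi)
    x = p1[0] + (y - p1[1]) * d[0] / d[1]   # back-substitute into the line
    return np.array([x, y, z])
```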
Imaging System
The imaging setup comprises three dSLR cameras (Canon 60D, 20-mm focal length) and three continuous illumination sources (1250 W). The spatial resolution of the videos matched full HD (1920 × 1080 pix), acquiring a maximum of 30 fps in progressive mode. The video files were lossy-compressed with the H.264 codec and saved in a .mov container.
The mean object-to-camera distance amounted to 10 m, resulting in an average image scale of 1:500. The cameras were rigidly mounted on a mobile bridge (cf. Figures 10 and 11) and connected with each other, as well as with a PC, via USB cables to allow for (i) remote triggering and (ii) coarse synchronization. Fine alignment of the video frames was possible with the help of a laser dot observed in all cameras. The laser worked in flicker mode, at a frequency lower than that of the video acquisition, and was casually moved over the floating platform's surface, both at the start and the finish of each acquisition. No automation was incorporated at this stage; instead, the alignment was conducted manually. Despite the USB connections, the videos were stored on the memory cards. No spatial reference field was embedded in the vicinity of the system; instead, the calibration and orientation were carried out with the moved reference bar method [70].
Evaluation Strategy
Results achieved with Method 1 (m1) and Method 2 (m2) are confronted with the responses of a capacitive level sensor and of validating physical targets (cf. Figure 11). The capacitive level sensor was mounted on a rod-like probe and sensed the variations in electrical capacity within the sensor. Given the dielectric constant of the liquid, this information can be directly transformed into changes in the water level, in which the probe is normally immersed. Because the sensor samples the changes at a single spot, it provides information merely on the amplitude and frequency of the water-level oscillations, as do the validating targets. The instantaneous wavelength thereby remained unknown, as no direct means existed to judge the accuracy of the calculated wavelength. Indirectly, the correctness of all wave parameters, including the wavelength, can be estimated by confronting the responses of a number of points distributed along the wave propagation (vt1, vt2, cls) with their responses predicted from the model.
In the adjustment, m1 adopted three clusters, whereas m2 tracked up to seven highlights, corresponding to four to six ground control points (i.e., the lamps' endpoints). The distribution of measured and validating points is displayed in Figure 12. Numerical results of the five measurement series are summarized in Table 1, with the graphical representations provided in Figures 13-18. The third and fourth measurement series were evaluated twice with varied image observations (specular highlights; cf. Figure 12). The adopted evaluation strategy is as follows.
Accuracy 1: Validating Targets (vt1, vt2)
With validating targets, we refer to points that were not used in the adjustments aimed at finding the wave parameters. They were measured in images and independently intersected in 3D space. The Z-response of all validating targets is confronted with the value predicted by the devised wave model. The validation takes place in the LCS. Figures 13 and 14 illustrate the results projected onto the traveling sine wave; red corresponds to the response from the target, blue to the model outcome. The normal probability plots test and confirm that the residuals follow the normal distribution.
Accuracy 2: The Capacitive Level Sensor (cls)
To compare the data collected by the capacitive level sensor and the image-based measurement system, temporal synchronization and data resampling had to take place. The start of the capacitive level sensor (cls) data collection was conveyed to the cameras audiovisually, by switching a light on and by emitting a vocal sound. This allowed for rough temporal synchronization. To fine-align the two signals, cross-correlation was carried out. Because the frequency of cls data collection was double the frequency of the camera recording, every other sample of the cls device was discarded to equalize the acquisition rates. Figure 18 illustrates the results of the comparisons.

Figure 18. Accuracy 2 validation results for m1 and m2 in the 1-5 measurement series depicted in (a-e), respectively. In red, the cls response; in blue, the m1 and m2 responses. All comparisons are carried out at the position of point cls.
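The described alignment can be sketched as follows, assuming both signals are stored as NumPy arrays; the sign convention of the returned lag is illustrative:

```python
import numpy as np

def align_cls_to_camera(cls_signal, cam_signal):
    """Fine-align the level-sensor and image-based water-level series.

    The cls sampled at twice the camera rate, so every other cls sample
    is discarded first; the residual shift is then the argmax of the
    full cross-correlation of the mean-free signals.
    """
    cls_ds = np.asarray(cls_signal, float)[::2]    # equalize acquisition rates
    a = cls_ds - cls_ds.mean()
    b = np.asarray(cam_signal, float) - np.mean(cam_signal)
    xcorr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(xcorr)) - (len(b) - 1)     # shift (in samples) of a vs. b
    return cls_ds, lag
```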
Discussion
The results achieved were confronted with the responses of a capacitive level sensor and two validating targets, all of which provided single-point responses and were placed at various positions across the water basin. In an overall assessment, the specular method (Method 2) proved superior to the diffuse method (Method 1).
Accuracy
Method 1 performs well locally, when validated on points in the vicinity of the cluster pair (vt1), but as soon as it is confronted with distant points (vt2), modeling errors grow significantly; compare, e.g., Figure 13a at vt1 and vt2. Method 2 employs the entire water field in the computation and therefore has a global scope without extrapolation effects; its modeling errors are more consistent, yet of slightly higher magnitude. It shall be noted that vt1 and cls, placed at either end of the basin, were under the influence of the principal sine wave as well as of the waves reflected from the basin side walls. The platform floating in the middle also disturbed the principal wave shape. The third and fourth measurement series, evaluated with Method 2 on the highlights observed at the top of the basin (series 3a_m2 and 4a_m2) and around the platform (series 3b_m2 and 4b_m2), proved that the wave, having faced an obstacle, decreases its amplitude and wavelength. Compare the significant deviations of the blue/red curves in Figures 15b and 16b at vt2, as well as in Figure 18c,d at the cls, with Figures 15c and 16c and Figure 18c,d, respectively. This is of high importance in interpreting the behavior of the platform: one must know the form of the water body just before it hits the platform, and not at some distance before that interaction, since the latter no longer corresponds to the real force exerted on the object.
Evaluation results on cls suggest that the wave form changed spatio-temporally. It was systematically attenuated with increasing distance from the generating source. The cls was mounted closer to the wave maker than vt1, vt2, other artificial targets or the highlights and, consequently, measured higher wave amplitudes; compare the subfigures of, e.g., Figure 18c or d. Wave superposition effects (the principal and reflected waves) could contribute to higher amplitudes, as well.
Precision
Precision measures should not be interpreted as the sole quality measure. As the evaluation proved, they are too optimistic when confronted with the accuracy measures. Moreover, the covariance matrices in Method 1 return a standard deviation homogeneous in all coordinates, while Method 2 manifests large uncertainty in the y-coordinate. This is due to the simplified and rigorous modeling of Methods 1 and 2, respectively. Method 2 treats the reconstruction and the modeling tasks simultaneously, whereas Method 1 performs merely the modeling, with no special treatment of the preceding steps other than the a priori standard deviations expressed in the weight matrix.
The inferior precision in the y-coordinate of Method 2 is a side effect of a suboptimal network design, with a good base across the model basin (x-coordinate) and practically no shift along the y-axis (see the definition of the coordinate system in Figure 11). In spite of there being no parallax in the z-coordinate, the precision figures along that axis are satisfying, owing to the introduced water surface model.
Wave Parameters
The wave parameters calculated with Method 1 and Method 2 differ, most seriously for the wavelength parameter. The differences are more pronounced for very long waves with small amplitudes (Series 3 and 4) and less evident for shorter waves or high amplitudes (Series 1, 2 and 5). In cases of small amplitude-to-wavelength ratios, the stability of the solution ought to be brought into discussion; nonetheless, this subject has not been given further insight within this work.
Conclusions
The measurement of a difficult surface was approached with the collinearity equations, treating the surface as diffuse, and with piecewise linear equations, when the surface was restituted solely from the specular reflections visible on it, originating from a set of lamps hung from the ceiling. The accuracies obtained on physical targets floating on the surface, counted over the entire acquisition length, were more favorable for the latter method, falling between 1 and 3 mm.
The concept of using the lamps' reflections for metrology was partially driven by the fact that lamps in ship testing facilities appear predominantly in similar configurations. They provide a calibration field at no labor cost and of high reliability, making the methodology universal and easy to re-apply. The diffuse method, on the contrary, necessitates extra work to establish a net of targets to be placed on the water. Such points are then prone to sinking, partial submergence, occlusion or drift.
The superiority of the specular over the diffuse approach lies in full-field versus single-point shape modeling. Installing a net of points that spans a large area is infeasible; therefore, one is constrained to local observations. In contrast, the number of specular highlights is the product of the number of points in the calibration field and the number of cameras within the imaging system. Their allocation over the measurement volume is steerable by the camera placement. If, however, one is bound to the diffuse approach and aims at full-field data collection, eliciting the water shape over extended surfaces may be possible through the adoption of patches of nets in strategic areas. In both conditions, large-field modeling demands very careful planning, especially for complex-shaped surfaces. The model definition must contain just enough parameters, whereas the observations ought to deliver enough data for their recovery.
A noteworthy aspect of the specular approach is the magnifying effect present on the water surface. Observed highlights undergo an apparent motion under the deforming surface shape. The motion magnitude and trajectory are known from the law of reflection and depend on the camera-to-surface relation and on the relation of the point at which the reflection is observed (in its real position) to its reflection on the surface (virtual position). By modifying the distance between the reflecting surface and the calibration field (real points), the motion magnitude changes proportionally. Put differently, very small surface deformations can render large highlight displacements for a sufficiently distant calibration field.
An important issue to consider when performing least squares adjustment, true for both the diffuse and the specular methods, is the initial approximation of all unknowns. Unless their values are known well enough, the success of the adjustment is put into question. At small amplitudes, the longer the wave, the more precise the approximations must be. If the approximations are imprecise, divergence or convergence to an incorrect solution is highly probable. Reliability measures output from the covariance matrices of the adjustment may serve to evaluate the credibility of the results; however, this has not been investigated within this work.
The model of the wave shape assumes single-frequency oscillations. In large-field observations, this assumption is often violated, as has been observed in the presented work. If the assumption is violated and the model becomes insufficient to describe the phenomena, one may still: (i) use the simple model to observe the surface locally, or (ii) extend it to involve time-varying wave components, eventually modeling the shape as the sum of two elementary waves.
Treatment seeking behaviour in southern Chinese elders with chronic orofacial pain: a qualitative study
Background Chronic orofacial pain (OFP) is common in general adult populations worldwide. High levels of psychological distress and impaired coping abilities are common among Western people with chronic OFP, but limited information was found for southern Chinese people. This study aimed to explore the perceptions and experiences of community-dwelling elderly people with chronic OFP symptoms and their treatment seeking behaviour in Hong Kong. Methods An exploratory qualitative interview study was conducted. Elderly people experiencing chronic OFP symptoms were invited to take part in individual semi-structured interviews; a total of 25 interviews were performed with 25 participants. Results Pertinent issues relating to treatment seeking behaviour emerged from the interviews, many of which were inter-related and overlapping. They were organized into three major themes: (i) Impact of chronic OFP on daily life; (ii) Personal knowledge and lay understanding of chronic OFP; (iii) Management of chronic OFP. The participants had the intention to seek professional treatment, but barriers discouraged them from continuing to do so. They also received complementary treatment for chronic OFP, such as acupuncture, massage and "chi kung". Moreover, a wide range of self-management techniques was also mentioned. On the other hand, those who did not seek professional treatment for the chronic OFP claimed that they had accepted or adapted to the pain as part of their lives. Conclusions This qualitative study observed that elderly people affected by chronic OFP symptoms in Hong Kong sought many different ways to manage their pain, including traditional and complementary approaches. The role of the dentist in dealing with chronic OFP is unclear. Multiple barriers exist to accessing care for chronic OFP. The findings may be used to inform future chronic OFP management strategies in Hong Kong.
Background
Orofacial pain (OFP) can be defined as pain related to the face and mouth regions and may involve both hard and soft tissues in these anatomical regions [1]. Chronic pain is a term used to describe pain that has persisted for 3 months or more, in accordance with the International Association for the Study of Pain definition [2]. The diagnosis and treatment of chronic OFP continue to be challenging even in contemporary dental practice. Anatomical structures in the head and neck region, mechanisms of referred pain, and underlying systemic and psychological pathology complicate diagnosis and management [3].
Chronic OFP is common in general adult populations worldwide with prevalence estimates ranging from 14-42% [4][5][6][7][8]. The adverse impact of OFP on sufferers' lives can be considerable, especially if the pain is chronic [9][10][11][12]. Chronic OFP affects approximately 10% of adults and is more common in the elderly where 50% or more may be affected [13]. Experience of chronic OFP in the elderly has been found to vary between ethnic groups and appears to be more common in Asian elders [14,15]. The impact of chronic OFP also seems to vary between ethnic groups. However, the majority of people with OFP do not seek treatment [16]. There also appears to be an ethnic bias in treatment seeking behaviour with estimates of 40-46% in Western populations compared with around 20% in southern Chinese groups (including Hong Kong) [7,[16][17][18].
High levels of psychological distress and impaired coping abilities are common among Western people with chronic OFP [6]. In contrast, studies of southern Chinese people with chronic OFP of significant intensity found only limited associated psychological distress and low levels of perceived need for treatment [19][20][21]. It has been proposed that southern Chinese people may have more effective coping strategies and greater acceptance of pain than their Western counterparts [7,12,20].
The experience and consequences of chronic OFP have been explored predominantly using quantitative research methods [12][13][14][20]. Whilst the quantitative approach has yielded important information on the magnitude and impact of the chronic pain problem, it does not allow investigation of the perspectives, experiences and responses of individual patients.
The present study, therefore, aimed to explore the perceptions and experiences of southern Chinese community-dwelling elderly people living in Hong Kong with chronic OFP symptoms and their treatment seeking behaviour. Greater understanding of perceptions and experiences of chronic OFP within the local community has implications for defining 'need' in the local context, for understanding the salience of chronic OFP to everyday life and functioning, and for providing a situation analysis of pathways to managing chronic OFP.
Methods
Using a qualitative approach, the perceptions and experiences of chronic OFP were collected through individual semi-structured interviews following an interview schedule. In contrast to survey approaches, the semi-structured interviews enabled deeper insights into participants' knowledge and understanding of the impact of OFP on their daily lives and their treatment seeking behaviours [22]. The study was approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster. Participants who took part in the study provided written informed consent.
Participants
Hong Kong is often described as an 'East meets West' culture where knowledge, use and interest in traditional approaches co-exist with conventional western-based medicine, particularly among older people [23]. The study population was community-based elderly people in Hong Kong who experienced chronic OFP.
Inclusion criteria were: people aged 60 years and above with non-dental chronic OFP symptoms. Exclusion criteria were: communication difficulties; psychiatric disease, including dementia; and not speaking Cantonese.
Participants were a convenience sample of elderly people aged 60 years and above who were attendees at daytime social and community centres which provide leisure facilities and opportunities for socializing. In order to acquire a sample that was as sociodemographically diverse as possible and covered different characteristics of non-dental OFP of at least 3 months' duration (i.e. chronic), subjects were recruited from different places within the Hong Kong Special Administrative Region (HKSAR), including Hong Kong Island and Kowloon. They were chosen from the publicly available list of 57 social and community centres obtained from the website of the Social Welfare Department, the Government of the HKSAR (http://www.swd.gov.hk/doc/elderly/HP%20List%20of%20SE%20Oct_2013.pdf). The social and community centres were approached until enough eligible participants were recruited.
A short initial screening questionnaire was used to ask the potential participants about their experience of different types of non-dental chronic OFP symptoms, the duration and intensity of the symptoms and treatment seeking behaviour for chronic OFP. People who had, during the past month, experienced pain in their face, mouth or jaws which lasted for 1 day or longer and whose pain had begun more than 3 months previously were eligible. The questions concerning chronic OFP symptoms comprised 9 items which did not include toothache (i.e. non-dental), viz. 1) pain in the jaw joint/s, 2) pain in the face just in front of the ear, 3) pain in and around the eyes, 4) pain in the jaw joint/s while opening the mouth wide, 5) sharp shooting pains across the face and cheeks, 6) pain in the jaw joint/s when chewing food, 7) pain in and around the temples, 8) tenderness of the muscles at the side of the face, and 9) a prolonged burning sensation in the tongue or other parts of the mouth.
Construction of the interview schedule
The interview schedule was developed by including issues identified as potentially important from key literature in the field [9,[15][16][17]20] and by conducting 2 focus groups.
The focus groups were designed to explore issues related to non-dental chronic OFP and treatment seeking for the pain. Elderly persons who had non-dental chronic OFP (with the use of the initial screening questionnaire mentioned above) were conveniently recruited from 2 social and community centres and one focus group session was held at each centre. They were invited to take part in the discussion of the focus groups. Each focus group contained 6 participants and the group discussions were conducted by a trained facilitator. Based on an interview guide, the facilitator encouraged participants to talk freely about their OFP perceptions and experiences, how and when professional care was sought for these conditions and how the conditions affected their daily living activities. Group discussions were conducted in Cantonese and each discussion lasted for around 1 hour. The conversations were audio-recorded and transcribed. Content analysis of the transcripts was performed by the facilitator to construct the interview schedule which consisted of the questions asked in the semi-structured interviews; no further analysis of the content was carried out. The constructed interview schedule included open-ended questions on the general experiences of chronic OFP, how the pain affected daily lives, pain characteristics (including severity, frequency and duration), knowledge and personal feeling of the pain and treatment seeking behaviour.
Semi-structured interviews
A purposive sample of 25 elderly persons who had non-dental chronic OFP (identified with the initial screening questionnaire mentioned above) was recruited from 18 social and community centres (excluding the 2 centres where the focus groups were conducted). Subjects of different age, gender, place of residence, educational attainment, oral health condition, self-rated oral health, pain characteristics and treatment seeking behaviour were invited to participate (Table 1).
The participants were interviewed individually and each discussion lasted for around 1 hour. The semi-structured interviews were conducted from October 2009 to January 2010. During each interview, body outline drawings were provided for the participants to mark pain locations in the orofacial region. Based on the interview schedule, a trained interviewer began the interviews by asking open-ended questions, mainly on the general experiences of chronic OFP, to allow the participants to raise any issues that they felt were important. Participants were encouraged to speak freely about how the pain affected their daily lives. Open-ended questions were asked on pain characteristics (including severity, frequency and duration), their knowledge and personal feelings about the pain, and treatment seeking behaviour. Interviews took place until the dialogues became repetitive, indicating that the exploration was saturated. The conversations were recorded for post-hoc transcription and analysis.
Data analyses
The interviews were analysed using the Thematic Framework Approach, which involved a multi-stage thematic analysis [24]. First, the audio-recorded interviews were transcribed verbatim in Chinese. The transcripts were reviewed line-by-line by the trained interviewer, noting recurrent issues. Recurrent issues that emerged were then coded and indexed into different nodes, with related text held in the same node so that patterns and ideas could be easily identified. Another investigator verified the coding and indexing. The nodes were then organized into themes and sub-themes. Finally, mapping and interpretation were achieved through discussion with the other investigators in the study. Quotations were selected to illustrate the observed patterns and interpretation. These selected quotations were then translated into English and are reported in this paper.
Sample characteristics
The characteristics of the participants are shown in Table 1; there were 21 female and 4 male participants with chronic OFP. Their ages ranged from 65 to 83 years. The extra-oral locations of chronic OFP were the temple regions, eyes, face, jaw, chin or nose. The intra-oral locations were the tongue or buccal (cheek) mucosa. The participants had been suffering from chronic OFP for several years, some for more than ten years. In general, participants reported that their OFP was of mild to moderate severity.
Themes
Numerous pertinent issues were obtained from the interviews, many of which were inter-related and overlapping. They were organized into three major themes: (i) Impact of chronic OFP on daily life; (ii) Personal knowledge and lay understanding of chronic OFP; (iii) Management of chronic OFP. A summary of the themes and sub-themes is shown in Table 2.
Impact of chronic OFP on daily life
The participants expressed that their daily lives, including normal daily activities, social and family life and their mood had been adversely affected by the chronic OFP.
Effect on normal daily activities

Some participants reported that it was difficult to carry out daily activities and that their chronic OFP prevented them from doing previously routine things. For example: "My eyes became very painful when I was looking at something for a long period of time. When my eyes felt painful, my vision would become blurred. For example, I could not read newspapers for a long period of time; otherwise my eyes would become tired and painful. So I could only read a few paragraphs from a newspaper." (female participant, age 78, with moderate eye pain for 2 years due to post-herpetic neuralgia). Another participant noted the change in her social behavior: as she became increasingly aware of her limited clarity in speaking, she deliberately reduced oral communication.
"A wound seems to be present on my tongue. My tongue would become very painful and could not move freely when I talk. This affects my speech because I could neither talk fluently, nor pronounce accurately…. So now I would rather not talk." (female participant, age 65, with moderate tongue pain for 10 years).
Moreover, there appeared to be a relationship between the chronic OFP condition and the quality of sleep. Some participants noticed that they had poor sleep quality due to the pain. Those who had tongue pain and/or jaw pain reported that certain foods would trigger the pain, so they would eat very carefully and select only certain types of food. One of them said: "I get (tongue) pain when eating; I could not eat salty or hot food as it would make my pain worse… Now, I like to eat some less-seasoned food, and seldom eat deep-fried food." (female participant, age 78, with moderate tongue pain for 5 years).
Effect on family and social lives

The occurrence of chronic OFP had also impacted on the participants' family and social lives. They mentioned that the chronic OFP had caused them to miss some family and social gatherings and/or activities, or had affected their enjoyment of some events. They would reduce public activities and would prefer to be alone when suffering from chronic OFP, not wanting to be bothered by anything or anyone. Some expressed concerns about how this affected their relationships with their family and friends. For example: "I like playing mahjong very much, but when I get the pain (at the temple region), I could not concentrate and I would often lose the game… so I would not play mahjong when I felt unwell." (female participant, age 68, with moderate pain at the temple region due to trigeminal neuralgia for 12 years).
"When I had the pain (temple region), I would not go to the community centre and I seldom tell other people about my pain, I would rather stay at home for some more rest." (female participant, age 80, with mild pain at the temples region for 2 years).
An alternative, though less recurrent, view was expressed by some participants: that there was a need to get on with something, stressing the importance of distraction from the chronic OFP and not letting the pain govern them. They mentioned that they would not stay at home alone when the pain occurred; they would attend some social engagement or walk around outside the home. A participant emphasized that going out of the house can relieve her pain: "It is better to go out when the (eye and nose) pain occurs… I would feel less pain when I was being distracted. Moreover, in the daytime, my son goes out to work, and I feel bored when I stay at home alone. It's better to attend the community center in the daytime to spend my time." (female participant, age 76, with moderate eye and nose pain for 5 months).
Effect on mood

Participants routinely reported that the chronic OFP had badly affected their mood. Some mentioned that because their chronic OFP was unpredictable and considered to be untreatable by health professionals, they felt depressed, experienced self-regret, and were worried, disturbed or anxious about the pain. They also expressed concern that they lacked control over the course of the pain. One female participant even disclosed suicidal feelings because of her persistent chronic OFP: "Sometimes I really want to die… Why do I live so long? I believe that the (jaw) pain could only be solved if I die. I always feel annoyed and depressed… Why is life so tough? I think it's unfair for me to live so long and suffer from the pain!" (female participant, age 71, with both severe jaw and tongue pain for 3 years).
A contrasting view was expressed by those who reported that the chronic OFP would not affect their mood negatively. They claimed that their chronic OFP had become part of their lives. Some claimed that they had accepted and/or adapted to the pain and could control it. This feeling seemed to be more common among those who had been suffering from chronic OFP for many years. Moreover, one participant said that it was better to forget the pain so that she could live happily as a healthy person: "I have ignored the (eye, face and chin) pain. It's better not to feel that you are a "sick person". I would prefer to live as a healthy person and do whatever I like; I would play, eat, go shopping and travel around as before." (female participant, age 80, with eye, face and chin pain for 15 years due to trigeminal neuralgia).
Personal knowledge and lay understanding of chronic OFP
The participants who were worried about their chronic OFP conditions sought information from a variety of sources. They preferred to obtain advice directly from their physicians; however, the physicians often did not make a detailed examination or give a formal diagnosis or explanation for the chronic OFP. Traditional Chinese medical practitioners, on the other hand, generally diagnosed the chronic OFP as being caused by "internal heat" inside the human body.
Participants generally reported that they obtained more information about their chronic OFP from reading informative books and magazines, through the internet, or from advice from friends and family. They had therefore developed their own ideas about different aspects of their chronic OFP, including diagnosis and underlying causes. Some even worried that the underlying cause of the pain might be something very serious, such as a tumor, stroke or glaucoma. They also identified that, in some cases, triggers for their chronic OFP might be infection, carious teeth, stress, diet or lifestyle factors. For example: "I think that "internal heat" will affect the nervous system. After I knew I had the neuropathic pain from my physician, I avoided eating spicy food, such as pepper, because I believe that spicy food would create the "internal heat" which would stimulate the neuropathic pain." (female participant, age 77, with moderate neuropathic pain around the face and eyes for 10 years).
"I do not have any knowledge about the facial pain, I think that the (nose and eyes) pain was caused by the toothache. Because my eyes started to have pain after I have had toothache… so I believe that all my facial pain was radiating from the carious tooth." (female participant, age 76, with moderate eyes and nose pain for 5 months).
Management of chronic OFP
Treatments obtained from physicians/dentists/traditional Chinese medical practitioners (TCM)

The participants would generally seek help from health professionals as the first strategy for treatment of their chronic OFP. However, they felt there was a lack of special clinics for the treatment of OFP in Hong Kong. The participants usually sought professional help from physicians, traditional Chinese medicine and/or dentists. However, they seldom sought treatment from physicians solely for the chronic OFP. They would prefer to consult physicians about their chronic OFP when they had follow-ups and/or consultations for other systemic/general problems. In these situations, they had some bad experiences, reporting that the physicians were disinterested or dismissive of their problems. This discouraged them from continuing to seek treatment from the physicians. Some of them reported that they had approached many different health professionals (including physicians, TCM practitioners and/or dentists) directly about their chronic OFP; however, they did not find any solutions for their pain. The physicians most often only gave analgesics to relieve the pain, instead of taking a detailed history or providing a detailed examination and diagnosis.
Although they reported that the analgesics could help to relieve some of the pain, they expressed concerns about the drawbacks of the medications. They were especially concerned about the side effects of the analgesics, such as gastric problems and/or dizziness. Those who were worried about the side effects claimed that they would only take the analgesics if the OFP became very severe and they could not tolerate it. Those who had been taking analgesics for their chronic OFP for a long time found that the medications were not as effective as they had been initially. Moreover, drug interactions appeared to be a problem for participants who were also taking a number of other medications for systemic disease. Some of them said: "I visited the physician for my jaw pain. He prescribed me some analgesics. I (would) seldom take the medications. As I often have stomach ache, I am afraid that the medication will affect my stomach." (male participant, age 73, with moderate jaw pain for 2 years).
"I would not take the analgesics because these drugs could not treat my OFP. My physician prescribed me some analgesics, he told me to take them if the pain was severe. But I believe that the analgesics have more cons than pros, so I would rather bear the pain instead of taking them." (female participant, age 77, with moderate neuropathic pain in the face and eyes for 10 years). In contrast, those who occasionally took analgesics to relieve their chronic OFP claimed that they depended very much on those analgesics. For example, A 66 year old participant mentioned that he did not worry about the side effects of the analgesics because the physicians told him that they were negligible.
Those who consulted Traditional Chinese Medical practitioners (TCM) reported that they were told that their chronic OFP was due to the "internal heat" inside their body, and they were prescribed some Chinese medicine to release the "internal heat". However, they claimed that the internal Chinese herbal medicines were not very effective at relieving the chronic OFP. One of them said that she could not take the Chinese medicine because of its side effects: "I am suffering from rheumatic heart disease (RHD) and have taken medications for a period of time. I could not take the Chinese medicine because last time I got nose bleeding after taking it. Then my physician told me that it was due to the drug interactions between the Chinese medicine and the medications for the RHD." (female participant, age 78, with moderate tongue pain for 5 years).
Participants were less likely to seek treatment from dentists for their chronic OFP. Some thought that the chronic OFP was not related to dentistry, so they had not thought of seeking treatment from the dentists. Moreover, some claimed that the dental treatment fee was very expensive and they could not afford it. For example: "I have consulted several physicians and traditional Chinese medical practitioners (for my tongue pain). However, they couldn't solve the problem… the Chinese herbs did not help and the physicians could only give me some vitamins. Now I am dispirited about continuing to seek treatment for the pain. On the other hand, I have not consulted dentists about my tongue pain because I thought that it was not a dental problem. I don't know whom I should seek treatment from." (female participant, age 65, with moderate tongue pain for 10 years).
"I consulted a physician for my jaw pain before, he suggested that I seek treatment from a dentist. However, I cannot afford the dental treatment fee because it is very expensive. I hope that the government can provide free dental treatment service for me." (female participant, age 71, with severe jaw pain for 3 years).
The participants who never sought any treatment from health professionals claimed that their OFP was not a problem for them and that the pain did not have a great impact on their daily lives. They could function normally in daily duties, and the pain did not affect their social life or mood. The chronic OFP they suffered was mild and they claimed that they could control it. For example: "The (left eye) pain started last year; I haven't sought any treatment for the pain, because it disappears if I close my eyes for a while, and I avoid looking at an object for a long time." (female participant, age 83, with mild left eye and orbit pain for more than 1 year).
Use of complementary therapies

Some participants had tried complementary therapies after finding that the health professionals were not effective in solving their chronic OFP. The complementary therapies were often recommended by their friends or relatives. Acupuncture and massage were the most frequently mentioned therapies. However, some commented that these therapies were costly, while others had stopped using them because they found that they were not effective, or effective only for short-term pain reduction. As one of them said: "I had tried acupuncture many times before… The (tongue and jaw) pain was relieved after the first few times, but I found that it was not effective afterwards. Acupuncture was just effective for short-term pain relief." (female participant, age 74, with moderate tongue pain for 10 years and jaw pain for 20 years respectively).
"Later, I had massage therapy; however, I found that it was not very effective to relieve the (face and chin) pain…Afterwards, my daughter recommended a famous traditional Chinese medical practitioner to me for the acupuncture treatment. After I got the twelve courses of acupuncture treatment, I found it was also not effective." (female participant, age 69, with face and chain pain due to trigeminal neuralgia for 3 years).
A contrasting view was expressed by a participant who had severe pain in the jaw and temple region. She reported that "chi kung" was very effective in treating her pain. Before she sought "chi kung" treatment, she had consulted multiple health professionals, but they were not very helpful. Then her friend suggested that she seek treatment from a master of "chi kung". She said: "I remember I had treatment of my (jaw and temple region) pain from the master of "chi kung" for two years. I visited him every day at the beginning, but now I visit him once per week, only for regular follow-up. Although most people do not believe in "chi kung", I believe in it because it really could help me to relieve my (jaw and temple region) pain." (female participant, age 75, with severe pain at the jaw and temple region for 1 year).
Self-management techniques

Apart from treatment by health professionals and complementary therapies, participants described a number of other techniques which they found helpful for alleviating their OFP. Again, some of these methods were suggested by their friends or relatives. Common strategies included the application of Chinese herbal oil, self-massage, cold or warm compresses, using medical pads, nutrition, taking more rest, physical exercise and over-the-counter medications.
Here are some of the conversations from the participants: "When it (jaw and face) was painful, I would use my hands to press onto the painful regions to relieve the pain… Everyday, I also do massage on my face in the mornings and at nights." (female participant, age 75, with severe pain at the jaw and face for 1 year).
"When the (jaw) pain was very severe; I would occasionally apply herbal oil onto the painful region and it was quite effective to relieve the (jaw) pain." (female participant, age 71, with severe pain at the jaw for 3 years).
"Everyday I use medical pads to stick onto the painful regions (right side temple and face). I think it is very useful for relieving the pain. Moreover, when I sleep, I put a towel under the right side of my head to prevent the pillow from touching the painful regions." (female participant, age 82, with severe pain at the face and temple region for years due to recurrent trigeminal neuralgia).
On the other hand, some described less conventional techniques, such as doing other activities to distract their focus from the chronic OFP, such as playing mahjong or number cards, swimming, singing or doing meditation. One of them said: "I think the most wonderful time in each day is when I am in the swimming pool… I like to swim slowly inside the pool. At the time when I am floating in the water, I do not feel any (temple and nose) pain." (male participant, age 66, with moderate pain at the nose and temple region for years).
Social support

The participants did not want to mention their chronic OFP to their family and friends because they did not want them to worry and/or they felt others would not understand their pain and could not help them. For example: "I have never talked about my (right jaw) pain to the others because I think that they cannot help me to solve the problem. Even my wife, I have never told about my pain, as I don't want her to worry about me." (male participant, age 73, with moderate pain at the jaw for 2 years).
In contrast, other participants would share their concerns about their pain with others. In some cases, friends helped them to relieve the pain. For example, one participant who had had trigeminal neuralgia for 3 years mentioned that she had consulted many health professionals to treat the pain in the chin and face regions, but they were not helpful and she became very desperate. However, her friend recommended a Chinese medicine which she found to be very effective in relieving the pain. She said: "My friend heard that I had OFP; she gave me some Chinese herbal tea and claimed that it could relieve the "internal heat" inside my body. After I drank the tea, the pain (face and chin) seemed to be reduced. I took it every morning for one month. To my surprise, the pain (face and chin) seldom occurs now." (female participant, age 69, with severe pain at the face and chin for 3 years due to trigeminal neuralgia).
On the other hand, another participant had a bad experience with the "help" of her friend. She said: "My doctor was unwilling to prescribe medication for the pain at my right eye. One of my friends, who had also suffered from Herpes zoster infection in the past, suggested I buy a topical medication. Therefore, one day I bought this topical medication and applied it around my eye region in the morning. However, that afternoon, I found that my eye had become very red… I immediately visited my doctor to seek treatment." (female participant, age 78, with moderate eye pain for 2 years due to post-herpetic infection). Figure 1 summarizes the experience, adaptation and management of chronic OFP. When the elders reported experiencing chronic OFP, their first strategy was usually to seek professional help from physicians, dentists and/or TCM practitioners. Complementary therapy and/or self-management techniques recommended by family or friends were adopted when they viewed the professional help as ineffective in solving their chronic OFP. However, when the pain was relatively mild, they undertook self-management techniques to cope with it. Social support was sought by some elders, who found it helpful, but not by others, who did not want to worry their families and friends.
Discussion
This qualitative, interview-based research study involved southern Chinese elders from the general population in Hong Kong who were suffering from different types of non-dental chronic OFP. The purpose of the qualitative approach is to contribute conceptual and theoretical knowledge of particular issues that can be learned from individual life experiences and perceptions [22].
According to previous quantitative research, OFP symptoms were found to have a significant detrimental effect on functional and psychosocial well-being and daily life activities, and lowered the quality of life of Chinese elders [15,20,25,26]. However, the majority of southern Chinese elderly people did not seek professional treatment for chronic OFP: only 27% with OFP symptoms sought professional treatment [15,27]. The likelihood of treatment seeking for OFP increased with the number of days on which OFP was experienced [27]. However, in this study, we found that the participants who were suffering from chronic OFP were keen to consult health professionals in the hope of relieving the pain symptoms. On the other hand, there were only a few of them who had never
sought any professional treatment for the chronic OFP. The most likely reason for not seeking treatment was that the pain was relatively mild in nature and did not have a great impact on their daily lives. They claimed that they could control the pain with their own coping strategies and had already accepted or adapted to the pain as part of their lives. People with chronic OFP have often sought help from multiple health professionals for symptomatic pain relief. Because there is a lack of a specialized chronic OFP clinic in Hong Kong, we found that some of the participants did not know where and from whom they should seek treatment. Among the available choice of health professionals, most of them preferred to seek treatment from a physician rather than a dentist. This finding is in agreement with our previous findings [15,27] and with another study in the United Kingdom [28]. The concepts of the clinical roles of physician and dentist were usually determined by the patients' experiences and perceptions, as well as by the influence of their family and peers [29]. They regarded physicians as being better trained to diagnose and treat symptoms that are of non-dental origin [28]. It is relevant that over half of the dental graduates from the University of Hong Kong felt that they were less well equipped to relieve chronic OFP [30]. This might indicate a lack of confidence among dentists in the diagnosis and management of chronic OFP. This situation should be improved via undergraduate and postgraduate dental training and continuing professional development courses, as well as through improved patient awareness of chronic OFP [28,30].
According to a survey in Hong Kong, some barriers exist to accessing the oral health care services. The problem related to the cost of oral health care services could be due to many reasons such as the price information not being available, dental services not being affordable, or a low level of appreciation or value on the cost of care [31]. From our interviews, it was clear that some participants were worried about issues like pain and discomfort during the dental treatment and some were concerned about the cost of dental treatment.
When consulting physicians regarding their OFP, it was unexpected to find that our participants most often did this indirectly. They preferred to consult the physicians during follow-ups and/or consultations for other systemic problems, especially at government medical clinics. However, in these situations, the participants complained that the physicians were uninterested and unwilling to treat the chronic OFP problems. In Hong Kong, dissatisfaction with short consultations but long waiting times, the lack of a stable doctor and the lack of freedom to choose physicians to facilitate continuity of care in the public health sector has already been reported [32]. These factors are likely to discourage people from seeking treatment for chronic OFP.
When a curative treatment is not available for chronic OFP, people often expect to be given analgesic medications ("pain killers") for pain relief. Although analgesics can effectively relieve acute pain in the short run, their efficacy in treating chronic pain is probably marginal and controversial [33]. In our study, the majority of participants were reluctant to take the prescribed analgesics because they were concerned about the side effects. Some mentioned that the analgesics could only relieve the pain symptoms temporarily but could not cure the OFP completely. Moreover, some participants who had other systemic diseases were concerned about possible drug interactions between the analgesics and their existing medications. However, considering the widespread use of analgesics, the overall incidence of serious drug-drug interactions involving analgesics has been relatively low [34]. Thus, even with the different available biomedical treatments for chronic pain, more effective complementary and alternative treatments are needed.
Some participants consulted TCM practitioners for the treatment of chronic OFP. They received either internal Chinese herbal medication and/or complementary treatment. TCM has been known for more than 5,000 years and the belief in Chinese medicine is still ingrained in the general Hong Kong population. According to a previous survey, ten percent of the people in Hong Kong would consult TCM practitioners for their illnesses [35]. However, TCM is considered to be a complementary and alternative medicine (CAM) in many Western countries [36].
"Chi kung" and "Tai Chi" are closely associated with TCM but typically considered as complementary treatments. One of the participants who had received "chi kung" treatment claimed that it was a very effective in relieving her chronic OFP. "Chi kung" or "Qigong" is important in the cultural heritage of China and describes various Chinese systems of ways to improve health both physically and mentally [37]. "Chi kung" has also been found previously to relief chronic OFP [38].
On the other hand, participants who had received acupuncture and massage claimed that those treatments were not as effective in relieving their pain. Some reported that acupuncture was effective for short-term pain reduction only. Acupuncture has also been shown to provide significant short-term pain relief in patients with chronic OFP [39]. However, there was no evidence that massage therapy was effective in relieving chronic OFP [40].
Apart from seeking treatment from different health professionals, participants also developed their own coping strategies and described a wide range of self-management techniques that were quite effective in relieving chronic OFP in most situations. Some of the techniques were suggested by their friends or family. The most commonly used technique was the application of Chinese herbal oil onto the painful region. Such herbal remedies employ natural plant preparations for therapeutic effect [40]. The use of herbal remedies to reduce facial pain has been described, but there is generally insufficient evidence to support their use for chronic pain relief [41].
The qualitative study approach provided a deeper contextual understanding of the experiences and practices of older adults affected by chronic OFP. Another strength was the setting of the study: recruiting through social and community centres increased diversity, enabling wider insights into the multitude of care pathways (both conventional and traditional) compared with clinic-based studies. Additionally, the inclusion of a community sample, as opposed to a clinical sample, avoided the possible biases of perceptions among a treatment-seeking study group. Data saturation was reached after 25 interviews (around 1 hour each), so the sample size was deemed adequate. In qualitative studies, the focus is on context and meaning rather than on building a representative view of the population, and this approach has limitations. A further limitation is that no objective clinical assessment was undertaken to confirm the chronic OFP diagnoses.
These findings provide a greater in-depth understanding of elders' experience of chronic OFP and inform the need for services, including multidisciplinary specialty clinics. The study highlights the issues faced by elders affected by chronic OFP in the local context (and potentially in other populations) and the need to address this problem through community means.
Conclusions
In conclusion, this qualitative study observed that people with chronic OFP symptoms in Hong Kong seek many different ways to manage their pain, including traditional Chinese and complementary approaches. The role of the dentist in the management of chronic OFP appears unclear. A number of barriers exist to accessing care for OFP. The present findings may be useful in informing future chronic OFP management strategies in Hong Kong.
The clinical prognostic significance of hs-cTnT elevation in patients with acute ischemic stroke
Background Cardiac autonomic dysfunction caused by ischemic stroke may lead to adverse outcomes. Elevated high-sensitivity cardiac troponin T (hs-cTnT) is a marker of cardiac disease and can be elevated in acute stroke patients. The aim of the present study was to investigate the association between serum hs-cTnT and prognosis among patients with acute ischemic stroke. Methods Five hundred and sixteen patients (mean age 66.19 ± 10.11 years) with acute ischemic stroke underwent a comprehensive clinical investigation and a serum hs-cTnT test. All patients were followed up for 3 months. The outcome was death or major disability (modified Rankin Scale score ≥ 3) at 3 months after acute ischemic stroke. Results 22.87% (118/516) of patients had serum hs-cTnT elevation (≥ 14 ng/l). Compared with the normal hs-cTnT group, insular stroke (adjusted odds ratio, 2.84; 95% confidence interval, 1.48–4.17; P = 0.001) was more likely in patients with hs-cTnT elevation. In fully adjusted models, serum hs-cTnT elevation was associated with death (adjusted odds ratio, 3.14; 95% confidence interval, 1.16–8.49; P = 0.02), major disability (adjusted odds ratio, 2.07; 95% confidence interval, 1.04–4.51; P = 0.04), and the composite outcome (adjusted odds ratio, 2.22; 95% confidence interval, 1.10–4.48; P = 0.03). Conclusions Higher levels of serum hs-cTnT were independently associated with an increased risk of death or major disability after stroke onset, suggesting that serum hs-cTnT may have prognostic value for poor outcomes of ischemic stroke.
Background
Stroke is an important contributor to death and major disability. Ischemic stroke is the most common subtype of stroke. Autonomic dysfunction is frequent in stroke patients, and studies have shown that cardiac autonomic dysfunction caused by ischemic stroke may lead to adverse outcomes. Cardiac comorbidities account for almost 20% of deaths after ischemic stroke; appropriate preventive or therapeutic measures could be taken if patients with acute stroke at risk of myocardial injury were identified at the time of admission. High-sensitivity cardiac troponin T (hs-cTnT) is the most sensitive marker of myocardial injury; however, it can rise in several other conditions (e.g., renal failure, sepsis, heart failure, and pulmonary edema) [1,2]. In the last decade, the importance of hs-cTnT elevation in acute stroke has attracted many scholars' interest. Previous studies have shown that serum hs-cTnT is elevated in 10-30% of acute stroke patients [3][4][5]. In addition, some studies have found that elevated serum hs-cTnT levels may be associated with damage to specific areas of the brain [6]. However, the pathomechanism of hs-cTnT elevation during the acute stage of ischemic stroke remains unclear; it may develop through neurally mediated autonomic dysregulation after acute stroke. On the other hand, published studies have included only small numbers of patients. To date, whether hs-cTnT levels are associated with death or poor outcome remains uncertain [7,8].
To test the hypothesis that serum hs-cTnT levels can help predict cardiac complications and poor outcome in acute ischemic stroke (AIS), we studied the prognostic correlates of elevated hs-cTnT levels on admission in a cohort of consecutive patients.
Study population
We studied consecutive patients admitted to the Second People's Hospital of Chengdu for AIS within 72 h of symptom onset between May 2012 and December 2017. Patients were diagnosed with AIS if the brain computed tomography (CT) scan was normal or showed acute ischemic changes consistent with the World Health Organization definition (sudden neurological deficit with a putative vascular cause). Acute ischemic stroke was confirmed by diffusion-weighted imaging (DWI) magnetic resonance imaging (MRI) using a Siemens Magnetom Avanto 1.5 Tesla scanner (Siemens Medical Solutions, Erlangen, Germany). The severity of stroke was assessed by the National Institutes of Health Stroke Scale (NIHSS) score. Eligible patients were all patients admitted to our stroke unit during the study period. This study was approved by the ethics committee. Written informed consent was obtained from all study participants or their legal proxies.
Inclusion and exclusion criteria
Patients were included in the study only if they fulfilled all the following criteria: (1) admission for a first-ever acute ischemic stroke; and (2) evidence of a single acute hemispheric lesion consistent with the clinical manifestations. Patients were excluded if they had (3) cardiac disease (including acute myocardial infarction, congestive heart failure, a history of tachyarrhythmia/bradyarrhythmia, or atrial fibrillation), pulmonary disease, or impaired renal function (estimated glomerular filtration rate < 60 mL/min per 1.73 m²); (4) any pharmacological treatment, including β-blockers, possibly affecting autonomic function; or (5) cerebral hemorrhage, fever, or hypoxia. No patients received mechanical thrombectomy or thrombolytic therapy. All patients received standard therapy, consisting of aspirin, lipid-lowering medications, and so on. All patients were followed up for 3 months. The outcome was defined as death or major disability (modified Rankin Scale [mRS] scores 3-5) at 3 months after stroke onset.
Data collection
CT or MRI examination was conducted at the time of admission and repeated 5 days after admission to confirm the location of the lesion. The presence of insular infarction was assessed by an experienced neuroradiologist blinded to the clinical details.
Serum hs-cTnT was measured as part of routine laboratory testing on admission using Elecsys assays on a cobas e analyzer (Roche Diagnostics). Serum hs-cTnT levels were considered abnormal if ≥ 14 ng/l. A standard 12-lead electrocardiogram (ECG) was recorded on admission and assessed by two inspectors blinded to the patients' clinical details. Differences between observers were resolved by consensus.
After the cardiac investigation, if patients were suspected of having acute coronary syndrome, a cardiologist performed additional cardiac evaluations. The waveforms of the 12-lead ECGs were uploaded in digital form and interpreted by a cardiologist. Two-dimensional transthoracic echocardiography was performed in patients with suspected reversible cardiac ischemia, and patients with reversible cardiac ischemia were then excluded.
Statistical analysis
First, patients were classified into normal and hs-cTnT elevation groups according to the level of serum hs-cTnT on admission. Demographic characteristics, vascular risk factors, current smoking, and so on were compared between the two subgroups in univariate analysis using the Pearson χ² test, Fisher exact two-sided test, or Student t test; mean values (± standard deviation) were calculated for continuous variables. The Mann-Whitney U test was used to test differences between the two groups. We then performed logistic regression analyses to determine the association between serum hs-cTnT and outcome (death, major disability, and death/major disability), adjusting for age, sex, hypertension, current smoking, current alcohol drinking, diabetes, hyperlipidemia, insular stroke, family history of stroke, and NIHSS score. Results are expressed as adjusted odds ratios (ORs) with the corresponding 95% confidence intervals (CIs). The data were analyzed using SPSS software (version 22.0). P values < 0.05 were considered statistically significant.
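As an illustration of the modelling step described above, the following R sketch fits the fully adjusted model; the data frame `stroke` and its column names are assumptions for illustration, not the authors' code.

```r
# Minimal sketch of the adjusted logistic regression described above,
# assuming a data frame `stroke` with hypothetical column names.
fit <- glm(
  poor_outcome ~ hs_ctnt_elevated + age + sex + hypertension + smoking +
    alcohol + diabetes + hyperlipidemia + insular_stroke +
    family_history_stroke + nihss,
  data = stroke,
  family = binomial(link = "logit")
)

# Adjusted odds ratios with profile-likelihood 95% confidence intervals
exp(cbind(OR = coef(fit), confint(fit)))
```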
Characteristics of the study subjects
During the study period, 516 patients were identified, comprising 49.03% (253) men and 50.97% (263) women, with a mean age of 66.19 ± 10.11 years (range 38-96 years). In the study population, 367 patients had a history of hypertension, 154 had a history of diabetes, 274 had a history of hyperlipidemia, 149 were current smokers, and 153 were current alcohol drinkers. Of these patients, 152 were diagnosed with insular stroke. During the 3-month follow-up period, 49 of 516 (9.49%) patients had died.
In this study, we also found that the concentration of hs-cTnT was significantly correlated with poor prognosis: the higher the hs-cTnT, the worse the prognosis. The levels of hs-cTnT in the death group and the survival group were 18.67 ± 10.39 and 10.26 ± 6.85, respectively (P = 0.00); the levels in the mRS ≤ 2 group and the major disability group were 9.14 ± 5.98 and 14.14 ± 8.15, respectively (P = 0.00); and the levels in the mRS ≤ 2 group and the composite outcome group were 9.14 ± 5.98 and 15.59 ± 9.14, respectively (P = 0.00).
Discussion
Hs-cTnT is the most sensitive and specific biomarker of myocardial injury and is widely used in the diagnosis of patients with heart disease, especially patients with non-ST-segment elevation acute coronary syndrome [9,10]. Many studies have shown that serum hs-cTnT increases significantly in many patients with acute stroke, and current treatment guidelines for acute ischemic stroke recommend troponin evaluation in the acute stage [11]. It is still controversial whether the increase of troponin after AIS is related to the mortality and disability rates of stroke patients. Most studies suggest that there is a link between them, but a few hold the opposite view. Some studies have shown that elevated troponin is related to poor functional prognosis and that high troponin levels are associated with increased mortality [12][13][14][15]. The potential pathophysiological mechanism of troponin elevation in AIS is still unclear, leading to considerable uncertainty in diagnosis and treatment for the clinician. In our study, 22.87% (118/516) of acute stroke patients had elevated serum hs-cTnT levels, which is congruent with previous studies. The mortality and major disability rates in the elevated hs-cTnT group were 24.58% and 30.31%, respectively, at 3 months, which were significantly higher than those in the normal hs-cTnT group. After fully adjusting for confounders, we found a significant association of elevated hs-cTnT levels with the risks of death or major disability within 3 months after acute ischemic stroke. These results suggest that this association is independent of established risk factors, including age and baseline NIHSS score, and that serum hs-cTnT elevation could be an independent risk factor for poor outcomes with prognostic value for death or major disability among patients with acute ischemic stroke. The evidence on whether increased troponin is associated with insular stroke is inconsistent: some studies have shown that damage to the right or left insula is associated with higher baseline troponin levels [16,17], while others have not found any association between insular stroke and troponin levels [18,19]. In our study, 152 patients were diagnosed with insular stroke; patients with elevated hs-cTnT levels showed a significantly higher prevalence of insular stroke, and after fully adjusting for confounders, we found a significant association of insular stroke with elevated hs-cTnT levels. These results suggest that insular damage might contribute to cardiac autonomic dysfunction; the underlying pathophysiological mechanism might be the downregulation of parasympathetic activity and hence the relative upregulation of sympathetic effects on cardiac function. As a result, this may lead to myocardial injury through contraction band necrosis or ischemia. Our study showed that hs-cTnT ranged from 14.46 to 58.51 ng/L in patients with increased hs-cTnT levels, which is much lower than in myocardial infarction patients. Thus, hs-cTnT elevation during the acute stage of ischemic stroke may develop through neurally mediated autonomic dysregulation after acute stroke.
Some limitations of this study merit consideration. First, we relied on a single baseline blood sample and thus could not account for variations in serum hs-cTnT levels over time; serum hs-cTnT should be measured repeatedly to allow longitudinal analysis, which might provide additional information on its development and prognostic implications. Second, we did not study the association of elevated serum hs-cTnT with recurrent stroke, which might have affected the results. Third, although we adjusted for NIHSS score, which has been shown to correlate with infarction volume, we lacked data on infarction volume. Fourth, we lacked data on the possible influence of left versus right insular stroke on hs-cTnT and prognosis, respectively, because left and right insular lesions have different influences on cardiac autonomic function. In future studies, we will avoid the above limitations in order to obtain more reliable results.
Conclusions
Routine serum hs-cTnT measurement in patients with ischemic stroke may have important novel clinical uses. In addition, further studies should be encouraged regarding the correction of cardiac autonomic dysfunction and whether lowering hs-cTnT could prevent poor outcomes of ischemic stroke.
In conclusion, our findings indicated that higher levels of serum hs-cTnT in acute ischemic stroke were associated with increased risk of death or major disability at 3 months. Serum hs-cTnT may have potential predictive value in risk stratification of ischemic stroke.
Acknowledgments
We thank all patients and their families for generously consenting to use of human tissues in this research.
Funding
This work was funded by the Health and Family Planning Commission of Chengdu (2015009), which was not involved in database management (collection, analysis, or interpretation of data) and had no access to patient information. The funding body did not participate in designing the study or writing the manuscript. The study protocol underwent a peer-review process by the funding body.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Authors' contributions
LYH was responsible for the concept and design of the study, data collection and analysis, and the first draft of the paper and subsequent manuscript. JW was responsible for the concept and design of the study and the data analysis and interpretation. WWD was responsible for overseeing the concept and design of the study, the data analysis and interpretation, and writing the paper. All authors read and approved the final manuscript for publication.
Ethics approval and consent to participate
We obtained ethical approval for this study from the Medical and Health Research Ethics Committee of the Second People's Hospital of Chengdu. The study was carried out according to the Declaration of Helsinki. Local legal and regulatory requirements as well as medical secrecy will be followed. If the patient has a consciousness disorder or aphasia and the decision cannot be made by themselves, the consent form can be signed by the patient's legal proxies. Prior to enrollment, each patient or their legal proxies will be given detailed information about the aims, scope and
Construction and validation of a fatty acid metabolism risk signature for predicting prognosis in acute myeloid leukemia
Background Fatty acid metabolism has been reported to play important roles in the development of acute myeloid leukemia (AML), but there are no prognostic signatures composed of fatty acid metabolism-related genes. As the current prognostic evaluation system has limitations due to the heterogeneity of AML patients, it is necessary to develop a new signature based on fatty acid metabolism to better guide prognosis prediction and treatment selection. Methods We analyzed the RNA sequencing and clinical data of The Cancer Genome Atlas (TCGA) and Vizome cohorts. The analyses were performed with GraphPad 7, the R language, and SPSS. Results We selected nine significant genes in the fatty acid metabolism gene set through univariate Cox analysis and the log-rank test. A fatty acid metabolism signature was then established based on these genes. We found that the signature was an independent unfavourable prognostic factor and increased the precision of prediction when combined with classic factors in a nomogram. Gene Ontology (GO) and gene set enrichment analysis (GSEA) showed that the risk signature was closely associated with mitochondrial metabolism and that the high-risk group had an enhanced immune response. Conclusion The fatty acid metabolism signature is a new independent factor for predicting the clinical outcomes of AML patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12863-022-01099-x.
Background
Acute myeloid leukemia (AML) is a hematopoietic neoplasm characterized by the clonal expansion of abnormally differentiated myeloid progenitor cells [1,2]. With standard chemotherapy, AML patients have poor outcomes and high mortality rates because of relapsed disease and leukemia-related complications, especially patients aged 60 years and older. In addition, the outcome of AML is heterogeneous, varying with patient-related and disease-related factors [2,3]. Currently, cytogenetic risk combined with molecular abnormalities is used as the classic risk stratification system to predict the probability of complete response (CR) and relapse, as well as overall survival (OS), according to national recommendations [4,5]. However, this system has limitations in patients without defined chromosomal or genetic alterations. Therefore, the development of a more accurate risk stratification system for AML is imperative for selecting suitable therapies and precisely predicting clinical outcomes.
Metabolic reprogramming is a dynamic process that accompanies the whole course of leukemia [6][7][8]. When glucose metabolism shifts to aerobic glycolysis, AML cells enter a malignant proliferation phase, and when glucose metabolism shifts back to mitochondrial metabolism, AML cells enter a stem cell-based self-maintenance phase [9,10]. Moreover, fatty acid metabolism also plays an important role in AML progression [11]. Specific alterations in fatty acid oxidation (FAO) and fatty acid synthesis (FAS) participate in core mitochondrial metabolic pathways, influencing the fate of leukemia stem cells (LSCs), adaptation to a specialized microenvironment, and the response to drugs. The expression of FAO enzymes, including APOC2, CD36, CT2, FABP4, PHD3, and CPT1, is elevated in AML compared with normal hematopoiesis; moreover, inhibition of these enzymes resulted in increased sensitivity to chemotherapy and decreased AML cell survival [12][13][14][15][16][17]. However, no modelled signature of fatty acid metabolism has been developed to predict the prognosis of AML patients and to further guide the selection of therapeutic strategies based on fatty acid metabolism.
In this study, we established a fatty acid metabolism risk signature with significant prognostic value based on The Cancer Genome Atlas (TCGA) AML database and validated it in another AML database (Vizome). The fatty acid metabolism risk signature could independently identify AML patients with poor clinical outcomes more precisely than other prognostic markers.
Construction of a fatty acid metabolism signature in AML
Considering the essential role of fatty acid metabolism in AML, we sought to establish a fatty acid metabolism signature (FA risk score) for prognostication. We used patients from the TCGA AML database as the training cohort. Univariate Cox regression analysis was used to explore the prognostic value of fatty acid metabolism-related genes (Supplementary Table 1). Thirty-seven genes were found to be associated with prognosis in AML (Supplementary Table 2). We then further screened the significant genes by log-rank prognostic analysis (Supplementary Fig. 1A) and finally selected 9 genes (MLYCD, CYP4F2, SLC25A1, PLA2G4A, ACBD4, ACOT7, ACSF2, CBR1, and ACSL5). MLYCD and CYP4F2 were identified as protective factors with hazard ratios (HRs) < 1, whereas SLC25A1, PLA2G4A, ACBD4, ACOT7, ACSF2, CBR1, and ACSL5 were defined as risk factors with HRs > 1 (Table 1). The procedure is illustrated in Fig. 1.
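The screening and scoring procedure can be summarized with a short R sketch; the expression matrix `expr` and the survival vectors are assumed objects, and this is an illustration of the approach rather than the authors' exact pipeline.

```r
# Illustrative sketch (assumed data layout): univariate Cox regression per
# fatty acid metabolism gene, then the risk score as a linear combination of
# expression values weighted by the Cox regression coefficients.
library(survival)

genes <- c("MLYCD", "CYP4F2", "SLC25A1", "PLA2G4A", "ACBD4",
           "ACOT7", "ACSF2", "CBR1", "ACSL5")

# `expr` is a hypothetical patients-x-genes expression matrix; `os_time` and
# `os_status` are overall survival time and event indicator per patient.
coefs <- sapply(genes, function(g) {
  fit <- coxph(Surv(os_time, os_status) ~ expr[, g])
  coef(fit)  # log hazard ratio; HR < 1 protective, HR > 1 risk
})

# FA risk score per patient; the median splits high- vs low-risk groups.
fa_score   <- as.numeric(expr[, genes] %*% coefs)
risk_group <- ifelse(fa_score > median(fa_score), "high", "low")
```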
Identification of the fatty acid metabolism signature as a prognostic marker in AML
We first analyzed the distribution of FA risk scores in patients with different survival statuses using a waterfall plot. Patients with lower FA risk scores generally had better survival outcomes (alive) than those with high risk scores (Fig. 2A). We then found, by log-rank analysis, that high-risk patients had shorter OS times than low-risk patients (Fig. 2B). To demonstrate the validity of the 9-gene FA metabolism risk signature in other independent populations, we calculated the risk score for each patient in the Vizome AML database [18], used as an external cohort, with the same formula. The patients were classified into high-risk and low-risk groups based on the median risk score. Consistent with the findings from the TCGA cohort, more surviving patients appeared in the low-risk group, and the OS time was shorter for high-risk patients than for low-risk patients (Fig. 2A-B). Moreover, the sensitivity and specificity of the FA risk score were assessed through time-dependent receiver operating characteristic (ROC) analysis. The areas under the curve (AUCs) for 1-, 2-, and 3-year OS were 0.8297, 0.8392, and 0.8130, respectively, in the training cohort, with significant p values (Fig. 2C). For validation in the external cohort, the AUCs for 1-, 2-, and 3-year OS were 0.6560, 0.6649, and 0.6663, respectively (Fig. 2C).
To explore the prognostic value of the fatty acid metabolism signature in stratified cohorts, patients were classified by two traditional independent markers, age and cytogenetic risk. In the training cohort, high-risk patients had shorter OS times than low-risk patients in the stratified subgroups (Supplementary Fig. 2A-B). However, when we confirmed the results in the validation cohort, we found that the FA score further predicted prognosis only in patients aged ≤ 60 years or with intermediate cytogenetic risk (Supplementary Fig. 2C-D).
Overall, these results indicated that the FA signature is a prognostic marker in AML.
The fatty acid metabolism signature is an independent risk factor for precisely predicting the survival time of AML patients
We next performed univariate and multivariate Cox regression analyses to determine whether the FA risk score is independently correlated with the OS of AML patients. We analyzed the prognostic value of the FA risk score together with other common prognostic factors (age, FLT3 mutation, NPM1 mutation, leukocyte count, and cytogenetic risk). We found that the FA risk score served as an independent prognostic factor, with an HR of 4.238 (p < 0.0001) in the training cohort and 1.406 (p = 0.077) in the validation cohort (Fig. 3A-B).
Then, we conducted ROC curve analyses of the FA risk score and two other independent factors (age and cytogenetic risk) for predicting 3-year OS in the training and validation cohorts and found that the AUC of the FA risk score was larger than that of cytogenetic risk or age (Fig. 3C). These findings confirmed the power of the FA risk score to independently predict prognosis in AML. To achieve a better translational and predictive evaluation system, we developed a nomogram integrating age, cytogenetic risk, and FA score in the training and validation sets (Fig. 4A and Supplementary Fig. 3A). The calibration plots showed high concordance between the predicted and actual probabilities of 1-, 2-, and 3-year survival (Fig. 4B and Supplementary Fig. 3B). The C-index of the merged nomogram score in the validation set was 0.7, which was significantly higher than that of its constituent factors (Fig. 4C). However, in the training set, the C-index of the merged nomogram score was close to that of the FA score but higher than that of age and cytogenetic risk (Supplementary Fig. 3C). These results suggested that incorporating the FA score with traditional AML prognostic factors could increase the precision of survival prediction compared with using the single traditional prognostic factors alone.
Association between the fatty acid metabolism signature and the clinical features of AML
To explore the clinical features associated with the FA metabolism signature, we stratified the AML patients into FA high-risk and FA low-risk groups according to their FA scores and assessed their clinical parameters. The genes that formed the fatty acid metabolism signature exhibited distinct expression patterns corresponding to the risk score (Fig. 5A). Moreover, we found that the distributions of FAB types and cytogenetics-based risk groups differed between the FA high- and low-risk groups, while other clinical features showed no significant differences (Fig. 5A). We then analyzed the FA risk values among the FAB subtypes and found that the M5 subtype exhibited the highest risk value, while the M3 subtype (acute promyelocytic leukemia) exhibited the lowest (Fig. 5B). Patients with favourable cytogenetic risk were more likely to be classified into the FA low-risk group (Fig. 5C). We also found that patients with poor cytogenetic risk had the highest FA risk values compared with those with intermediate or favourable cytogenetic risk (Supplementary Fig. 4A). These data indicated that the FA risk classification was consistent with current risk factors.
The fatty acid metabolism signature is correlated with mitochondrial metabolism, and the high-risk group exhibits an enhanced immune response
To explore the related functions of the fatty acid metabolism signature, we analyzed the genes closely correlated with the FA score (R ≥ 0.5) in the TCGA and Vizome databases (Supplementary Tables 3 and 4). The results of Gene Ontology (GO) analysis showed that the signature was associated with mitochondrial metabolism, including the tricarboxylic acid (TCA) cycle and oxidative phosphorylation, in both databases (Fig. 6A). Moreover, to further investigate the differential biological functions between the high-risk and low-risk groups, we screened out differentially expressed genes (upregulated in the high-risk group; log fold change (logFC) > 0.6 in TCGA, logFC > 0.7 in Vizome; p < 0.05; Supplementary Tables 5 and 6). We found that the most relevant biological processes were enriched in the immune response, inflammatory response, and innate immune response through GO analysis (Fig. 6B). To confirm these associations, we conducted gene set enrichment analysis (GSEA) of immune-related terms, and the results showed that positive regulation of the immune effector process, the IFN-γ biosynthetic process, the chronic inflammatory response, and the regulation of lymphocyte chemotaxis were positively enriched in the high-risk group (Fig. 6C). These results suggested that the high-risk group might exhibit an enhanced immune response. In addition, we explored twenty proteins that interact with the nine FA score proteins through GeneMANIA, and most of these proteins are included in lipid metabolism pathways (Fig. 6D).
Fig. 4 The nomogram combining the fatty acid metabolism signature and classic prognostic factors to predict overall survival. A Nomogram plot showing the merged score system composed of the signature, age, and cytogenetic risk in the validation cohort. B Calibration plot showing the consistency of nomogram-predicted OS and actual OS in the validation cohort. C Comparison of the C-index between the merged score and its single components in the validation cohort (t test). *, P < 0.05; ****, P < 0.0001
Fig. 5 The correlation between the fatty acid metabolism signature and clinicopathological features. A Heatmaps describing the association of the signature with age, gender, FAB subtype, cytogenetic risk, leukocyte count, hemoglobin count, and platelet count in the training and validation cohorts. B The FA scores of FAB subtypes in the training and validation cohorts (t test). C The distribution of cytogenetic risk between the high-risk and low-risk groups (chi-square test). ns, no significance; *, P < 0.05; **, P < 0.01; ***, P < 0.001; ****, P < 0.0001
Discussion
At present, chromosomal abnormalities and somatic gene mutations, which are considered central to the pathogenesis of AML, are combined to guide prognostic prediction and treatment selection [3,19]. However, this evaluation system has limitations because nearly 50% of AML patients harbour a normal karyotype, and some patients even lack common somatic mutations [20]. Thus, it is essential to develop new signatures to further stratify the heterogeneous prognosis of AML patients. In this study, we constructed a suitable prognostic signature composed of gene expression patterns involved in fatty acid metabolism in AML patients. Previous studies have implied that fatty acid metabolism is active in LSCs and triggers various adaptive mechanisms in favour of AML cell survival [16,21]. Reduced synthesis of monounsaturated fatty acids from saturated fatty acids leads to increased levels of ROS and ultimately induces apoptosis of AML cells [22]. Moreover, the liver microenvironment induces fatty acid metabolism adaptation, promoting the growth and chemoresistance of liver-infiltrating leukemia [23]. However, no researchers have combined the genes related to fatty acid metabolism to predict the prognosis of AML. Here, we screened the expression profile of fatty acid metabolism and identified nine genes with prognostic significance. Most of these nine genes have been reported in different tumors [24][25][26][27][28][29], and some of them, such as PLA2G4A, ACOT7, and CBR1, have been studied in AML [30][31][32]. The detailed roles of these genes in the pathogenesis of AML require further exploration.
The fatty acid metabolism signature we established could predict the clinical outcomes of AML patients independently, with good specificity and sensitivity. Acute monocytic leukemia (AML-M5) is a poor-prognosis subtype of AML associated with hyperleukocytosis, extramedullary disease, and abnormal coagulation [33]. We found that M5 subtype patients had the highest FA scores, suggesting that fatty acid metabolism might be highly activated in this subtype and might provide potential therapeutic targets. Our results showed that the FA score was an independent prognostic factor and that the combination of FA score, age, and cytogenetic risk was superior to any single factor, providing a more useful tool for stratifying AML patients.
Fatty acids converge into the TCA cycle and further participate in oxidative phosphorylation (OXPHOS) in mitochondria. Several studies have suggested that cellular enhancement of mitochondrial metabolism might induce Ara-C resistance, leading to poor prognosis, and that targeting OXPHOS sensitizes AML cells to Ara-C [34,35]. Thus, dysregulated fatty acid metabolism is an effective target, and several inhibitors of FAO have been applied in preclinical AML studies [36]. Recently, researchers found that LSCs, which are drug-resistant cells, selectively depend on OXPHOS to supply energy and that the BCL-2 inhibitor venetoclax could inhibit OXPHOS in LSCs [37,38]. The combination of venetoclax with the hypomethylating agent (HMA) azacitidine showed promising synergistic effects in AML patients in a phase 1b clinical study [39,40]. Further studies showed that venetoclax combined with azacitidine targets amino acid metabolism to inhibit OXPHOS in LSCs [41]. Moreover, upregulation of FAO due to RAS pathway mutations or compensatory adaptation in relapsed disease attenuates the essentiality of amino acid metabolism and ultimately decreases sensitivity to the combination treatment with azacitidine and venetoclax [42]. In our study, the fatty acid metabolism signature was closely correlated with mitochondrial metabolism, which is consistent with previous studies. Based on these findings, we propose that fatty acid inhibitors might improve the efficacy of the venetoclax and azacitidine combination, especially in patients with a high-risk FA metabolism signature. Cellular metabolic reprogramming is not only a hallmark of tumours but also a characteristic of immune cells [43]. Long-lived memory CD8 T cells (Tm), key players in immunotherapy, have elevated fatty acid oxidation levels, as previous studies have reported [44]. Here, we found that the high-risk group showed a disturbed immune response. Therefore, we speculate that fatty acid metabolism also plays a role in the abnormal interaction between leukemic cells and immune cells in the bone marrow environment, resulting in immune escape and drug resistance. However, the detailed mechanism needs further exploration and validation in AML.
Conclusion
Overall, we developed a prognostic signature based on nine fatty acid metabolism-related genes that could independently predict clinical outcomes with specificity and sensitivity, as well as improve the existing prognostic evaluation system. Moreover, the fatty acid metabolism signature might be an index to monitor the effect of targeted therapy.
Bioinformatics analysis
The limma R package was used to identify differentially expressed genes between the high-risk and low-risk groups. Gene Ontology (GO) enrichment analysis was performed with DAVID 6.8 (https://david.ncifcrf.gov/tools.jsp) to find possible functions associated with the fatty acid metabolism signature. Gene set enrichment analysis (GSEA) was carried out to verify AML-related functions between patients in the high-risk and low-risk groups (http://www.broadinstitute.org/gsea/index.jsp). Heatmaps were made in R to display information correlated with the fatty acid metabolism signature. A nomogram model consisting of independent prognostic factors was established for better prediction of prognosis. The prediction accuracy of the merged system and its elements was determined by calibration plots and the C-index [45]. Protein-protein interactions among the nine genes were detected using GeneMANIA, a frequently used database that provides protein-protein interaction information [46].
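As a hedged illustration of the limma step, the sketch below derives the genes up-regulated in the high-risk group; the object names (`expr_matrix`, `risk_group`) are assumptions, and note that topTable's p.value filter operates on adjusted P values by default.

```r
# Sketch of the limma-based differential expression between risk groups;
# `expr_matrix` is a genes-x-samples matrix, `risk_group` a "high"/"low"
# label per sample. Not the authors' exact code.
library(limma)

design <- model.matrix(~ 0 + risk_group)
colnames(design) <- levels(factor(risk_group))   # "high", "low"

fit <- lmFit(expr_matrix, design)
fit <- contrasts.fit(fit, makeContrasts(high - low, levels = design))
fit <- eBayes(fit)

# Genes up-regulated in the high-risk group (logFC > 0.6 cutoff as in TCGA)
deg <- topTable(fit, number = Inf, p.value = 0.05, lfc = 0.6)
```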
Statistical analysis
R (version 3.5.2), SPSS (20.0), and GraphPad Prism 7 were mainly used for statistical analysis and figure drawing. Univariate Cox regression analysis was used to identify prognostic genes. A risk signature was developed as a linear combination of their expression levels weighted by the regression coefficients from the univariate Cox regression analysis [47]. Kaplan-Meier survival analysis and the log-rank test were used to assess prognostic value. Multivariate Cox regression analysis was carried out to identify independent prognostic factors. The chi-square test was used to compare clinical features between the two groups. Two-tailed t tests were performed to compare quantitative differences between two groups. ROC curves, forest plots, and survival curves were made with GraphPad Prism 7. Statistical significance was defined as a P value < 0.05.
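A minimal R sketch of the survival comparisons, reusing the assumed objects from the signature sketch above; the timeROC package is one common way to obtain the time-dependent AUCs, though the authors drew their curves in GraphPad Prism.

```r
# Log-rank comparison and time-dependent ROC, under the same assumed objects
# (os_time, os_status, fa_score, risk_group) as in the earlier sketch.
library(survival)
library(timeROC)

# Log-rank test between high- and low-risk groups
survdiff(Surv(os_time, os_status) ~ risk_group)

# Time-dependent AUCs at 1, 2 and 3 years (survival time in days assumed)
roc <- timeROC(T = os_time, delta = os_status, marker = fa_score,
               cause = 1, times = 365 * c(1, 2, 3))
roc$AUC
```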
Demographic and Support Interest Differences Among Nonbirthing Parents Using a Digital Health Platform With Parenthood-Related Anxiety: Cross-Sectional Study
Background The transition to parenthood is a period of major stressors and increased risk of anxiety for all parents. Though rates of perinatal anxiety are similar among women (4%-25%) and men (3%-25%), perinatal anxiety research on nonbirthing partners remains limited. Objective We aimed to examine whether demographic characteristics or digital perinatal support preferences differed among nonbirthing partners with compared to without self-reported high parenthood-related anxiety. Methods In this large cross-sectional study of nonbirthing partners using a digital perinatal health platform during their partner's pregnancy, users reported their parenthood-related anxiety through a 5-item Likert scale in response to the prompt "On a scale of 1=None to 5=Extremely, how anxious are you feeling about parenthood?" High parenthood-related anxiety was defined as reporting being very or extremely anxious about parenthood. During the onboarding survey, in response to the question "Which areas are you most interested in receiving support in?" users selected as many support interests as they desired from a list of options. Chi-square and Fisher exact tests were used to compare demographic characteristics and support interests of nonbirthing partners with low versus high parenthood anxiety. Logistic regression models estimated the odds ratios (ORs), with 95% CIs, of high parenthood-related anxiety with each user characteristic or digital support interest. Results Among 2756 nonbirthing partners enrolled in the digital platform during their partner's pregnancy, 2483 (90.1%) were men, 1668 (71.9%) were first-time parents, 1159 (42.1%) were non-Hispanic White, and 1652 (50.9%) endorsed an annual household income of >US $100,000. Overall, 2505 (91.9%) reported some amount of parenthood-related anxiety, and 437 (15.9%) had high parenthood-related anxiety. High parenthood-related anxiety was more common among non-White nonbirthing partners: compared to those who identified as non-Hispanic White, those who identified as Asian, Black, or Hispanic had 2.39 (95% CI 1.85-3.08), 2.01 (95% CI 1.20-3.23), and 1.68 (95% CI 1.15-2.41) times the odds of high parenthood-related anxiety, respectively. Lower household income was associated with increased odds of reporting high parenthood anxiety, with the greatest effect among those with annual incomes of <US $50,000 (OR 2.13, 95% CI 1.32-3.34). In general, nonbirthing partners were interested in receiving digital support during their partner's pregnancy, but those with high parenthood-related anxiety were more likely to desire digital support for all support interests compared to those without high parenthood anxiety. Those with high parenthood-related anxiety had more than 2 times higher odds of requesting digital education about their emotional health compared to those without high parenthood-related anxiety (OR 2.06, 95% CI 1.67-2.55). Conclusions These findings demonstrate the need for perinatal anxiety-related support for all nonbirthing partners and identify nonbirthing partners' demographic characteristics that increase the odds of endorsing high parenthood-related anxiety. Additionally, these findings suggest that most nonbirthing partners using a digital health platform with high parenthood-related anxiety desire to receive perinatal mental health support.
Introduction
The transition to parenthood is a period of major stressors and increased risk of mental health issues, regardless of whether or not the parent gives birth [1][2][3][4]. Indeed, when stratified by gender (which frequently corresponds to birthing and nonbirthing roles), rates of perinatal anxiety are similar among women (4%-25%) [1,2] and men (3%-25%) [3,4]. Despite the similar burden of perinatal anxiety between parents and the known interplay between maternal and paternal perinatal mood disorders [5][6][7], perinatal anxiety research on nonbirthing partners remains limited, and little is known about the desire of nonbirthing partners to receive mental health support during the perinatal period [8]. Thus, we aimed to examine whether demographic characteristics or desired mental health supports differed among nonbirthing partners with, compared to without, high parenthood-related anxiety.
Eligibility Criteria/Recruitment
This study examined a cohort of users enrolled in the partner pathway in Maven, a digital health platform for pregnant people and their partners, from March 16, 2021, through October 20, 2022. Access to Maven is a sponsored benefit through an employer or health plan of the user or their partner. Users consented to the use of their deidentified data for scientific research upon creating an account on the digital platform. As previously described [9], users self-reported demographic information, desired support interests, and parenthood-related anxiety on a health survey at onboarding. For race and ethnicity, users selected a single option from a list of choices. In response to the question "Which areas are you most interested in receiving support in?" users selected as many support interests as they desired from the following list: "choosing a healthcare provider/team," "labor and delivery options," "preparing to be a working parent," "infant care," "learning about childcare options," "my own emotional health," "understanding my partner's physical experience during pregnancy," and "understanding my partner's emotional experience during pregnancy." Users reported their parenthood-related anxiety on a 5-item Likert scale in response to the prompt "On a scale of 1=None to 5=Extremely, how anxious are you feeling about parenthood?" To be included, users had to be enrolled in Maven's partner track and have completed the Maven Clinic's onboarding survey while their partner was pregnant. Thus, the data for these analyses included platform utilization and user-reported data from the onboarding questionnaire.
Statistical Analysis
Due to small sample sizes, participants who reported their race/ethnicity as American Indian or Alaskan Native, Native Hawaiian or other Pacific Islander, or multiple races were categorized as "other." Income was assessed categorically. High parenthood-related anxiety was defined as responding to the question on parenthood-related anxiety with a 4 ("very") or 5 ("extremely"). Some parenthood-related anxiety was defined as any response other than a 1 ("none") on the Likert scale. Descriptive analyses assessed user demographics and support interests stratified by the presence of high parenthood-related anxiety. In bivariate analyses, the chi-square or Fisher exact tests were used to assess categorical variables. Logistic regression models estimated the odds ratios (ORs) and 95% CIs of reporting high parenthood anxiety with each user characteristic or educational support preference. All analyses were conducted in R (version 3.6.3; R Foundation for Statistical Computing).
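For illustration, the dichotomization and bivariate tests described above can be written as the following R sketch; the data frame `partners` and its column names are hypothetical.

```r
# Sketch of the anxiety dichotomization and bivariate analyses; the data
# frame `partners` and its columns are assumptions for illustration.
partners$high_anxiety <- partners$anxiety_likert >= 4   # 4 = "very", 5 = "extremely"
partners$any_anxiety  <- partners$anxiety_likert > 1    # anything above "none"

# Chi-square (or Fisher exact for sparse cells) by race/ethnicity
tab <- table(partners$race_ethnicity, partners$high_anxiety)
chisq.test(tab)
fisher.test(tab, simulate.p.value = TRUE)

# Unadjusted odds ratio of high anxiety for first-time vs experienced parents
fit <- glm(high_anxiety ~ first_time_parent, data = partners, family = binomial)
exp(cbind(OR = coef(fit), confint(fit)))
```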
Ethical Considerations
The study was designated as exempt by the WCG Institutional Review Board, an independent ethical review board.
Results
Of the 4188 users enrolled in Maven's partner pathway during the study period, 3705 (88.5%) completed the onboarding survey. Of these, 2756 (74.4%) completed their survey while their partner was pregnant and were included for analysis. Overall, most (n=2483, 90.1%) nonbirthing partners self-identified as male, 2034 (73.8%) identified as first-time parents, and 2505 (91.9%) nonbirthing partners endorsed feeling at least some parenthood-related anxiety.
In this study population, 437 (15.9%) of nonbirthing partners were categorized as having high parenthood-related anxiety. Some demographic characteristics increased the odds of endorsing high parenthood-related anxiety (Table 1). Specifically, compared to non-Hispanic White nonbirthing partners, the odds of reporting high parenthood-related anxiety were more than 2-fold higher among Asian (OR 2.39, 95% CI 1.85-3.08) or Black (OR 2.01, 95% CI 1.20-3.23) nonbirthing partners and more than 60% higher among Hispanic nonbirthing partners (OR 1.68, 95% CI 1.15-2.41). Similarly, compared to non-first-time parents, first-time nonbirthing partners had twice the odds of reporting high parenthood-related anxiety (OR 2.01, 95% CI 1.55-2.65), and those with annual incomes of <US $50,000 or between US $50,000 and US $100,000 had more than 2-fold or 48% increased odds, respectively, of endorsing high parenthood-related anxiety compared to those with annual incomes of ≥US $100,000 (<US $50,000: OR 2.13, 95% CI 1.32-3.34; US $50,000-US $100,000: OR 1.48, 95% CI 1.07-2.01). The odds of endorsing high parenthood-related anxiety were similar between male and female nonbirthing parents (OR 1.20, 95% CI 0.69-1.98). In general, nonbirthing partners were interested in receiving digital support during their partner's pregnancy: the most requested support interests overall were infant care and understanding their partner's emotional experience during pregnancy (Table 1). However, those with high parenthood anxiety were more likely to desire digital support across all support interests compared to those without high parenthood anxiety. In particular, the odds of nonbirthing partners desiring to learn more about their own emotional health or their partner's physical or emotional experience during pregnancy were markedly higher compared to those without high parenthood-related anxiety (own emotional health: OR 2.06, 95% CI 1.67-2.55).
Principal Results
In this large sample, nearly all nonbirthing partners reported feeling at least some parenthood-related anxiety, and a substantial proportion of nonbirthing partners desired education about their own or their partner's emotional health during the perinatal period. These findings demonstrate the need for perinatal mental health support for all parents, not just those who give birth, and suggest that digital health platforms may serve as logical entry points for nonbirthing partners to receive perinatal mental health support, as has been proposed [10]. Furthermore, demographic factors such as identifying as non-White or having an annual income of less than US $50,000 were associated with increased odds of high parenthood-related anxiety. This suggests that some nonbirthing partners may face a disproportionate burden of parenthood-related anxiety, and perinatal support interventions targeting these subpopulations may improve equity for nonbirthing partners.
Comparisons With Prior Work
The rate of high parenthood-related anxiety in our study is consistent with levels of paternal anxiety in the literature (3%-25%) [3,4]. Furthermore, the high prevalence of any self-reported anxiety in our study population supports prior findings suggesting that the most common mental health diagnosis for nonbirthing partners in the perinatal period is adjustment disorder with anxiety symptoms [11].
Clinical and Research Implications
In the United States, most current perinatal care delivery models do not provide much perinatal education and mental health support to nonbirthing partners [12]. Furthermore, nonbirthing partners are known to have lower rates of engagement with health care than birthing partners [12,13].
In this study, nonbirthing partners who were participating in a digital perinatal health platform not only actively selected the perinatal educational support topics about which they wanted to learn but also voluntarily endorsed the presence and extent of their parenthood-related anxiety. These findings suggest that digitally screening nonbirthing partners during their partner's pregnancy for their perinatal mental health and pregnancy-related health education needs could fill an important gap in nonbirthing partners' perinatal experience. Mental health support and educational content could be delivered digitally or via in-person education provided by prenatal care providers when nonbirthing partners attend prenatal care visits, as has been proposed [10,14]. However, prior to widespread transformation of prenatal care delivery models, more research is needed to determine the optimal way to screen nonbirthing partners for mental health needs or perinatal educational preferences and to provide the requested education and support based on the results of this screening.
Limitations
Despite the large study population, our study has limitations. Parenthood-related anxiety was self-reported rather than identified via a validated anxiety measure. There is also a risk of selection bias, since all participants in this study were already actively engaged in a digital health platform. Additionally, the cross-sectional design of this study limits causal interpretation of our results. Furthermore, because outcomes were not adjusted for confounders, some bias may remain in our unadjusted analyses. Lastly, though our study population was large, the generalizability of our findings may be reduced, as many participants reported annual incomes of >US $100,000 and people of some races/ethnicities were overrepresented in the sample.
Conclusions
Among nonbirthing partners who used a digital health platform, most had some parenthood-related anxiety, and the odds of endorsing high parenthood-related anxiety were increased among non-White nonbirthing partners, as well as those with annual incomes of less than US $100,000 and, in particular, less than US $50,000. Furthermore, though nonbirthing partners desired education on their own or their partner's emotional health during the perinatal period, the odds of desiring perinatal mental health support were higher among nonbirthing partners who endorsed high parenthood-related anxiety. These findings demonstrate not only the need for perinatal anxiety-related support for all nonbirthing partners but also that most nonbirthing partners with high parenthood-related anxiety using a digital perinatal platform desire to receive digital perinatal mental health support.
Table 1. Nonbirthing partner characteristics and support interests by parenthood-related anxiety level. a N/A: not applicable.
Isolated primary schwannoma of the urinary bladder - a case presentation
Benign nerve sheath tumors (schwannoma and neurofibroma) involving the urinary bladder are rare, with only case reports and limited series. Primary schwannoma of the urinary bladder is a very rare tumor, which may be benign or malignant. It can occur in any part of the body where a nerve sheath is present. These tumors usually occur in patients with Von Recklinghausen's disease. 1 Neurofibromas of the genito-urinary tract affect males more commonly than females, by a ratio of 3:1, and most commonly involve the bladder, but there are also reports of neurofibromas involving the penis, clitoris, prostate, urethra, testis, spermatic cord, and ureter. 2 We report a case of primary isolated schwannoma of the urinary bladder in a patient without von Recklinghausen disease.
Introduction
Benign nerve sheath tumors (schwannoma and neurofibroma) involving the urinary bladder are rare, with only case reports and limited series. Primary schwannoma of the urinary bladder is a very rare tumor, which may be benign or malignant. It can occur in any part of the body where a nerve sheath is present. These tumors usually occur in patients with Von Recklinghausen's disease. 1 Neurofibromas of the genito-urinary tract affect males more commonly than females, by a ratio of 3:1, and most commonly involve the bladder, but there are also reports of neurofibromas involving the penis, clitoris, prostate, urethra, testis, spermatic cord, and ureter. 2 We report a case of primary isolated schwannoma of the urinary bladder in a patient without von Recklinghausen disease.
Case report
A 53-year-old male patient presented with total hematuria and irritative lower urinary tract symptoms, namely urgency and burning on voiding, for 2.5 months. Clinical examination was normal. Laboratory tests and urine culture were negative. On both T1-weighted and T2-weighted imaging, MRI showed a non-papillary solid mass in the region of the urinary bladder neck and trigone, isointense to the detrusor and measuring 2.5 × 3.5 cm in the sagittal and frontal planes (Fig. 1). Cystoscopy showed a non-papillary solid tumor at the urinary bladder neck extending partly into the trigone; both ureteral orifices were free. Biopsies of the tumor were sent for further histological evaluation (Fig. 2). On macroscopic investigation, the submucosal mass was tan, smooth, and rubbery. It was sectioned and stained with Hematoxylin and Eosin (H&E) for further evaluation. Light microscopy showed a spindle cell neoplasm with areas of dense cellularity (Antoni A) and areas of hypocellularity (Antoni B). Within the densely cellular areas, palisading nuclei alternated with pink, nuclear-free zones (Verocay bodies) (Fig. 3). Since these histological findings are highly characteristic of schwannoma, the pathologist did not consider immunohistochemistry necessary. The patient was successfully treated with TURB. Follow-up at 3, 6, and 12 months did not show any disease relapse.
Discussion
Sporadic cases of this tumor are even rarer, representing <0.1% of all bladder tumors. Isolated schwannomas have also been discovered in other areas such as the kidney and retroperitoneum, but rarely in the bladder. 3 The tumor occurs most commonly in the 4th to 6th decades of life. Bladder schwannomas may present with voiding and/or storage symptoms, flank pain or incontinence. They are usually benign, although malignant variants have also been described. They may also present with hematuria, lower urinary tract symptoms (LUTS) and suprapubic discomfort. Diagnosis is made by histopathological study and immunohistochemistry.
The radiological features of schwannomas are characteristic, especially on MRI, and can frequently suggest the diagnosis, which is confirmed by biopsy. Ultrasonography is less useful for diagnosing schwannomas, although it can differentiate a solid from a cystic mass. A CT scan can show the relations between the schwannoma and adjacent organs; the usual findings are predominantly solid, non-calcified, well-encapsulated lesions. However, ultrasonography and CT are rather nonspecific in differentiating schwannomas from other solid tumors. 4 MRI is slightly more sensitive than CT for the evaluation of suspected schwannomas, but differentiation between a bladder schwannoma and carcinoma remains difficult. Both schwannomas and carcinomas are usually isointense to skeletal muscle on T1 weighted imaging (T1WI) and isointense to slightly hyperintense to skeletal muscle on T2 weighted imaging (T2WI). 5 The prognosis of neurofibroma and schwannoma is generally good. Most patients with bladder neurofibroma have been treated by local excision and are alive without recurrence. Only about seven cases have been reported in the English literature to date; we report a further case of primary isolated schwannoma of the urinary bladder without evidence of Von Recklinghausen's disease. Our patient was successfully treated by transurethral resection of the bladder tumor (TURBT). Follow-up at 6 and 12 months showed no disease relapse.
Strategy of management the schwannoma of urinary bladder
Considering that schwannomas of the urinary bladder are rare, they are not usually included in the differential diagnosis. What should be done in such cases, and what should clinicians know about this tumor?
Patients with this diagnosis mainly complain of hematuria and irritative lower urinary tract symptoms, or of recurrent hematuria alone. As an imaging method, MRI with T1 and T2 sequences is preferred. Bladder schwannomas are diagnosed only by histopathological investigation, including H&E staining or S100 immunohistochemistry. Surgical treatment successfully eliminates hematuria and related symptoms; it includes transurethral resection of the bladder tumor (TURBT) and laparoscopic or open cystectomy. Control cystoscopy should be performed at 3, 6 and 12 months after surgical treatment.
Conclusions
Isolated primary schwannoma of the urinary bladder is a very rare occurrence, with only a few cases reported. The diagnosis is histological. Optimal treatment of this tumor includes partial cystectomy or TURBT. Follow-up should include control cystoscopy at 3, 6 and 12 months after surgical treatment.
Positive effects of COVID-19 on food preparation and expenditure habits: a comparative study across three countries
Objective: This study empirically investigates how changing eating habits affect health habits in three countries with entirely different cultures and diets, to understand to what extent the pandemic may be responsible for these changes. Design: Specifically, a questionnaire was conducted in China, Portugal and Turkey in early 2021. A series of statistical analyses were performed to identify how changes in individuals' eating habits have influenced their diets, considering the pandemic context and the varying cultural contexts where this research was performed. Setting: A structured questionnaire form was developed and uploaded to an online platform with unique links for automatic distribution to respondents in each country. Data for the main survey were gathered between 3 January and 1 February 2021. Participants: Using snowball sampling, the authors leveraged their social networks by asking friends and colleagues to distribute the survey to potentially interested individuals. This distribution was stratified according to the distribution of the population. The authors ultimately collected 319 useable surveys from China, 351 from Portugal and 449 from Turkey. Results: The pandemic inspired healthier food habits, mostly because people had additional time to cook, shopped differently for food and spent more money on groceries. Conclusions: The study suggests that, aside from cultural values and dietary habits, the available time and fear of the pandemic best explain the new eating habits. Several implications are provided for researchers and overall society in these three countries.
distinguish the Chinese eating habits (4). Turkish eating habits differ in the consumption of fats and oils (8) and in cooking techniques such as stewing, frying, grilling, roasting and baking.
This study assumes a micro-perspective to investigate how changes in eating habits in three countries with quite different cultural habits and diets have affected citizens' health habits. This cross-cultural research allows the impact of culture to be disentangled from that of the pandemic on residents' eating habits in different cultural settings. To this end, an ordered probit model was estimated for each country to understand how changes in food habits have influenced individuals' healthy eating. Recommendations are provided for researchers and the general public in these countries. The main contributions of this research are theoretical, as it proposes a model that could be tested in different settings, and empirical, as this is the first research to disentangle cultural and pandemic effects in a cross-cultural setting; it also presents recommendations to society.
Literature review
Most people have experienced dramatic life changes during COVID-19. Restrictions have forced people to adapt to a 'new normal'. Being confined at home during the pandemic may have led some people to cope with stress and anxiety by devoting more time to food preparation, cooking and preservation. Having more time but preferring to shop for food less frequently may have also driven households to preserve food for future consumption. The following sections outline changes in individuals' food purchases, preservation, preparation, cooking and expenditure.
Possible changes in shopping habits
Many scholars have observed that lockdown influenced food purchases. Grocery store visits decreased during the lockdown in the Netherlands (9) and the USA (10) . Many studies have shown that online grocery shopping has become a primary means of food acquisition (9) . The food industry has also faced radical changes due to partial lockdowns, restrictions on in-store capacity and reduced operating hours. Some restaurants remained closed for months and needed to implement new approaches to service delivery, such as online ordering (11) and the development of mobile applications for such services (12) . People have made use of credit cards more intensively for online shopping, rather than going out to spend directly in restaurants and groceries (13) .
In Turkey, for instance, companies improved their existing applications to accept online orders from households (e.g. GetirBüyük), and an online food delivery company expanded its market potential (e.g. Yemeksepeti). In Portugal, supermarkets used taxis paid for by the municipalities to increase delivery capacity. Online purchases accelerated in China due to the COVID-19 pandemic, and consumers' purchase preferences (e.g. cosmetics) favoured contactless, rather than in-person, transactions (14). Wen et al. (2) found that Chinese diners' likelihood of face-to-face dining declined during the pandemic. Elsewhere, countries such as India pioneered several online food delivery applications such as Swiggy and Zomato. These players are likely to improve their market share and threaten regular restaurants, with some establishments even deciding to cease operations. While there has been no indication of how long such practices may continue or of their eventual dominance in the market, specific population segments (e.g. the elderly and office workers) have benefited greatly.
Possible changes in food preservation
When the WHO declared COVID-19 a pandemic in March 2020, disruptions in the food chain resulted in limited access to fresh food (15) . This unforeseen outcome led to food insecurity; food was available but not necessarily accessible. The FAO defines food security as having consistent access to the food necessary for a healthy life. Food security is based on four pillars: ensuring a safe and nutritionally adequate food supply, stability in the food supply, food availability, and social and economic access to sufficient food (16) . Food preservation allows foods to retain their quality and be stored for longer periods and even throughout the year. Therefore, preserved foods can offer a practical solution to sustain a stable and adequate supply of and access to food. Today's food industry uses different preservation techniques to produce nutritious and safe foods with a long shelf life. For this reason, traditional food preservation skills have been in decline in recent decades (17) . This trend may explain why consumers bought food with a long shelf life early in the pandemic (15) . Individuals can also store preserved food longer, resulting in less frequent trips to the grocery store.
In Portugal, the traditional ways of food preservation are no longer used, and freezing methods replaced dried fish, salt conservation and bulk conservation. This preservation increased substantially when the food shortage panic started with the first lockdown. As one of the oldest methods for food preservation, drying is still practised by the rural population in Turkey. Many fruits and vegetables are seasonally abundant and cheaper, so rural people prefer to dry these fruits and vegetables. They consume such dried food during winter and sell them to earn money. Moreover, pickling or jam preparation is still practised by many urban populations. Panic buying at the beginning of the pandemic caused a food shortage. Some people then began to buy fresh foods and preserve them in the case of further shortages. The most straightforward and economical food preservation method is home freezing, which does not require specialised equipment (e.g. a canning machine or dehydrator) and requires limited preparation. Compared with other preservation methods, frozen foods also maintain their quality and nutritional value (18) .
Possible changes in food preparation
In the 20th century, alongside lifestyle changes and the entry of many women into the workforce, home cooking habits changed. Several studies have indicated that the time individuals spend on food preparation and cooking has declined in recent decades (19,20). With the increased availability and accessibility of ultra-processed foods (UPF), the time required to prepare and cook meals has decreased. Moreover, ultra-processed foods have enabled people to prepare meals with less skill. Meals eaten outside the home have increased simultaneously (21). A decline in home cooking has contributed to nutritional concerns due to the adverse effects of consuming ultra-processed or takeaway foods. Several studies have shown that the consumption of ultra-processed foods is associated with an increased risk of diet-related diseases (22,23). Likewise, takeaway foods often correspond to a higher intake of calories, fat and Na but a lower intake of fruits, vegetables and wholegrains (21) and a poorer diet quality (24).
Studies showed that the consumption of UPF increased in Portugal (22) and China (25) . While UPF contributes 23·8 % of total energy intake among Portuguese (26) , in China, UPF provides 18 % of total energy (27) . Flavoured yoghurt, cold meat and soft drinks are the main preferred UPF among Portuguese (26) ; instant pork-mince steam bun/dumpling, instant noodles, cookies, cakes, sausages and packed snacks are the most commonly consumed UPF among Chinese (25) and confectionery, sweet biscuits, soft drinks and cold meats are the main UPF consumed by Turks (28) .
Many studies have reported a rise in home cooking during the pandemic (29) as government lockdowns forced people to spend more time at home. In addition, restaurants were shut down or offered only takeaway during these periods. Research has indicated that home cooking is associated with a healthier diet (19) , including greater consumption of fruit and vegetables (30) . In addition to the nutritional benefits of home cooking, a systematic review summarised other positive outcomes, for example, the development of personal relationships and stronger gender or cultural identities (31) .
Possible changes in cooking style
Perhaps regardless of country-based differences, more time at home has given people a chance to bake, try new recipes and practise different cooking styles. For instance, time-consuming tasks such as baking increased during lockdown (29) . Therefore, the pandemic has resulted in a transition to healthy cooking and food preparation: cooking more often, cooking with fresh ingredients and eating takeaway less frequently (32) . Parents reported cooking more meals from scratch during the pandemic (33) .
Possible changes in food expenditure
A large body of empirical evidence and secondary sources have addressed the pandemic's adverse effects on individuals' well-being and comfort, cultural values, and economic conditions worldwide (2) . Despite limited insight into how the pandemic has affected individuals' spending on at-home food consumption, COVID-19 appears to have adversely affected families with food allergies due to an increase in food prices and a lack of food availability; monthly spending among this group has increased by 23 % during the pandemic (34) .
The possible impacts of the pandemic on food-related expenditure span multiple categories. First, given the severe effects of COVID-19 on national economies, millions of citizens lost their jobs or earned less income compared to previous years. This trend may have directly resulted in lower food consumption expenses. Second, empirical evidence suggests that people are motivated to dine out for social interaction, leisure, pleasure and work activities (35) , although such behaviour varies across countries. However, long-term lockdowns forced households to stay home and prevented them from dining out with family or friends. So, individuals may have spent less on eating out but spent more on food for at-home consumption.
Third, as a direct consequence of more people living in the same household, total spending on at-home food consumption is likely to have increased. From a macroeconomic perspective, some countries have an inadequate food supply to meet the increasing demand. This problem may have led food prices to increase substantially, leading to unpredictable jumps in inflation rates both nationally and internationally. Furthermore, income-related uncertainty may have compelled individuals to save more money, which may have been spent on food (e.g. to enjoyably meet a basic human need). Finally, changes in food expenditure could have been influenced by age and personal preferences (e.g. being a vegetarian or meat-eater) (36) .
Methodology
Based on the findings of earlier studies regarding the pandemic's effects on consumer behaviour (34), the survey distributed in this study was intended to investigate how COVID-19 may have inspired changes in individuals' food shopping, preservation, preparation, cooking and expenditure. The questionnaire has been described in detail elsewhere (37).
As indicated in Fig. 1, the survey consisted of five parts. The first section focused on how respondents' food shopping patterns changed amid their 'new normal' compared with before the pandemic. The second section investigated the likelihood that respondents' food preservation and preparation habits changed. The following section provided a snapshot of respondents' at-home cooking styles compared with before the pandemic. The fourth section pertained to whether the pandemic affected respondents' food expenditure, healthy eating, satisfaction with rules guiding their 'new normal' and physical activity. These sections were developed with items measured with a five-point agreement scale, comparing the current situation with the previous one. The last section solicited respondents' demographics.
This questionnaire was uploaded to an online platform with unique links for automatic distribution to respondents. Data for the main survey were gathered between 3 January and 1 February 2021. Each author was responsible for approaching potential respondents in their respective countries by sending separate emails to each respondent. Using snowball sampling, the authors leveraged their social networks by asking friends and colleagues to distribute the survey to potentially interested individuals. This distribution was stratified according to the distribution of the population. The first respondents were part of the social networks of the researchers, but this relation was lost as more people were involved in sharing the questionnaire. Only one person per household was asked to participate in the survey. Once data collection was discontinued, all questionnaires were checked for missing variables. Surveys containing more than five unanswered items, and those from eleven respondents who had responded incorrectly to an attention check question, were discarded from the analysis. The remaining items were merged into a single table for statistical analysis and exploration of possible differences within or between countries. The authors ultimately collected 319 useable surveys from China, 351 from Portugal and 449 from Turkey.
Empirical data were analysed using factor analysis and non-parametric tests to examine differences between the three countries. An ordered probit model was constructed to discern the effects of changes in food habits on individuals' healthy eating. The analysis consisted of five steps. Exploratory factor analysis (EFA) was conducted to identify dimensions and constructs from the data, as no prior studies had tested these features together. Factor extraction involved maximum likelihood estimation with varimax rotation. The analysis applied a latent root criterion of 1·0 for factor retention; 0·5 was the cut-off criterion for item loadings. The second step involved rescaling the extracted constructs using a five-point scale, taking the means of all components within each construct. Once the constructs were derived from the EFA and converted into a three-point scale, regression analysis was used. As the variables were ordinal, an ordered probit model was deemed suitable. The model cut-off for the first category was 'not important at all'. Stata 13 was used to estimate the model through a maximum likelihood function. The regression stage involved estimating a general model from which individual country models were derived before validating the general model.
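To make the factor-extraction step concrete, the sketch below reproduces it in Python with the open-source factor_analyzer package. This is an illustration under stated assumptions rather than the authors' actual workflow (they used Stata 13); the input file and column layout are hypothetical.

import pandas as pd
import numpy as np
from factor_analyzer import FactorAnalyzer

# Hypothetical table: one row per respondent, one column per five-point item.
items = pd.read_csv("survey_items.csv")

# Latent root criterion: retain factors with eigenvalue greater than 1.0.
probe = FactorAnalyzer(rotation=None)
probe.fit(items)
eigenvalues, _ = probe.get_eigenvalues()
n_factors = int(np.sum(eigenvalues > 1.0))

# Re-fit with maximum likelihood estimation and varimax rotation, as in the paper.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="ml")
fa.fit(items)

# Apply the 0.5 loading cut-off used for item retention.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs().max(axis=1) > 0.5].round(2))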
The third step entailed independent-sample Kruskal-Wallis tests of the extracted components to determine whether the distribution of all samples was the same. The fourth step comprised pairwise comparisons between pairs of countries to test whether their sample distributions coincided. The fifth step involved estimating an ordered probit model, adjusted for the entire sample and estimated for each country, to determine how changes in food habits had affected individuals' healthy eating, disentangling the cultural habits each country represents.
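The third and fifth steps can likewise be sketched in Python: scipy provides the Kruskal-Wallis test and statsmodels an ordered probit. The variable names below (healthy_eating, country and the regressors) are illustrative stand-ins, not the authors' construct labels.

import pandas as pd
from scipy.stats import kruskal
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("constructs.csv")  # hypothetical table of rescaled constructs

# Step 3: Kruskal-Wallis test of one component across the three countries.
groups = [g["online_shopping"].dropna() for _, g in df.groupby("country")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Step 5: ordered probit with healthy eating as the ordinal outcome.
regressors = ["cooking_time", "bulk_preservation", "online_shopping",
              "time_at_home", "food_expenditure", "physical_activity"]
model = OrderedModel(df["healthy_eating"], df[regressors], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())

Per-country models follow by re-fitting on the subset df[df["country"] == c] for each country c.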
Results
Data were gathered in China, Portugal and Turkey with roughly the same distribution. The authors initially intended to approach 314 respondents in each country, given the assumption of a binomial distribution with maximum dispersion; that is, at least half of the population was expected to alter their food habits during lockdown with a CI of 95 % and a sampling error of 2·5 %. Among the 319 valid questionnaires from China, 351 from Portugal and 449 from Turkey, the sampling error was lower than anticipated and thus ensured better generalisability of the results. The sample profile in each country can be summarised as follows. On average, respondents were 43·0 years old in Portugal, 42·5 years old in Turkey and 30·0 years old in China. Most respondents in China were between 18 and 34 years old (76·4 %). Many of those in Portugal (75·8 %) and two-thirds in Turkey (65·7 %) were between 35 and 64 years old. Women were nearly identically represented in each country's sample: 66·0 % (China), 67·0 % (Portugal) and 65·0 % (Turkey). The sample distribution across ages is very similar to the distribution of the population by age in those countries. As such, it could be assumed that the sample is generalisable. Most respondents held a full-time job: 57·0 % (China), 82·0 % (Portugal) and 61·0 % (Turkey). Many respondents in China were students (25·0 %), and a fair proportion in Turkey was retired (17·0 %). Nearly half of the respondents lived with two or three other people. One-third of Chinese respondents lived with four to six other people (33·8 %), whereas one-quarter lived with one person in Portugal and Turkey. Regarding the risk of spreading COVID-19, Turkey ranked first in terms of family members who had tested positive (12·5 %), followed by Portugal (7 %) and China (0·5 %).
Part I: comparison across China, Portugal and Turkey

As noted above, this study was composed of two main parts. The first part involved an overview of how the pandemic has led to potential changes in shopping habits, food preservation, food preparation, cooking styles, food expenditure, length of stay at home and demographic characteristics among individuals in China, Portugal and Turkey. Comparative results in each category are summarised below.
Changes in shopping habits
An EFA was performed to depict shopping styles during the lockdown. Twelve related questionnaire items spanned delivery orders and online shopping. Two items about shopping in person in supermarkets or open markets were unreliable, presumably because open markets were closed in some countries (e.g. Portugal), and supermarkets faced limited capacity and operating hours. As indicated in Table 1, the two extracted components accounted for 60·3 % of the variance (Kaiser-Meyer-Olkin (KMO) = 0·861, P < 0·001), and each demonstrated acceptable reliability (i.e. above 0·5) (Brown, 1996).
Online shopping and delivery orders differed across countries (Table 2). As shown in Table 3, pairwise tests reinforced these variations: Portugal and China demonstrated homogeneous shopping behaviour regarding delivery orders, with no statistically significant differences between them, whereas Turkey and Portugal were homogeneous in online shopping. The frequency of delivery orders remained the same as before the pandemic in Portugal and China but declined in Turkey. In Portugal, online orders started to increase; nevertheless, delivery became so slow that a supermarket order took at least 4 weeks to arrive.
Changes in food preservation
Food preservation also varies culturally. An EFA revealed six items grouped in three components: dried, bulk and frozen. Table 4 shows that all preservation methods differed significantly by country: dried, bulk and frozen. As indicated in Table 3, pairwise comparisons confirmed these variations: only bulk preservation methods were similar in Portugal and China. Drying was most common in Turkey. Bulk methods were slightly higher in China, followed by Portugal and Turkey. Freezing was most common in Portugal, followed by China, and to a lesser extent in Turkey. These results reveal the cultural traditions of those countries and the cooking styles. Bulk preservation may help a healthy life in Portugal and China as it is a traditional method of preserving vegetables in bulk storage by using salt-free techniques. There is no fermentation, either. In Turkey, other than frozen food, there is a culture of drying vegetables and fruits in the summer for consumption in the winter. These methods are more cost-effective than buying fresh foods, regardless of country.
Changes in food preparation
Food preparation items from the questionnaire were reduced using an EFA, which collapsed the nine items into two groups that collectively accounted for 64·5 % of the variance (KMO = 0·826, P < 0·001; the KMO statistic tests the sampling adequacy of the factor analysis) (Table 5). Food preparation varied significantly across countries regarding daily cooking time and food preparation for future consumption (Table 2). Table 3 lists the pairwise tests, showing that Portugal and Turkey were similar in daily cooking time, while Portugal and China were similar in time spent on food preparation for future consumption. Daily cooking time increased in all three countries, with Portugal and Turkey registering higher increases than China. Food preparation for future consumption was similar to before the pandemic (Table 2). For instance, Rodrigues et al. (7) note that Portuguese families like to spend time eating together, and it is traditional to spend time preparing meals with the family during holidays or festive seasons. It is not surprising that this habit has been extended now that the whole family is at home.
Changes in cooking style
Cooking style items were reduced with an EFA that classified the five items into two groups, accounting for 76 % of the variance (KMO = 0·826, P < 0·001) (Table 6). Cooking styles and cooking logistics varied across countries (Table 2). Here, cooking style refers to the procedures used to prepare food, whereas cooking logistics refers to the space and equipment used for cooking. These differences were confirmed by pairwise comparison tests, aside from Turkey and China, which demonstrated similar cooking styles; Portugal and Turkey had similar cooking logistics. As shown in Table 3, cooking styles changed primarily in Turkey but remained the same as before the pandemic in Portugal and China. Cooking logistics also varied before and during the pandemic in terms of the need for more space and equipment/utensils.
Changes in food expenditure
Food expenditure increased during the pandemic at different paces within countries (t = 234·598, P < 0·001; Table 2).
With an average of 3·84, Turkey demonstrated the most significant increase in food expenditure, followed by China (3·39) and Portugal (2·85). Pairwise comparison tests confirmed that all three countries presented different consumption patterns during the pandemic (Table 3).
Changes in length of stay at home

Individuals from these countries spent more time at home during the pandemic but at different rates (Table 2). Pairwise comparisons suggested that all residents spent much more time at home due to lockdowns (Table 3).

Part II: effects of changes in food habits on healthy eating

COVID-19 has drastically altered the social, economic and psychological spheres of life. It has also affected individuals' indoor and outdoor activities. Lockdowns have moved some individuals towards healthier habits. Accordingly, this research also aimed to test how such changes influenced healthy eating habits. Explanatory variables included food preparation (time to prepare meals and to cook for future consumption, cooking styles and cooking logistics), preservation habits (dried, bulk and frozen), shopping habits (ordering for delivery and shopping online), length of time spent at home during the pandemic, household changes, changes in food expenditure, and satisfaction with life and physical activity, all used to explain individuals' healthy eating. The general model accounted for 16·9 % of the variance in healthy eating habits. The likelihood ratio test with 14 df (n 1119) was 305·78. Ten out of fourteen variables had significant β weights. Cooking for future consumption, cooking style, drying as food preservation and household changes were not significant.
The β sign and coefficient reflected how variables influenced respondents' healthy eating habits. Influential items included time to prepare meals, cooking logistics, bulk preservation, online shopping, time spent at home, food expenditure, satisfaction with the 'new normal' and physical activity. These results suggested that healthy habits were reinforced through more time cooking, better cooking logistics, bulk preservation, online shopping, more time at home, a higher groceries budget, more physical activity and satisfaction with the 'new normal'. Items with harmful effects were freezing food and ordering food for delivery, reducing individuals' healthy food habits.
The model estimated for Turkey accounted for 19·42 % of the variance with a likelihood ratio test with 14 df (n 449) of 185·59 (P < 0·05). For Portugal, the proportion of variance explained was 42·5 %, and the likelihood ratio for a sample of 315 was 24·30. For China, the variance explained was 18·6 %, and the likelihood ratio was 142·78 for a sample of 319.
As indicated in Table 7, seven variables were significant for Turkey. Healthy food habits arose from more time spent cooking, bulk preservation, online shopping, higher food expenditure, more physical activity and greater satisfaction with the rules of the 'new normal'. Ordering for home delivery had a negative impact on healthy eating habits. In Portugal, six variables were significant. Items that positively affected perceived healthy eating habits were bulk preservation, time spent at home, satisfaction with the 'new normal' and physical activity. Items with a negative impact were cooking logistics and freezing food. Seven variables were significant for China; the following elements positively influenced healthy eating habits: cooking style, cooking logistics, bulk preservation, time spent at home, food expenditure and physical activity. (Table notes: KMO, Kaiser-Meyer-Olkin. Cooking style was rated on a five-point scale from 1, 'much more similar', to 5, 'more different'; cooking logistics from 1, 'much less', to 5, 'much more'.)
To determine whether healthy eating habits varied across China, Portugal and Turkey, Student's t tests were estimated among the β regressors at a 95 % CI (Table 8). Turkey and Portugal demonstrated statistically significant differences in food preparation for future consumption, cooking styles, cooking logistics, drying as preservation and household changes; however, the β coefficients of these models were not significant, and the β weights showed opposite signs in the two models. Therefore, healthy eating habits differ considerably between Portugal and Turkey. More specifically, the number of significantly different β regressors was 11 between Turkey and China and 4 between Portugal and China.
Discussion and conclusions
The COVID-19 pandemic has altered individuals' routines as lockdowns have forced people to stay home. Noticeable changes in eating behaviour during the pandemic have involved home cooking and spending on at-home food consumption (29) . Using a survey administered in China, Portugal and Turkey, the current study empirically investigated how food purchases, preparation, cooking and expenditure have changed based on the pandemic. Comparing findings across these countries revealed specific insights into each country's characteristics. In particular, an ordered probit model demonstrated how changes in individuals' food habits had influenced their healthy eating habits to some extent.
First, people's shopping habits during the pandemic were explored based on ordering for delivery and shopping online. The findings support research showing that the pandemic drastically altered people's shopping habits, leading them to rely on online shopping and delivery (11,12). Portugal and China seemed to have similar shopping behaviour in ordering for delivery. Online shopping slightly increased in all three analysed countries, and Turkey and Portugal did not present statistically significant differences regarding shopping online. Changes in shopping habits reflect differences across countries regarding lockdown restrictions and the COVID-19 pandemic in general. Online shopping and ordering for delivery could help individuals spend less time in public. On the other hand, in Turkey, in-person grocery shopping was one of the only opportunities to be outside during the confinement, so people may still have preferred to do grocery shopping in person. Moreover, online grocery shopping is available mainly in urban areas. In addition, since the level of food expenditure increased, people in Turkey might have preferred to visit discount supermarkets, which do not offer online shopping.
Second, people prefer different food preservation methods, such as dry, bulk and frozen, based on cultural traditions. For example, China and Portugal similarly favoured bulk preservation, Turkey preferred drying methods and the Portuguese also enjoyed freezing, followed by the Chinese. During the pandemic, since working from home was possible, many people living in big cities moved to their summer houses, mainly in the coastal parts of Turkey, and some moved to their rural hometowns. Living in smaller towns made it possible to follow traditional food preservation methods like sun drying. Seasonal and local fruits and vegetables in the coastal or rural areas were easily accessible and abundant, so many urban dwellers re-discovered traditional ways of living. We did not ask respondents which foods they applied preservation methods to; however, traditionally in Turkey, fruits like apples, apricots and plums and vegetables like okra, tomato, eggplant and pepper are sun-dried and stored for winter use. A possible explanation for why the Chinese and Portuguese preferred bulk preservation is that many stayed in their urban areas, where bulk preservation was the feasible food preservation method. COVID-19 may have led people to focus on freezing due to spending extra time at home. Moreover, after the declaration of the pandemic, many supermarkets faced stock problems due to panic buying, and uncertainty and fear led many people to stockpile (38). To stock up and store fresh foods such as fruits, vegetables and meat for much longer periods, people might have started to apply traditional preserving methods like home drying and freezing; the study results indicated that respondents used less frozen food for daily consumption, which suggests that they prepared and stored frozen foods for future consumption. Third, two main factors were identified as relevant to food preparation (i.e. cooking time and future consumption). Significant differences among these countries could offer greater insights into people's habits (21,31). Portugal and Turkey were similar in cooking time, whereas Portugal and China were similar in preparation for future consumption. As the length of time spent at home increased, it was not surprising that time spent cooking also increased in all three countries, because one of the main barriers to home food preparation/cooking is lack of time (19). Our findings align with previous studies, which reported more time spent cooking in Italy, Denmark, Poland and China (34,39,40). The level of food expenditure also rose during the pandemic, most notably in Turkey, followed by China and Portugal.
Consumers consider home cooking healthy, but lack of time is the main barrier to home cooking (19). Consistent with previous research, a positive association between more time for cooking and healthy eating habits among Turkish respondents was observed; time spent at home also positively influenced perceived healthy eating among Portuguese and Chinese respondents. Spending more time at home might have resulted in respondents following healthy eating behaviours. Lusk (36) reported that preservation and freshness are indicators of the perceived healthiness of food for consumers, and frozen fruits and vegetables were considered less healthy than fresh ones but healthier than canned food.
Our study observed a negative relationship between freezing food and healthy food habits among the Portuguese. This might suggest that the Portuguese preferred to consume fresh foods instead of freezing them for preservation. Bulk preservation of fresh fruits and vegetables is positively associated with healthy eating habits across the three countries in the study. This might suggest that respondents preferred bulk buying fresh fruits and vegetables to visiting grocery shops less often. Many studies reported a decrease in shopping frequency during the pandemic (3,39) . Further, fresh fruits and vegetables are defined as healthy by health authorities (41) but also perceived as healthy by consumers (42) .
This study integrated the disciplines of tourism and hospitality, marketing, communication, and food science (nutrition) to investigate how COVID-19 has affected people's food habits across Turkey, Portugal and China. Residents generally shifted to online shopping and delivery services during the pandemic. Insights on food preservation and preparation offer a clearer glimpse into people's eating habits during the pandemic. Understanding changes in food habits and healthy eating may aid marketers in helping customers adapt to a 'new normal' around food-related services (e.g. broader or more intuitive options for online food shopping and delivery; virtual cooking classes to expand one's culinary repertoire; voluntary training in food preservation methods). This study presents timely empirical evidence to assist policymakers and relevant industry practitioners in coping with events such as pandemics based on individuals' needs and expectations in different countries. Results also stress the role of national culture in food consumption; associated nuances should be taken into account in policy formulation and practice.
This study has several limitations. First, we assessed habit-based changes in three countries. Although the sample contained a heterogeneous group based on respondents' geographical distribution, the survey was only distributed to people with access to a computer or smartphone. Respondents were also limited to educated residents of urban cities; those in rural areas and below 18 years of age were excluded. Furthermore, the samples by country are not homogeneous, with the Chinese sample comprising more young people than the Portuguese and Turkish samples. As a result, the findings cannot be generalised to other populations in these countries. Future studies should extend this research to a more global level.
Moreover, because changes in healthy eating behaviour constitute a long-term process, longitudinal studies could thoroughly reveal the impacts of the pandemic across individuals' broader life contexts. Furthermore, in line with Wen et al.'s (43) call for interdisciplinary social science research on the pandemic's impacts on specific industries and populations, more studies should explore food-related topics. One avenue to consider involves the relationship between food and public health, particularly as COVID-19 remains a global health concern.
Wilms' tumour with spinal cord involvement
Spinal cord involvement of Wilms' tumour is rare. A 14-year-old girl presented with an abdominal mass, paraplegia and loss of bladder and bowel control. Radiological investigations confirmed the presence of a large intraabdominal mass with infiltration into the spinal canal with impingement of nerve roots and the spinal cord. Histopathological evaluation demonstrated a nephroblastoma. It was decided to commence prompt neoadjuvant chemotherapy to render the tumour amenable to surgical resection. The patient unfortunately demised before receiving her first dose. Early diagnosis and timeous initiation of treatment is critical in limiting morbidity and mortality associated with malignant spinal cord compression.
Introduction
Wilms' tumour (WT) is the most common malignant renal tumour in children accounting for 6% of all neoplasms. Approximately 12% of cases are metastatic at presentation, with the lungs, lymph nodes, liver, and bone being the most common sites of metastases. Spinal cord involvement of WT is exceedingly rare, and we report a rare case and review the relevant literature.
Case presentation
A 14-year-old girl with no prior medical history was brought into our unit with a 6-month history of abdominal discomfort and paraplegia. She had first noticed a fullness in the abdomen six months earlier, which had progressively worsened. In addition, she reported weakness of her lower limbs, which had progressed over the last month to complete paraplegia with loss of bladder and bowel control. On physical examination she appeared chronically ill, wasted, pale and severely malnourished. A large, firm, painless mass was palpable in the left upper quadrant and the left flank. She had flaccid paraplegia with 0/5 power in both legs and absent tendon reflexes. Contrast-enhanced CT of the abdomen and pelvis (Fig. 1) revealed a large cystic and partially solid mass (21 × 14.5 × 14 cm) occupying the left upper quadrant and left flank region, extending caudally into the pelvis and crossing the midline. No recognizable normal left renal tissue could be distinguished from this lesion (Fig. 1). In addition, multiple pulmonary metastases were identified. MRI (Fig. 2) revealed significant infiltration of the tumour into the spinal canal with significant impingement of numerous exiting nerve roots and of the spinal cord from T9-L1. An urgent percutaneous biopsy of the lesion was performed on admission.
Histopathological examination revealed a renal neoplasm comprising papillary structures covered by primitive, simple columnar epithelium (Fig. 3A). The tumoural cells contained scant apical eosinophilic cytoplasm and oval hyperchromatic nuclei with increased nuclear-to-cytoplasmic ratios (Fig. 3B). Nucleoli were inconspicuous. Some of the tissue cores had focal atrophic native renal parenchyma. No blastema or stromal elements were demonstrated. Anaplasia was not seen. Immunohistochemistry revealed strong and diffuse labelling with cytokeratin 8/18, PAX8 and WT-1 (Fig. 3C) and was negative for EMA, TFE3, HMB-45 and calretinin. The overall features were compatible with an epithelial component of a nephroblastoma.
A neurosurgery consult was of the opinion that the complete loss of cord function was likely irreversible, and surgical intervention was deferred. Further investigation by the social worker and community health care representative revealed that the child comes from impoverished and difficult circumstances. The mother, the primary caregiver, was an unemployed, elderly alcoholic. A diagnosis of stage IV WT was made. It was decided at the multidisciplinary team (MDT) meeting to initiate prompt neoadjuvant chemotherapy (NAC) with vincristine, actinomycin-d and doxorubicin in the hope of making the tumour more amenable to surgical resection. Unfortunately, the patient demised soon after this MDT meeting before the initiation of NAC. The mother declined a postmortem examination to confirm the cause of death.
Discussion
Spinal cord involvement in childhood malignancies ranges from 2.7 to 4%. Neuroblastomas, soft tissue sarcomas, osteogenic and Ewing sarcomas are the most common malignancies responsible. In WT, spinal cord involvement is infrequent. The exact mechanism accounting for spinal cord involvement is unclear. Direct extension of the tumour through the vertebral foramina and subsequent spinal cord compression was the most likely explanation in our case. Other plausible mechanisms of spinal cord involvement in WT include haematogenous dissemination through the collateral circulation of the paravertebral venous plexus, lymphatic spread through the vertebral foramen, and extension along the perineurium of the spinal nerves or skeletal metastasis to the vertebral body. 1,2 Back pain, lower limb weakness, sensory loss, sphincter, and autonomic dysfunction are the most frequently reported symptoms associated with spinal cord compression. 3 These clinical features must be promptly recognized and treatment initiated as soon as possible to allow any chance of neurological recovery.
In its typical appearance, WT is triphasic and consists of variable proportions of blastemal, stromal and epithelial cells. Our case showed only epithelial elements. The percutaneous biopsy sample may not have been entirely representative, or the tumour could have represented a monophasic (epithelial-predominant) WT, a rare histological variant. Examination of the excised tumour would have provided more representative sections to distinguish between the two and assist with prognostication using the Société Internationale d'Oncologie Pédiatrique (SIOP) and Children's Oncology Group (COG) schemas.
Although several guidelines may be used to manage WT, our unit adopts the SIOP guidelines, and NAC was preferred to render the tumour more operable. Due to the complete loss of cord function, our neurosurgery colleagues did not deem her a candidate for surgical decompression. Unfortunately, the patient demised before the commencement of NAC.
Black children of sub-Saharan African descent consistently show the highest incidence of WT globally at 11 cases per million. Due to contemporary advances in medicine and a multimodal treatment approach, including surgery, multiple-drug chemotherapy and radiotherapy, as well as the availability of standardized treatment guidelines from large multidisciplinary cooperative cancer groups, namely the COG and SIOP, the 5-year survival for patients with WT in the developed world is now more than 90% and is hailed as one of the greatest success stories in modern oncology. This is in stark contrast to the 5-year overall survival in sub-Saharan African nations, reported to be as low as 25%. 4 Unfortunately, our patient presented very late in the disease process due to multiple social and economic issues. While one must acknowledge the social, structural and cultural barriers responsible for this dismal overall survival in developing countries, a recent review by Apple and Lovvorn in 2020 suggested there may also be an underlying biological and molecular basis that accounts for this discrepancy. 5 More research needs to be done to understand Wilms' tumorigenesis in our setting.
Conclusion
Malignant spinal cord compression is associated with a poor prognosis and may result in permanent paralysis, sensory loss and sphincter dysfunction. Therefore, early diagnosis and timeous initiation of treatment protocols, including high dose corticosteroids, chemotherapy and surgical resection, are critical in limiting functional morbidity and mortality.
Conflicting interests
The authors declare no conflict of interest.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Ethical approval
No ethical approval is required by the institution for the publication of individual case reports.
Informed consent
Written informed consent was obtained from the patient's mother for the anonymised information and the accompanying images to be published in this article.
Contributorship
JJ and AA reviewed the literature and drafted the manuscript. All authors issued final approval for the version to be submitted for publication.
Seismic Response of Underground Lifeline Systems
This paper presents and discusses recent developments related to the seismic performance and assessment of buried pipelines. Experience from the performance of pipelines during recent earthquakes provided invaluable information and led to new developments in analysis and technology. In particular, pipeline performance during the Canterbury earthquake sequence in New Zealand is taken as a case study here. The data collected for the earthquake sequence are unprecedented in size and detail, involving ground motion recordings from scores of seismograph stations, high resolution light detection and ranging (LiDAR) measurements of vertical and lateral movements after each event, and detailed repair records for thousands of km of underground pipelines with coordinates for the location of each repair. One important lesson from the recent earthquakes is that some earthquake resistant designs and technologies proved effective. This provides motivation to increase international exchange and cooperation on earthquake resistant technologies. Another observation is that preventive maintenance is important to reduce the pipeline damage risk from seismic and other hazards. To increase applicability and sustainability, seismic improvements should be incorporated into pipe replacement and asset management programs as part of the preventive maintenance concept. However, it is also important to install the most appropriate pipeline from the start, as replacing or retrofitting pipelines later requires substantial investment. In this respect, seismic considerations should be properly taken into account in the design phase.
Introduction
Observations from recent earthquakes provided opportunities to evaluate pipeline performance with respect to pipeline properties, soil conditions and different levels of loading. Earthquake damage to buried pipelines can be attributed to transient ground deformation (TGD), to permanent ground deformation (PGD), or both. TGD occurs as a result of seismic waves and is often referred to as the wave propagation or ground shaking effect. PGD occurs as a result of surface faulting, liquefaction, landslides and differential settlement from consolidation of cohesionless soil. The effect of earthquake loading on pipelines can be expressed in terms of axial and flexural deformations. At locations where the pipeline is relatively weak, for example because of corrosion, breaks and/or cracks may be observed. If deformations are high, the damage can take the form of joint separation, wrinkling, buckling and tearing of pipelines.
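As a concrete illustration of the TGD loading path, a common first-order estimate attributes the peak axial ground strain from wave propagation to the ratio of peak ground velocity to the apparent propagation velocity of the governing wave (Newmark's simplified relation). The short Python sketch below applies this estimate; the numerical values are illustrative assumptions, not measurements from any of the earthquakes discussed here.

import numpy as np

# Newmark's simplified relation: peak ground strain = PGV / C,
# where C is the apparent propagation velocity of the governing wave.
pgv = 0.5            # hypothetical peak ground velocity (m/s)
c_s = 2000.0         # assumed apparent S-wave velocity (m/s)
c_r = 500.0          # assumed Rayleigh-wave phase velocity (m/s)

strain_s = pgv / c_s  # ground strain if S waves govern
strain_r = pgv / c_r  # ground strain if R waves govern
print(f"S-wave ground strain: {strain_s:.2e}")
print(f"R-wave ground strain: {strain_r:.2e}")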
Many studies have evaluated the effects of earthquakes on buried pipeline systems (Chen et al. 2002; Tromans et al. 2004; Hwang et al. 2004; Scawthorn et al. 2006; Yifan et al. 2008). A comprehensive study of a very large pipeline system can be found in O'Rourke and Toprak (1997) and Toprak (1998), which assess the Los Angeles water supply damage caused by the 1994 Northridge earthquake. A more recent example can be found in Toprak et al. (2014) and O'Rourke et al. (2012, 2014) regarding pipeline performance during the Canterbury earthquake sequence in New Zealand. Following the 7.1 Mw Sept. 4, 2010 Darfield earthquake, thousands of aftershocks with Mw as high as 6.2 were recorded in the area of Christchurch, NZ. These earthquakes, termed the Canterbury earthquake sequence, are unprecedented in terms of repeated earthquake shocks with substantial levels of ground motion affecting a major city with modern infrastructure. Furthermore, the earthquakes were accompanied by multiple episodes of widespread and severe liquefaction, with large PGD levels imposed on underground lifelines during each event. The data collected for the earthquake sequence are likewise unprecedented in size and detail, involving ground motion recordings from scores of seismograph stations, high resolution light detection and ranging (LiDAR) measurements of vertical and lateral movements after each event, and detailed repair records for thousands of km of underground pipelines with coordinates for the location of each repair.
One of the most critical lessons of the recent earthquakes is the need for seismic planning for lifelines, with appropriate supplies and backup systems for emergency repair and restoration. Seismic planning, however, requires physical loss estimation before earthquakes occur. Methodologies for estimating potential pipeline damage use relationships variously called "fragility curves", "damage functions", "vulnerability functions" or "damage relationships". These relationships are primarily empirical and obtained from past earthquakes. Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. An extensive review of past pipeline damage relationships, primarily for ground shaking (transient ground deformations), can be found in Toprak (1998), Toprak and Taşkın (2007) and Pineda-Porras and Najafi (2010). The Northridge earthquake, in particular, was an important event for a leap in the development of pipeline damage relationships. The substantial earthquake damage in the City of Los Angeles water supply system and the availability of strong motion instruments throughout the area provided a unique opportunity to develop and improve damage correlations. The extensive database required the use of geographical information systems (GIS) in the assessments. Using this database, the Toprak (1998) and O'Rourke et al. (1998) relationships were developed primarily from cast iron (CI) pipeline damage, although limited comparisons were made with damage for other pipe types. O'Rourke and Jeon (1999, 2000) went one step further and developed separate relationships for CI, ductile iron (DI), asbestos cement (AC) and steel pipelines. They also developed relationships that use pipe diameter (Dp) and PGV together. Trifunac and Todorovska (1997) developed pipeline damage relationships using the 1994 Northridge earthquake data; their relationships relate the average number of water pipe breaks per km² to the peak strain in the soil or intensity of shaking at the site. The American Lifelines Alliance (2001) project combined data from 12 US, Japanese and Mexican earthquakes and developed relationships for wave propagation damage. O'Rourke and Deyoe (2004) investigated why there is a significant difference between the HAZUS relationship and the other relationships developed after the 1994 Northridge earthquake. They concluded that the most significant difference between the data sets is seismic wave type. When plotted as repair rate versus ground strain, the scatter of data points from the Mexico and other earthquakes reduces substantially. In terms of PGV, they introduced two different relationships, one for R waves and the other for S waves. Most recently, O'Rourke et al. (2012, 2014) concluded that the Christchurch data for RR vs. PGV follow the trends for AC and CI pipelines observed in previous earthquakes. The data and linear regressions are shown in Fig. 10.1. It is important to include new data as they become available after earthquakes in order to develop more robust regressions for future fragility analyses of lifeline earthquake performance. Continuous service of lifeline systems such as drinking water and natural gas pipelines, or the rapid restoration of their functionality after an earthquake, is crucial for urban societies.
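To illustrate how such repair rate (RR) relationships are typically derived, the sketch below fits the common power-law form RR = a * PGV^b by linear regression in log-log space. The data points are invented for demonstration and are not the Northridge or Christchurch observations, so the fitted coefficients carry no physical meaning.

import numpy as np

pgv = np.array([10.0, 20.0, 30.0, 50.0, 80.0])  # hypothetical PGV bins (cm/s)
rr = np.array([0.02, 0.05, 0.09, 0.18, 0.35])   # hypothetical repairs per km

# Least-squares fit of log(RR) = log(a) + b * log(PGV).
b, log_a = np.polyfit(np.log(pgv), np.log(rr), 1)
a = np.exp(log_a)
print(f"RR = {a:.4f} * PGV^{b:.2f}")

# Predicted repair rate at a 40 cm/s shaking level.
print(f"RR(40 cm/s) = {a * 40.0**b:.3f} repairs/km")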
It was observed in past earthquakes that pipeline damage density was much higher at locations where permanent ground deformations (PGD) occurred. Hence, this paper deals especially with the evaluation of PGD effects. PGD occurs as a result of surface faulting, liquefaction, landslides, and differential settlement from consolidation of cohesionless soils. It is important for utility companies to evaluate their existing systems against PGD effects as well as to design their new systems to resist these effects. This paper presents recent developments in the assessment of PGD effects on pipelines.
Pipeline Properties and Preventive Maintenance
The performance of pipelines in past earthquakes showed that pipe material and joint type are important for the response to earthquake loading. The pipe composition of pipeline systems may differ between cities and countries. Comparisons of water distribution networks in various countries show that pipe compositions (including joint types) differ significantly from country to country. The history and development of water supply systems in the urban areas of a country shape its existing pipe composition. For example, the main types of buried water pipes in Japan are ductile cast iron pipes (DIP), grey cast iron pipes (CIP), steel pipes (SP), polyethylene pipes (PE), polyvinyl chloride pipes (PVC), and asbestos cement pipes (ACP). Ductile cast iron pipes account for 60 % of the total length of buried water pipes (Miyajima 2012). Asbestos cement pipes, in particular, are well known for their high damage rates during earthquakes. Figure 10.2 shows some typical joint types in Japanese water distribution systems. These joints were primarily used in pipelines greater than 300 mm in diameter (Eidinger 1998). Table 10.1 provides properties of the seismic joints. Types "S" and "S-II" are special earthquake resistant joints, whereas type K is a mechanical joint. Type "S" joints have 2-4 cm of flexibility (500-2,600 mm diameter) and type "S-II" joints have 5-7 cm of flexibility (100-450 mm diameter). Type "S" joints were used until 1980 and type "S-II" joints have been used since 1980. During the 1995 Kobe earthquake, the performance of type "S" joints was average, whereas the performance of type "S-II" joints was very good. Type K joints did not perform well. A more recent earthquake resistant joint ductile iron pipe (ERDIP) performed very well in recent earthquakes and was selected by the Los Angeles Department of Water and Power (LADWP) for pilot applications in the USA (Davis 2012). The purpose of the pilot project is to allow the LADWP to become acquainted with the ERDIP, to obtain direct observations and experience of the design and installation procedures, to compare the design and installation of ERDIP with pipes normally installed by LADWP, and to make its own assessment of the suitability of using the ERDIP to improve network reliability (Miyajima 2012; Davis 2012). It is important to install the most appropriate pipeline from the start, as replacing or retrofitting pipelines later requires substantial investment. Sufficient consideration should be given to pipe materials and joints from the life expectancy and hazards points of view. Buried pipes of distribution systems wear over time because of temperature, soil moisture, corrosion, and other aging effects. For example, the aging of pipes in a water distribution system may have three main consequences. First, aging of the pipe material causes a decrease in the strength of the pipe; pipe breaks then increase in the high pressure areas of the system. Second, aging of a pipe increases its friction coefficient, so the energy loss in that pipe rises; this increases pumping costs, and sometimes a gravity-fed system comes to need pumping. Finally, aging of pipes affects the water quality in the system and may cause discolored water. Aging of a pipe is unavoidable, but the process may be delayed by some precautions. Cathodic protection for steel pipes and lining and coating for steel and ductile iron pipes are some anti-aging techniques.
Table 10.1 Joint types in Japanese water distribution systems:
- Type A: A rectangular rubber gasket is placed around the socket and the joint bolts are tightened with a gland.
- Type T: A rubber gasket is placed around the socket and the spigot is inserted into the socket.
- Type K: A modified version of Type A, with a single gasket in which a rectangular ring and a round ring are combined.
- Type S, Type S-II: A rubber gasket and a lock ring are placed around the socket and the spigot is inserted into the socket. The joint has good earthquake resistance, with high elasticity and flexibility and a disengagement prevention mechanism.
- Type NS: Same earthquake resistance as Type S but easier to use.

In the design phase of a water distribution system, analyzing the temperature changes in the area, the pressure values of the system, and the chemical components of the soil and ground water helps in selecting a long-life pipe material and a suitable burial depth. Most public water utilities use the concept of "maintenance only when a breakdown occurs". In recent years, however, the "preventive maintenance" and "proactive management" concepts have been gaining traction. The logic behind preventive maintenance (PM) is that it costs far less to regularly schedule downtime and maintenance than it does to operate the network until a breakdown occurs, at which point repair or replacement is imperative. The primary goal of PM is thus to prevent failures of network components before they actually occur, by using advanced methods of statistical and risk analysis. The consequences of "maintenance on the run" are unreliable service, customer dissatisfaction, and significant losses of valuable water resources due to leakage or pipe rupture. To take full advantage of PM, utilities must have an accurate topological image of the network, the age and type of materials used in its various branches, and past maintenance records.
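As an illustration of the statistical side of PM, the sketch below screens a small set of pipe records with the classical exponential break-rate growth model N(t) = N(t0) exp(A (t - t0)), often attributed to Shamir and Howard (1979). All pipe records, coefficients, and the replacement threshold are hypothetical.

```python
# A minimal PM screening sketch using the exponential break-rate model
# N(t) = N0 * exp(A * (t - t0)). All records below are hypothetical.
import math

pipes = [  # (id, N0 breaks/km/yr at install, install year, growth coeff A per yr)
    ("CI-103", 0.05, 1965, 0.08),
    ("AC-221", 0.04, 1978, 0.10),
    ("DI-310", 0.02, 1995, 0.03),
]

THRESHOLD = 0.4  # breaks/km/yr above which proactive replacement is flagged
year = 2015

for pid, n0, t0, A in pipes:
    rate = n0 * math.exp(A * (year - t0))  # projected current break rate
    action = "REPLACE" if rate > THRESHOLD else "monitor"
    print(f"{pid}: projected {rate:.2f} breaks/km/yr -> {action}")
```

A ranking of this kind, combined with repair records and network topology, is the kind of input a utility needs to schedule replacement before failures accumulate.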
An interesting project on this topic was presented by Tsakiris et al. (2011) and Toprak et al. (2012). It is a European project under the Leonardo da Vinci program, entitled "Preventive Maintenance for Water Utility Networks (PM4WAT)". The project consortium was composed of seven organizations from four European countries, all Mediterranean, which face similar problems with water resources and distribution (Toprak and Koç 2013). Some of these countries have old and non-homogeneous networks that are subject to ageing, massive water losses, seismic activity, and other natural hazards. The consortium includes universities and research institutions, an ICT organization, VET providers, and urban utility networks, selected with a view to their knowledge and experience. In particular, the project objectives are: to transfer the state of the art on preventive maintenance methodologies and practices from domain experts in the participating countries to personnel working in urban water utilities; to develop a training simulation (TS) platform that helps trainees estimate the reliability of a network and examine various "what-if" scenarios; to provide training on pro-active rehabilitation and on the effects of natural hazards; and to develop courseware for web-based and off-line training on preventive maintenance of urban utility networks, made available in the four languages of the participating countries (English, Greek, Italian and Turkish).
The training simulator of the PM4WAT project is based on the Fifth Framework project SEISLINES (Age-Variant Seismic Structural Reliability of Existing Underground Water Pipelines), which was carried out between 2000 and 2002 (Becker et al. 2002; Camarinopoulos et al. 2001). The SEISLINES product was re-designed and adapted for the purposes of the PM4WAT project. The training simulator uses real geographical information on the topology of the water utility networks as well as real data on the properties of the elements in the branches of the network. The simulator considers four intermittent loads (surge pressure, frost, seismic, and thermal) and four permanent loads (earth, water, traffic, and working pressure) (Camarinopoulos et al. 2001). The original SEISLINES software was thoroughly revised with a view to simplifying the sequence of steps needed to view the water network, select the critical points at which reliability will be estimated, and finally display the results. The final product features a user-friendly wizard that guides the user, plus additional functionality such as exporting the archived reliability and rehabilitation results to Excel or text files for further investigation and analysis (Fig. 10.3).
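The reliability estimate at a critical point can be illustrated with a simple stress-versus-strength Monte Carlo in the spirit of the simulator's load model. The distributions, load magnitudes, and degradation factor below are hypothetical stand-ins, not the actual SEISLINES formulations (frost and thermal loads are omitted for brevity).

```python
# A schematic Monte Carlo reliability estimate for one pipe section under
# combined permanent and intermittent loads; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Permanent loads on the pipe wall (MPa): earth, water, traffic, working pressure
permanent = (rng.normal(8, 1.0, n) + rng.normal(3, 0.5, n)
             + rng.normal(2, 0.5, n) + rng.normal(10, 1.5, n))

# Intermittent loads: surge occurs in ~10% of samples, seismic in ~2%
surge = rng.binomial(1, 0.10, n) * rng.normal(6, 2.0, n)
seismic = rng.binomial(1, 0.02, n) * rng.lognormal(2.0, 0.6, n)
demand = permanent + surge + seismic

# Age-degraded capacity: nominal strength reduced 10% by assumed corrosion
capacity = rng.normal(45, 5.0, n) * 0.9

reliability = np.mean(demand < capacity)
print(f"estimated reliability: {reliability:.4f}")
```

Repeating such an estimate at many critical points, with age-dependent capacity, is what lets a simulator rank network branches for rehabilitation.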
A good example of a replacement program was applied in Denizli, Turkey. In 2003, Denizli Municipality evaluated the water balance of Denizli City. The water balance was prepared as part of a project supported by the World Bank, according to the IWA/AWWA methodology (Denizli City Water Works 2005). The results showed about 43 % non-revenue water, with physical losses amounting to 36 %. Because of these relatively high physical losses, water quality issues, and seismicity considerations, Denizli Municipality decided to speed up its pipe replacement efforts. Pipeline repair logs and customer complaints pointed especially to the pipelines located in the central part of the city. A comprehensive evaluation of the system, following the elements of a distribution integrity management program (DIMP) plan, showed that any replacement should start from the central part of the city. The replacement program started in 2008, with ductile iron selected as the pipe material. The program is still continuing, but in the first few years primarily the pipelines in liquefaction-prone areas (e.g., Toprak et al. 2009) were renewed. Contractors obtained ductile iron pipes and their fittings mainly from two sources: Samsun Makina Sanayi Inc. from Turkey and the Saint-Gobain Group from France (Fig. 10.4a, b, respectively). Samsun Makina Sanayi Inc. produces special earthquake resistant connections in order to avoid deformation of the socket and pipe end. The socket parts of those pipes are manufactured with "long standard-type sockets", which have a longer design length than the sockets of standard pipes; inside the socket, a standard-type gasket is used together with a rubber-backed steel ring, which prevents the pipe from slipping out of the socket. A groove opened at the end of the pipe prevents the pipe from displacing by engaging the steel ring. According to Samsun Makina Sanayi Inc., the earthquake resistant connection conforms to the values given in ISO 16134:2006 (E) (Samsun Makina 2014):
• Expansion/contraction performance: Class S-1, ±1 % of L (L is the pipe length, usually 6 m)
• Slip-out resistance: Class A, ≥ 3D kN (D is the nominal diameter of the pipe)
• Joint deflection angle: Class M-2, 7.5° to <15°.

BLUTOP is the patented name of the Saint-Gobain PAM Group ductile iron pipes, which are designed to withstand a particularly high angular deviation of 6°. The enhanced jointing depth also decreases the risk of pipe dislocation. As a result, BLUTOP® offers excellent performance in soil subject to ground movements (Saint-Gobain-PAM 2014).
Field Observations of Pipeline Damage and Ground Deformations
Among the most notable research accomplishments of the past quarter century is the work of Hamada and coworkers (Hamada et al. 1986; Hamada and O'Rourke 1992) in the use of stereo-pair air photos taken before and after an earthquake to perform photogrammetric analysis of large ground deformation. This process has influenced the way engineers evaluate soil displacements by providing a global view of deformation that allows patterns of distortion to be quantified and related to geologic and topographic characteristics. There are several examples where air photo measurements were used in pipeline damage assessment (e.g., Sano et al. 1999). In recent years, light detection and ranging (LiDAR) data have been used to detect ground displacement hazards to pipeline systems. Stewart et al. (2009) investigated the use of multi-epoch airborne and terrestrial LiDAR to detect and measure ground displacements of sufficient magnitude to damage buried pipelines and other water system facilities, as might result, for example, from earthquake- or rainfall-induced landslides. They concluded that the observed LiDAR bias and standard deviations enable reliable detection of damaging ground displacements for some pipeline types. Toprak et al. (2014) evaluated pipeline damage using ground displacements from air photo and LiDAR measurements and compared the two. High resolution LiDAR data were available through the Canterbury Earthquake Recovery Authority (CERA). Horizontal and vertical displacements were also available from stereo-pair air photos taken before and after the earthquakes, used to perform photogrammetric analysis of large ground deformations around the Avonside area in Christchurch, NZ. The Avonside area was in the liquefaction zone.
Geospatial data in the form of GIS maps of the Christchurch water and wastewater distribution systems, locations of pipeline repairs, and areas of observed liquefaction effects were integrated into a master GIS file. For the water supply systems, the Toprak et al. (2014) study focuses on damage to water mains, which are pipelines with diameters typically between 75 and 600 mm, conveying the largest flows in the system. It does not include repairs to smaller diameter submains and customer service laterals. The database was presented in detail and discussed in O'Rourke et al. (2012). Figure 10.5 shows the water pipelines and repair locations in the Avonside area, together with the air photo and LiDAR horizontal displacements. Measurements of lateral movement derived from the LiDAR surveys are provided as displacements in the east-west (EW) and north-south (NS) directions at 56-m intervals (CERA 2012). Horizontal displacements from air photo measurements are provided at 680 locations. Benchmark displacement measurements also exist in the Christchurch area for the period after the Canterbury earthquake sequence. The Canterbury Geotechnical Database (CGD) provides about 403 benchmarks and their movement relative to the earliest survey values after the three big earthquakes. These data consist of information from Land Information New Zealand (LINZ 2014), the Christchurch City Council, the Earthquake Commission (EQC), and CERA. Of the 403 benchmarks, 25 lie in the Avonside area and were used in comparisons with the LiDAR and air photo displacements.
For the purpose of horizontal strain calculations, the horizontal displacement data points are considered as corners of square elements. The grid with square elements may be regarded as a finite element mesh with bilinear quadrilateral elements. Knowing the coordinates of each corner and the corresponding displacement, the strains in the EW and NS directions (ε_x and ε_y, respectively) and the shear strains (γ_xy) can be calculated by computing the spatial derivatives of displacements using linear interpolation. Accordingly, finite element formulations were used to determine horizontal ground strains at the center of the elements, following the method described by Cook (1995). Pipeline repair rates (RRs), in repairs/km, corresponding to different strain levels were calculated from air photo and LiDAR lateral movement measurements. Because RR represents damage normalized by available pipe length, RRs are a good indicator of relative vulnerability (Toprak et al. 2009). The r-squared values for the correlation between pipeline damage and lateral ground strains from LiDAR are higher than those for the correlation from air photos, indicating a stronger correlation. The difference between the regressions is not significant for lower strains and almost identical for higher strain values. One of the most recent developments in pipeline damage correlations is to include the combined effects of horizontal ground strain and angular distortion. O'Rourke et al. (2012, 2014) developed such correlations for the 22 Feb. 2011 earthquake. This concept is used frequently in the evaluation of building damage caused by ground deformation from deep excavations and tunnelling. A figure correlating the severity of building damage with horizontal strain and angular distortion was developed by Boscardin and Cording (1989) from field measurements and observations at actual buildings, combined with the results of analytical models of building response to ground movements. This approach is used extensively to predict and plan for the effects of ground deformation on surface structures. Angular distortion, β, is defined as the differential vertical movement between two adjacent LiDAR points (dv1 − dv2) divided by the horizontal distance, l, separating them, such that β = (dv1 − dv2)/l. It is used in this work to evaluate the effects of differential vertical movement on pipeline damage. There are several advantages associated with this parameter. First, it is dimensionless, and thus can be scaled to the dimensions appropriate for future applications. Second, by subtracting the vertical movements of two adjacent points, one eliminates some systematic errors associated with the LiDAR elevation surfaces. Finally, angular distortion is a parameter used widely and successfully in geotechnical engineering to evaluate the effects of ground deformation on buildings (e.g., Boscardin and Cording 1989; Clough and O'Rourke 1990). The angular distortion for each 5-m cell associated with the LiDAR measurements was calculated in the GIS analysis with the third order finite difference method proposed by Horn (1981). Correlations of RR for different pipe types vs. β are shown in Fig. 10.6a.
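A simplified version of this geoprocessing step is sketched below: horizontal strains are obtained as spatial derivatives of the gridded lateral displacements, and angular distortion as the difference of adjacent vertical displacements over the grid spacing. For brevity it uses plain central differences (np.gradient) instead of the bilinear finite-element formulation of Cook (1995) and Horn's (1981) third-order method used in the actual study, and the displacement grids are synthetic.

```python
# A simplified sketch of the strain and angular-distortion computation
# from gridded displacement surfaces; the input grids are synthetic.
import numpy as np

h = 56.0  # grid spacing in metres (LiDAR lateral-displacement grid)
# Hypothetical displacement grids (m): east-west, north-south, vertical
ux = np.random.default_rng(0).normal(0, 0.3, (20, 20))
uy = np.random.default_rng(1).normal(0, 0.3, (20, 20))
uz = np.random.default_rng(2).normal(0, 0.2, (20, 20))

# Horizontal strains: eps_x = du_x/dx, eps_y = du_y/dy, gamma_xy = du_x/dy + du_y/dx
# (rows are taken as the y direction, columns as the x direction)
dux_dy, dux_dx = np.gradient(ux, h)
duy_dy, duy_dx = np.gradient(uy, h)
eps_x = dux_dx
eps_y = duy_dy
gamma_xy = dux_dy + duy_dx

# Angular distortion beta = (dv1 - dv2)/l between laterally adjacent points
beta = np.diff(uz, axis=1) / h

print("max |eps_x| =", np.abs(eps_x).max())
print("max |beta|  =", np.abs(beta).max())
```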
Horizontal strain calculations (ε_HP) were performed according to the approach described above for the Avonside area. Correlations of RR for different pipe types vs. ε_HP are shown in Fig. 10.6b. Figure 10.7 provides the framework for predicting RR for AC and CI pipelines under the combined effects of lateral strain and differential vertical ground movement. The correlation was performed by counting repairs and lengths of AC and CI pipelines associated with ε_HP and β intervals of 1 × 10⁻³. This type of chart expands on the correlations generally used for buried pipeline fragility characterization to provide a more comprehensive treatment of ground deformation effects. Moreover, it provides a unified framework for predicting PGD effects on both buildings and underground lifelines.
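The construction of such a combined-parameter chart reduces to a two-dimensional binning exercise: accumulate repairs and pipe lengths per (ε_HP, β) cell and divide. A schematic version with synthetic segment data is sketched below.

```python
# A schematic combined-parameter correlation: bin pipeline segments by
# horizontal strain and angular distortion, then compute repairs per km
# in each cell. All arrays are synthetic, not the Christchurch data.
import numpy as np

rng = np.random.default_rng(3)
n_seg = 5000
eps = rng.uniform(0, 8e-3, n_seg)     # horizontal ground strain per segment
beta = rng.uniform(0, 8e-3, n_seg)    # angular distortion per segment
length_km = np.full(n_seg, 0.05)      # 50-m pipe segments
repairs = rng.poisson(20 * (eps + beta) * length_km)  # toy damage model

bins = np.arange(0, 9e-3, 1e-3)       # intervals of 1e-3, as in the study
i = np.digitize(eps, bins) - 1
j = np.digitize(beta, bins) - 1

rep_grid = np.zeros((len(bins), len(bins)))
len_grid = np.zeros_like(rep_grid)
np.add.at(rep_grid, (i, j), repairs)
np.add.at(len_grid, (i, j), length_km)

rr = np.divide(rep_grid, len_grid,
               out=np.full_like(rep_grid, np.nan), where=len_grid > 0)
print(rr[:4, :4])   # repairs/km in the lowest strain/distortion cells
```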
Pipelines and Fault Crossings
Many water, natural gas, and oil pipelines must cross active faults. Fault slip can be strike, reverse, or normal. When reverse and normal faulting involve significant components of strike slip, the resulting movement is referred to as oblique slip. Reverse and normal faults tend to promote compression and tension, respectively, in underground pipelines. Strike slip may induce compression or tension, depending on the angle of intersection between the fault and the pipeline. The angle of pipeline-fault intersection is a critical factor affecting the pipeline's performance. Two applications of pipelines crossing fault zones are presented below: one above ground and the other underground. One of the recent pipeline construction projects which had to take seismic considerations into account is the Sakhalin 2 Pipeline Project, one of the largest integrated oil and gas developments in the world. Twin oil (20 and 24 in.) and gas (20 and 48 in.) pipeline systems stretching 800 km were constructed to connect offshore hydrocarbon deposits of the Sakhalin II concession in the north to an LNG plant and oil export terminal in the south of Sakhalin island. The onshore pipeline route follows a regional fault zone and crosses individual active faults at 19 locations (Mattiozzi and Strom 2008; Vitali and Mattiozzi 2014; Vitali 2014). A two-tier approach was adopted in the design: (1) The pipeline shall withstand the "Safe Level Earthquake" (SLE) without, or with only minimal, interruption of normal operation for any extensive repairs; the return period for the SLE event shall be 200 years. (2) The pipeline shall survive the "Design Level Earthquake" (DLE) without rupturing; extensive damage could occur to the pipeline, interrupting operation and requiring repair at one or more locations, but no leakage; the return period for the DLE event shall be 1,000 years. Table 10.2 shows the design requirements for the buried pipelines.
Table 10.2 Design requirements for the buried pipelines (Mattiozzi and Strom 2008; Vitali 2014):
- Collapse in compression/wrinkling: ε_ac/ε_w ≤ 0.80 (SLE); ε_ac/ε_w ≤ 1.0 (DLE)
- Weld fracture: ε_at ≤ 0.02 (2.0 %) and σ_w/σ_y ≥ 1.25 (SLE); ε_at ≤ 0.04 and σ_w/σ_y ≥ 1.25 (DLE)
- Upheaval buckling: H_f/H_st ≥ 1.10 (SLE); no requirement (DLE)

Notation: ε_b bending strain, ε_M maximum strain at peak moment in the moment vs. strain curve, ε_ac net compressive strain due to axial load, ε_w compressive strain at which wrinkling occurs, ε_at tensile strain in the pipe, σ_w minimum yield strength of the weld/heat affected area, σ_y specified minimum yield strength of the pipe, H_f actual burial depth, H_st burial depth needed for stability.

For the fault crossings in the Sakhalin Project, special trenches were considered in order to ensure the safety of the pipelines subject to the design earthquake. The trench geometry and the nature of the backfilling were adapted to the results of the stress analysis. Different trench types and backfill materials were utilized along the pipeline route: "draining trenches" at 2 fault crossings, "waterproof trenches" at 13 fault crossings, and "waterproof trenches in embankment" at 4 fault crossings (Fig. 10.9). To avoid freezing, two important factors were controlled inside the trench: (a) absence of water; (b) thermal equilibrium. The first aspect is controlled with the construction of either waterproof or free-draining trenches; the second is controlled with the installation of insulating slabs over the pipelines, within the trench. In order to minimize the types and dimensions of special trenches, two trench geometries were adopted for each fault crossing: (a) narrow trench; (b) enlarged trench. Also, for the trench backfill material, two solutions were proposed: (a) clean sand backfill; (b) light backfill material (LBM).
Conclusions
In this paper, recent developments related to the assessment of the seismic performance of buried pipelines were presented. The experience from the performance of pipelines during recent earthquakes has provided invaluable information and led to new developments in the analysis. Some earthquake-resistant designs and technologies proved to work well in those earthquakes. This provides a motivation to increase international exchange and cooperation on earthquake-resistant technologies. Another observation is that pipeline monitoring and mitigation studies are important to reduce the pipeline damage risk from seismic and other hazards. To increase applicability and sustainability, seismic improvements should be incorporated into pipe replacement and asset management programs. However, it is also important to install the most appropriate pipeline from the start, as replacing or retrofitting pipelines later requires substantial investment. In this respect, seismic considerations should be properly taken into account in the design phase. Sufficient consideration should be given to pipe materials, joints, and soil-pipe interaction from the life expectancy and hazards points of view.
On the relativistic precession and oscillation frequencies of test particles around rapidly rotating compact stars
Whether analytic exact vacuum (electrovacuum) solutions of the Einstein (Einstein-Maxwell) field equations can or cannot accurately describe the exterior spacetime of compact stars remains an interesting open question in relativistic astrophysics. As an attempt to establish their level of accuracy, the radii of the innermost stable circular orbits (ISCOs) of test particles given by analytic exterior spacetime geometries have been compared with the ones given by numerical solutions for neutron stars (NSs) obeying a realistic equation of state (EoS). It has thus been shown that the six-parametric solution of Pachón, Rueda, and Sanabria (2006) (hereafter PRS) is more accurate in describing the NS ISCO radii than other analytic models. We propose here an additional test of accuracy for analytic exterior geometries, based on the comparison of orbital frequencies of neutral test particles. We compute the Keplerian, frame-dragging, as well as the precession and oscillation frequencies of the radial and vertical motions of neutral test particles for the Kerr and PRS geometries; then we compare them with the numerical values obtained by Morsink and Stella (1999) for realistic NSs. We identify the role of high-order multipole moments, such as the mass quadrupole and current octupole, in the determination of the orbital frequencies, especially in the rapid rotation regime. The results of this work are relevant to cast a separatrix between black hole (BH) and NS signatures, as well as to probe the nuclear matter EoS and NS parameters from the quasi-periodic oscillations (QPOs) observed in low mass X-ray binaries.
INTRODUCTION
One of the greatest challenges of the general theory of relativity has been the construction of solutions to the Einstein-Maxwell field equations representing the gravitational field of compact stars such as neutron stars (NSs). Stationary axially symmetric spacetimes satisfy basic properties one expects for rotating objects, namely time symmetry and reflection symmetry with respect to the rotation axis. The simplest stationary axially symmetric exact exterior vacuum solution describing a rotating configuration is the well-known Kerr metric (Kerr 1963). The Kerr metric is fully described by two free parameters: the mass M and the angular momentum J of the object. However, it is known from numerical models that the quadrupole moment of rotating NSs deviates considerably from the one given by the Kerr solution, $Q_{\rm Kerr} = -J^2/(Mc^2)$ (see e.g. Laarakkers and Poisson 1999, for details).
In the meantime, a considerable number of analytic exterior solutions with a more complex multipolar structure than that of the Kerr solution have been developed (see e.g. Manko et al. 2000; Stephani et al. 2003). Whether analytic exterior solutions are accurate or not in describing the gravitational field of compact stars is an interesting and very active topic of research (see e.g. Stute and Camenzind 2002; Berti and Stergioulas 2004; Pachón et al. 2006, and references therein).
The accuracy of analytic solutions in describing the exterior geometry of a realistic rotating compact star has been tested by comparing physical properties, e.g. the radius of the innermost stable circular orbit (ISCO) on the equatorial plane and the gravitational redshift (see Sibgatullin and Sunyaev 1998; Berti and Stergioulas 2004; Pachón et al. 2006, for details). In order to make such a comparison, the free parameters (i.e. the lowest multipole moments) of the analytic exterior spacetime are fixed to the corresponding lowest multipole moments given by numerical interior solutions of the Einstein equations for realistic NS models (see e.g. Berti and Stergioulas 2004).
Following such a procedure, the solution of Manko et al. (2000) has been compared by Stute and Camenzind (2002) and by Berti and Stergioulas (2004) with the numerical solutions for NSs calculated by Cook et al. (1994) and with those derived by Berti and Stergioulas (2004), respectively. However, being a generalization of the solution of Tomimatsu and Sato (1972), it cannot describe slowly rotating compact stars (see e.g. Berti and Stergioulas 2004), although it can describe the dynamics of astrophysical objects with anisotropic stresses (see Dubeibe et al. 2007, for details).
Following a similar procedure, based on tests of the ISCO radii on the equatorial plane of the rotating neutron stars obtained by Berti and Stergioulas (2004), it has been shown that the six-parametric solution of Pachón et al. (2006) (hereafter PRS solution, see Sec. 2 for details) is more accurate than the model of Manko et al. (2000). In addition, being a generalization of the Kerr solution, this solution can be used for arbitrary rotation rates.
Besides the ISCO radii, there are additional physical properties that can be computed with analytic and numerical models and are thus useful to compare and contrast the accuracy of analytic exact models. The aim of this article is to analyze the properties of the orbital frequencies of neutral test particles in the PRS and Kerr geometries, with special focus on the Keplerian frequency $\nu_K$, the frame-dragging (Lense-Thirring) frequency $\nu_{LT}$, as well as the precession (oscillation) frequencies of the radial and vertical motions, $\nu^P_\rho$ ($\nu^{OS}_\rho$) and $\nu^P_z$ ($\nu^{OS}_z$), respectively. The relevance of these frequencies relies on the fact that they are often invoked to explain the quasi-periodic oscillations (QPOs) observed in some relativistic astrophysical systems such as low mass X-ray binaries (LMXBs), binary systems harboring either a NS or a black hole (BH) accreting matter from a companion star. For instance, within the Relativistic Precession Model (RPM) introduced by Stella and Vietri (1998), the kHz QPOs are interpreted as a direct manifestation of the modes of relativistic epicyclic motion of blobs arising at various radii r in the inner parts of the accretion disk around the compact object (see Sec. 6, for details).
In addition to the RPM, the Keplerian, precession, and oscillation frequencies are used in other theoretical QPO models (see e.g. Lin et al. 2011, for a recent comparison of the existing models). Due to the influence of general relativistic effects in the determination of such frequencies, an observational confirmation of any of the models might lead to an outstanding test of general relativity in the strong field regime. In this line, it is of interest to compare and contrast the orbital frequencies given by the Kerr solution and by the PRS solution (see Sec. 3), which helps to establish the differences between possible BH and NS signatures. We emphasize in this article the major role of the quadrupole moment as well as of the octupole moment of the object, whose possible measurement can be used as a tool to test the no-hair theorem of BHs (see e.g. Johannsen and Psaltis 2011) and to discriminate between the different theoretical models proposed to explain the physics of the interior and exterior of neutron stars. Additionally, in the case of NSs, the interpretation of QPOs as the manifestation of orbital motion frequencies might lead to crucial information on NS parameters such as the mass, angular momentum (see e.g. Stella and Vietri 1998), and quadrupole moment. These parameters reveal, at the same time, invaluable information about the EoS of nuclear matter. The article is organized as follows. In Sec. 2 we recall the properties of the PRS solution. The computation of the orbital frequencies, as well as the comparison of their features in the Kerr and PRS spacetimes, is presented in Sec. 3. In Sec. 4 we study the accuracy of the analytic formulas for the periastron and nodal frequencies derived by Ryan (1995) for stationary axially symmetric spacetimes. In Sections 5 and 6 we discuss the accuracy of the PRS solution in describing the frequencies of realistic NS models and its relevance to the Relativistic Precession Model, respectively. The conclusions of this work and a discussion of possible additional effects to be accounted for in the determination of the orbital frequencies, e.g. the effect of the magnetic dipole moment, are outlined in Sec. 7.
THE PRS ANALYTIC EXACT SOLUTION
We first recall the PRS analytic model for the exterior gravitational field of a compact object. In the stationary axisymmetric case, the simplest form of the metric can be written as (Papapetrou 1953)

$$ds^2 = -f\,(dt - \omega\, d\phi)^2 + f^{-1}\left[e^{2\gamma}\left(d\rho^2 + dz^2\right) + \rho^2 d\phi^2\right], \qquad (1)$$

where f, ω and γ are functions of the quasi-cylindrical Weyl coordinates (ρ, z); the components of the metric tensor $g_{\mu\nu}$ follow directly from this line element. Using the above line element, the Einstein-Maxwell equations can be reformulated, via Ernst's procedure, in terms of two complex potentials E(ρ, z) and Φ(ρ, z) (Ernst 1968a,b). By means of Sibgatullin's integral method (Sibgatullin 1991; Manko and Sibgatullin 1993), this system of equations can be solved via the axis data e(z) := E(z, ρ = 0) and f(z) := Φ(z, ρ = 0), where the unknown function µ(σ) must satisfy a singular integral equation together with a normalizing condition, with ξ = z + iρσ, η = z + iρτ, ρ and z the Weyl-Papapetrou quasi-cylindrical coordinates, σ, τ ∈ [−1, 1], and the overbar standing for complex conjugation. In Pachón et al. (2006), the Ernst potentials were chosen as given in Eq. (10). We calculate the multipole moments following the procedure of Hoenselaers and Perjes (1990). We denote the mass multipoles by $M_i$ and the current (rotation) multipoles by $S_i$; the electric multipoles are denoted by $Q_i$ and the magnetic ones by $B_i$. For the PRS solution, the resulting moments (Eq. (11)) allow us to identify m as the total mass and a as the total angular momentum per unit mass (a = J/m, J being the total angular momentum), while k, s, q and µ are associated with the mass quadrupole moment $M_2$, the current octupole $S_3$, the electric charge, and the magnetic dipole, respectively. Using Eqs. (6) and (10), one obtains the Ernst potentials and the metric functions in the whole spacetime, where the functions A, B, C, H, G, K, and I can be found in Appendix A. The PRS electrovacuum exact solution belongs to the extended N-soliton solution of the Einstein-Maxwell equations in the particular case N = 3. In addition, the functional form of the metric functions resembles the one derived previously by Bretón et al. (1999). Besides the limiting cases discussed in Pachón et al. (2006), it is worth mentioning that, in the vacuum case q = 0 and µ = 0, for s = 0 this solution reduces to a previously known vacuum solution under the same physical conditions (q = 0, c = 0 and b = 0 in the notation of that solution).
ORBITAL MOTION FREQUENCIES ON THE EQUATORIAL PLANE
Although for compact stars contributions from the magnetic field could be relevant (see e.g. Sanabria-Gómez et al. 2010; Bakala et al. 2012), we focus in this work on the frequencies of neutral particles orbiting a neutral compact object. We calculate the Keplerian frequency $\nu_K = \Omega_K/(2\pi)$, the frame-dragging (Lense-Thirring) frequency $\nu_{LT} = \Omega_{LT}/(2\pi)$, the radial oscillation and precession frequencies, $\nu^{OS}_\rho = \Omega^{OS}_\rho/(2\pi)$ and $\nu^P_\rho = \Omega^P_\rho/(2\pi)$, and the vertical oscillation and precession frequencies, $\nu^{OS}_z = \Omega^{OS}_z/(2\pi)$ and $\nu^P_z = \Omega^P_z/(2\pi)$, respectively. The geodesic motion of test particles along the radial coordinate, on the equatorial plane z = 0, is governed by the effective potential (19) (see e.g. Ryan 1995), where, for circular orbits, the energy E and angular momentum L are determined by the conditions V = 0 and dV/dρ = 0. The frequencies at the ISCO location (determined by the additional condition $d^2V/d\rho^2 = 0$) are of particular interest. Thus, before starting the discussion of the frequencies, it is important to explore the parametric dependence of the ISCO. We report here, as is standard in the literature, the physical ISCO radius given by $\sqrt{g_{\phi\phi}}$ evaluated at the root of Eq. (19) that gives the coordinate ISCO radius. In the upper panel of Fig. 1 we plot contours of constant ISCO radius as a function of the dimensionless angular momentum parameter $j = J/M_0^2$ and the star quadrupole moment $M_2$, for the PRS solution. The use of the dimensionless parameter j on the horizontal axis allows one, qualitatively, to relate deviations of the contour lines from vertical lines to the influence of the quadrupole moment. We can see that the ISCO radius decreases for increasing j and decreasing $M_2$. A quantitative measurement of this influence could be derived from the effective slope of the contour lines. We are interested in the comparison with the Kerr geometry, so in the lower panel we plot contours of constant ratio $r_{\rm ISCO,PRS}/r_{\rm ISCO,Kerr}$ as a function of j and the difference between the quadrupole moment of the PRS solution $M_{2,\rm PRS}$ and the Kerr quadrupole $M_{2,\rm Kerr} = -ma^2$, i.e. $M_{2,\rm PRS} - M_{2,\rm Kerr} = M_{2,\rm PRS} + ma^2 = mk$, see Eq. (11). Deviations from the Kerr geometry are evident. Negative values of the angular momentum correspond to the radii of the counter-rotating orbits, obtained here through the change $g_{t\phi} \to -g_{t\phi}$ (see the discussion below).
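For the Kerr reference values entering such ratios, the ISCO radius has the well-known closed form of Bardeen, Press and Teukolsky (1972); a minimal sketch in geometrized units (G = c = 1):

```python
# Kerr ISCO radius from the Bardeen, Press & Teukolsky (1972) closed form,
# used here as the reference value against which PRS ISCO radii are compared.
import numpy as np

def kerr_isco(M, a, prograde=True):
    chi = a / M
    z1 = 1 + (1 - chi**2)**(1/3) * ((1 + chi)**(1/3) + (1 - chi)**(1/3))
    z2 = np.sqrt(3 * chi**2 + z1**2)
    sign = -1 if prograde else 1
    return M * (3 + z2 + sign * np.sqrt((3 - z1) * (3 + z1 + 2 * z2)))

M = 1.0
for j in (0.0, 0.19, 0.51):
    print(f"j = {j}: r_ISCO = {kerr_isco(M, j * M):.4f} M (prograde), "
          f"{kerr_isco(M, j * M, prograde=False):.4f} M (retrograde)")
```

For a = 0 this reduces to the Schwarzschild value of 6M, and for a prograde orbit around an extreme Kerr object (a = M) it reaches 1M.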
We stress that the accuracy of the PRS solution in describing the ISCO radius of realistic NSs was already shown to be higher than that of other analytic models (see Pachón et al. 2006, for details). In Table 1 we compare the ISCO radii for two rapidly rotating NSs, models 20 and 26 of Table VI of Pappas and Apostolatos (2012), for the EoS L. The lowest multipole moments of the analytic models are fixed to the numerical values obtained by Pappas and Apostolatos (2012). In the case of the Kerr solution, only $M_0$ and J can be fixed, while $M_2$ and $S_3$ have values that depend on $M_0$ and J and therefore cannot be fixed. For the PRS solution with s = 0, $M_0$, J and $M_2$ can be fixed, while $S_3$ remains induced by the lower moments. We also present the ISCO radius obtained by fixing $M_0$, J, $M_2$, as well as $S_3$ in the PRS analytic exact model.
For example, numerical computations for the EoS L give a quadrupole moment $M_2 = -5.3 \times 10^{43}$ g cm$^2$ = 3.93 km$^3$ (the latter value in geometric units) for a NS with angular rotation frequency $\nu_s = 290$ Hz (rotation period of 3.45 milliseconds), corresponding to a dimensionless angular momentum $j = J/M_0^2 = 0.19$. For a fixed mass, the quadrupole moment is an increasing function of j, because an increase of the angular momentum at fixed mass results in an increase of the oblateness (eccentricity) of the star, and so of the quadrupole moment. Based on this fact, it is clear that not all of the $(M_2, j)$ pairs of quadrupole moment and angular momentum depicted in, e.g., Fig. 1 are physically meaningful. The maximum rotation rate of a neutron star, taking into account both the effects of general relativity and deformations, has been found to be $\nu_{s,\rm max} = 1045\,(M_0/M_\odot)^{1/2}(10\,{\rm km}/R)^{3/2}$ Hz, largely independent of the EoS (see Lattimer and Prakash 2004, for details).
Corresponding to this maximum rotation rate, the angular momentum is $J_{\rm max} = 2\pi\nu_{s,\rm max} I \sim 6.56 \times 10^{48}\, I_{45}$ g cm$^2$ s$^{-1}$, where $I_{45}$ is the moment of inertia of the NS in units of $10^{45}$ g cm$^2$.
Keplerian Frequency
Now we turn to the frequency analysis. For stationary axially symmetric spacetimes, the frequency of Keplerian orbits is given by (see e.g. Ryan 1995)

$$\Omega_K = \frac{-g_{t\phi,\rho} \pm \sqrt{(g_{t\phi,\rho})^2 - g_{tt,\rho}\, g_{\phi\phi,\rho}}}{g_{\phi\phi,\rho}}, \qquad (20)$$

where a comma stands for the partial derivative with respect to the indicated coordinate, and '+' and '−' stand for corotating and counter-rotating orbits, respectively. For the case of static spacetimes, i.e. for ω = 0 and therefore $g_{t\phi} = 0$, one has $\Omega_K = \pm\sqrt{-g_{tt,\rho}\, g_{\phi\phi,\rho}}/g_{\phi\phi,\rho}$, and the energy E and angular momentum L per mass µ of the test particle can be expressed in terms of the metric tensor components (see e.g. Ryan 1995). From this it is clear that taking the negative branch of the root for $\Omega_K$ in Eq. (20) is equivalent to studying a particle with opposite angular momentum, i.e. $L_{\rm count-rot} = -L_{\rm co-rot}$. Thus, in the static case the magnitudes of the energy and angular momentum are invariant under the change $\Omega_K \to -\Omega_K$. Now we consider the case of stationary spacetimes, ω ≠ 0. The energy E and angular momentum L per mass µ are, in this case, given by (see e.g. Ryan 1995)

$$\frac{E}{\mu} = \frac{-g_{tt} - g_{t\phi}\Omega}{\sqrt{-g_{tt} - 2g_{t\phi}\Omega - g_{\phi\phi}\Omega^2}}, \qquad \frac{L}{\mu} = \frac{g_{t\phi} + g_{\phi\phi}\Omega}{\sqrt{-g_{tt} - 2g_{t\phi}\Omega - g_{\phi\phi}\Omega^2}}.$$

The counter-rotating condition, expressed by the negative branch of Eq. (20), can be generated by the change $g_{t\phi} \to -g_{t\phi}$, which seems a more physical and transparent condition. In contrast to the static case, the counter-rotating orbit now has a different energy and a different magnitude of the angular momentum, due to the presence of the dragging of inertial frames, characterized by the metric component $g_{t\phi}$ (cf. Eq. (26) below). In a nutshell, the dynamics of the counter-rotating orbits of a test particle can be derived, starting from the positive branch of Eq. (20), by considering a spacetime with $g_{t\phi} \to -g_{t\phi}$.
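As a sanity check of Eq. (20), which holds in any stationary axisymmetric coordinates in which circular orbits sit at constant radius, the sketch below evaluates it with finite-difference metric derivatives for the Kerr metric in equatorial Boyer-Lindquist coordinates and compares against the known closed form $\Omega_K = \sqrt{M}/(r^{3/2} + a\sqrt{M})$ (geometrized units):

```python
# Numerical check of the circular-orbit frequency formula against the
# known Kerr closed form (equatorial plane, Boyer-Lindquist, G = c = 1).
import numpy as np

M, a, r = 1.0, 0.5, 8.0

def metric(r):
    g_tt = -(1 - 2 * M / r)
    g_tp = -2 * M * a / r
    g_pp = r**2 + a**2 + 2 * M * a**2 / r
    return g_tt, g_tp, g_pp

# Central finite-difference radial derivatives of the metric components
eps = 1e-6
gtt_r, gtp_r, gpp_r = [(p - m) / (2 * eps)
                       for p, m in zip(metric(r + eps), metric(r - eps))]

omega = (-gtp_r + np.sqrt(gtp_r**2 - gtt_r * gpp_r)) / gpp_r  # Eq. (20), '+'
omega_exact = np.sqrt(M) / (r**1.5 + a * np.sqrt(M))          # prograde Kerr
print(omega, omega_exact)   # the two values should agree to roundoff
```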
For the vacuum case, an analysis similar to the one developed by Herrera et al. (2006) clearly shows that the change in the global sign of $g_{t\phi}$ is achieved by changing not only the angular momentum of the star, J → −J, but all the rotational multipole moments. For the Kerr metric this change is obtained by changing the sign of the parameter a (see Appendix B), while in the PRS solution we must additionally change the sign of the parameter s associated with differential rotation, i.e., we change a → −a and s → −s. (For the vacuum case, in the solution by Manko et al. (2000), the sign change of $g_{t\phi}$ is obtained by performing simultaneously the replacements a → −a and b → −b.)
Once this important issue about corotating and counter-rotating orbits is clarified, we proceed to analyze the functional dependence of the Keplerian frequency on the multipole moments. In the upper panel of Fig. 2 we plot contours of constant Keplerian frequency for the PRS solution, $\nu_{K,\rm PRS} = \Omega_{K,\rm PRS}/(2\pi)$, as a function of the dimensionless angular momentum parameter j and the quadrupole moment $M_{2,\rm PRS}$, at the ISCO radius. It can be seen that the influence of the quadrupole moment is non-negligible, as evidenced by the departure of the contour lines from vertical lines. The Keplerian frequency grows with increasing J and $M_2$. In the lower panel, we plot contours of constant ratio $\nu_{K,\rm PRS}/\nu_{K,\rm Kerr}$ as a function of j and the difference between the quadrupole moment of the PRS solution, $M_{2,\rm PRS}$, and the Kerr quadrupole, $M_{2,\rm Kerr}$.
It is appropriate to recall here that, because the Keplerian as well as the other frequencies calculated below are evaluated using formulas in the coordinate frame, see for instance Eq. (20), they must be evaluated at coordinate radii ρ and not at the physical radii given by $\sqrt{g_{\phi\phi}}$. In the specific case of the ISCO, the frequencies are evaluated at the radius that simultaneously solves the equations V = 0, dV/dρ = 0, and $d^2V/d\rho^2 = 0$, where V is the effective potential (19).
Oscillation and Precession Frequencies
The radial and vertical oscillation (or epicyclic) frequencies are the frequencies at which the periastron and the orbital plane of a circular orbit oscillate when slightly radial and vertical perturbations, respectively, are applied to it. According to Ryan (1995), in stationary axially symmetric vacuum spacetimes described by the Weyl-Papapetrou metric (1), the radial and vertical epicyclic frequencies can be obtained as

$$\Omega_\alpha^2 = -\frac{g^{\alpha\alpha}}{2}\left[(g_{tt} + g_{t\phi}\Omega)^2\left(\frac{g_{\phi\phi}}{\rho^2}\right)_{,\alpha\alpha} - 2(g_{tt} + g_{t\phi}\Omega)(g_{t\phi} + g_{\phi\phi}\Omega)\left(\frac{g_{t\phi}}{\rho^2}\right)_{,\alpha\alpha} + (g_{t\phi} + g_{\phi\phi}\Omega)^2\left(\frac{g_{tt}}{\rho^2}\right)_{,\alpha\alpha}\right], \qquad (23)$$

and the corresponding periastron ($\nu^P_\rho$) and nodal ($\nu^P_z$) precession frequencies as

$$\nu^P_\alpha = \nu_K - \nu_\alpha, \qquad (24)$$

where α = {ρ, z}, respectively, $\nu_\alpha = \Omega_\alpha/(2\pi)$, and $\nu_K = \Omega_K/(2\pi)$ is the Keplerian orbital frequency, with $\Omega_K$ given by Eq. (20).
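In the Kerr limit these expressions reduce to well-known closed forms for equatorial circular orbits; the sketch below evaluates them together with the periastron and nodal precession frequencies of Eq. (24), in geometrized units:

```python
# Closed-form Kerr epicyclic frequencies for prograde equatorial circular
# orbits (Boyer-Lindquist radius r, G = c = 1), the limit to which the PRS
# values reduce when k = s = q = mu = 0.
import numpy as np

def kerr_frequencies(M, a, r):
    """Return (Keplerian, periastron precession, nodal precession)
    angular frequencies Omega = 2*pi*nu."""
    sqM = np.sqrt(M)
    Om_K = sqM / (r**1.5 + a * sqM)
    Om_r = Om_K * np.sqrt(1 - 6*M/r + 8*a*sqM/r**1.5 - 3*a**2/r**2)
    Om_z = Om_K * np.sqrt(1 - 4*a*sqM/r**1.5 + 3*a**2/r**2)
    return Om_K, Om_K - Om_r, Om_K - Om_z

M, a = 1.0, 0.2
for r in (6.0, 8.0, 12.0):
    Om_K, Om_per, Om_nod = kerr_frequencies(M, a, r)
    print(f"r = {r:4.1f} M: Om_K = {Om_K:.4f}, "
          f"periastron = {Om_per:.4f}, nodal = {Om_nod:.4f}")
```

Note how the radial epicyclic term vanishes at the ISCO, which is precisely why the radial precession frequency there equals the Keplerian frequency, as discussed below.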
In the upper panel of Fig. 3 we plot contours of constant nodal precession frequency $\nu^P_z$ as a function of $j = J/M_0^2$ and $M_2$ for the PRS solution, at the ISCO radius. We can see that the influence of the quadrupole moment is now quite important. The nodal precession frequency increases for increasing J and decreasing $M_2$, at fixed $M_0$. In the lower panel we plot contours of constant ratio $\nu^P_{z,\rm PRS}/\nu^P_{z,\rm Kerr}$, at the ISCO radius, as a function of j and the difference $M_{2,\rm PRS} - M_{2,\rm Kerr}$, in order to make deviations from the Kerr solution evident. The radial oscillation frequency $\nu^{OS}_\rho$ vanishes at the ISCO radius, and therefore at that location the radial precession frequency equals the Keplerian frequency, whose contours have been plotted in Fig. 2.
In Figs. 4 and 5 we plot the nodal precession frequency $\nu^P_z$ and the radial oscillation frequency $\nu^{OS}_\rho$ as functions of the Keplerian frequency $\nu_K$, respectively, for both the Kerr and PRS solutions. As an example, we show the results for the rotating NS models 20 and 26 of Table VI of Pappas and Apostolatos (2012). The deviations of the quadrupole and current octupole moments given by the Kerr solution from the numerical values of Pappas and Apostolatos (2012) can be used to show the low accuracy of the Kerr solution in describing fast rotating NSs. The accuracy of the PRS solution in describing the ISCO radii of these two models has been shown in Table 1 of Section 3.
In Figs. 4 and 5 we can see the differences in the $\nu^P_z$-$\nu_K$ and $\nu^{OS}_\rho$-$\nu_K$ relations between the Kerr and PRS solutions for realistic NS models. The deviations of the Kerr solution, especially at fast rotation rates, are evident because of the influence of the deformation (quadrupole $M_2$) of the star as well as, although to a lesser extent, of the current octupole $S_3$. In general, we observe that the larger the angular momentum, the poorer the performance of the predictions of the Kerr solution.
We have also shown in Figs. 4-5 the influence of the current octupole $S_3$ on the determination of the precession and oscillation frequencies. We found that the effect of $S_3$ is only appreciable for the fastest models. The minor influence, in this case, of the current octupole $S_3$ is expected from the small values of the parameter s needed to fit the numerical values of Pappas and Apostolatos (2012). Clearly, larger values of the parameter s, needed to fit realistic values of $S_3$, will likewise enhance the deviations from the Kerr spacetime.
The effects, on the various quantities analyzed here, of a multipolar structure that deviates from that of the Kerr geometry are relevant, for instance, in the RPM of the QPOs observed in LMXBs (see e.g. Stella and Vietri 1998 and Section 6, for details).
Dragging of Inertial Frames
It is known that a prediction of general relativity is that a rotating object causes a zero angular momentum test particle to orbit around it, namely it drags the particle in the direction of its rotational angular velocity; such an effect is called dragging of inertial frames or the Lense-Thirring effect. Consequently, particle orbit planes oblique with respect to the equatorial plane of the source will precess around the rotation axis of the object. In stationary axially symmetric spacetimes described by the metric (1), the frame-dragging precession frequency is given by (see e.g. Ryan 1995)

$$\Omega_{LT} = -\frac{g_{t\phi}}{g_{\phi\phi}}. \qquad (26)$$

Many efforts have been made to test the predictions of general relativity around the Earth, such as the analysis of the periastron precession of the orbits of the LAser GEOdynamics Satellites, LAGEOS and LAGEOS II (see e.g. Lucchesi and Peron 2010), and the relativistic precession of the gyroscopes on board the Gravity Probe B satellite (see Everitt et al. 2011, for details). The latter experiment measured the frame-dragging effect within an accuracy of 19 % with respect to the prediction of general relativity.
The smallness of this effect around the Earth makes such measurements quite difficult, and it has represented a multi-year challenge for astronomy. The frame-dragging precession increases with the angular momentum of the rotating object, and therefore a major hypothetical arena for the search for a more appreciable Lense-Thirring precession is the spacetime around compact objects such as BHs and NSs. The much stronger gravitational field of these objects with respect to that of the Earth allows them to attain much faster rotation rates and hence larger angular momenta. Stella and Vietri (1998) showed how, in the weak field, slow rotation regime, the vertical precession frequency $\nu^P_z$ (the orbital plane precession frequency) can be divided into one contribution due to the Lense-Thirring precession and another due to the deformation (non-zero quadrupole moment) of the rotating object, both of them quantitatively comparable. These frequencies could in principle be related to the motion of the matter in the accretion disks around BHs and NSs and are thus particularly applicable to LMXBs. For fast rotating NSs and BHs, the frequency at which the orbital plane precesses, and so the frame-dragging precession frequency, can reach values of the order of tens of Hz (see e.g. Stella and Vietri (1998) and Figs. 3 and 4).
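A minimal sketch of Eq. (26) in the Kerr limit illustrates these magnitudes and shows how the equatorial frame-dragging rate approaches the familiar weak-field Lense-Thirring value 2J/r³ at large radii (geometrized units):

```python
# Frame-dragging angular frequency omega = -g_{tphi}/g_{phiphi} for Kerr
# (equatorial plane, Boyer-Lindquist coordinates, G = c = 1).
import numpy as np

def omega_lt_kerr(M, a, r):
    g_tp = -2 * M * a / r
    g_pp = r**2 + a**2 + 2 * M * a**2 / r
    return -g_tp / g_pp          # = 2*M*a / (r**3 + a**2 * r + 2*M*a**2)

M, a = 1.0, 0.5
J = M * a
for r in (6.0, 20.0, 100.0):
    exact = omega_lt_kerr(M, a, r)
    weak = 2 * J / r**3          # Lense-Thirring weak-field limit
    print(f"r = {r:6.1f} M: omega = {exact:.3e}, 2J/r^3 = {weak:.3e}")
```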
Thus, it is clear that an observational confirmation of the relativistic precession of matter around either a NS or a BH would lead to an outstanding test of general relativity in the strong field regime and, at the same time, an indirect check of the large effects of frame dragging in the exterior spacetime of compact objects (see e.g. Morsink and Stella 1999, for details).

Although making independent measurements of the frame-dragging effect around BHs and NSs is a very complicated task, it is important to know the numerical value of the precession frequency due to frame dragging relative to other relativistic precession effects, e.g. the geodetic precession. In addition, it is important to know the sensitivity of the precession frequency to the object parameters such as the mass, angular momentum, quadrupole moment, and octupole moment.
In the upper panel of Fig. 6 we plot contours of constant frame-dragging frequency $\nu_{LT}$ for the PRS solution, at the ISCO radius, as a function of the angular momentum per unit mass $J/M_0$ and the quadrupole moment $M_2$, for a compact object of mass $M_0 = m = 1.88 M_\odot$. Correspondingly, in the lower panel of Fig. 6 we show the differences between the frame-dragging precession frequencies predicted by the Kerr and PRS solutions, at the ISCO radius, as a function of $j = J/M_0^2$ and the difference between the quadrupole moments, $M_{2,\rm PRS} - M_{2,\rm Kerr}$.
The influence of the quadrupole moment on the determination of the frame-dragging frequency is evident: the frequency $\nu_{LT}$ given by a NS is generally smaller than the one given by a BH, as can be seen from the ratio $\nu_{LT,\rm PRS}/\nu_{LT,\rm Kerr} < 1$ obtained for configurations with a quadrupole moment that deviates from the one given by the Kerr solution, namely for $M_{2,\rm PRS} - M_{2,\rm Kerr} = M_{2,\rm PRS} + ma^2 = mk \neq 0$, see Eq. (11).
It is also worth mentioning that the frame-dragging precession can be affected as well by the presence of electromagnetic fields (see Herrera et al. 2006), and further research in this respect deserves due attention.
ACCURACY OF RYAN'S ANALYTIC FORMULAS
Following a series expansion procedure in powers of 1/ρ, Ryan (1995) found that the periastron (radial) and nodal (vertical) precession frequencies, $\nu^P_\rho$ and $\nu^P_z$, given by Eq. (24), can be written as functions of the Keplerian frequency $\nu_K$ as power series in the parameter $V = (M_0\Omega_K)^{1/3} = (2\pi M_0\nu_K)^{1/3}$, Eqs. (27) and (28), whose coefficients involve the lowest multipole moments: $[M_0, M_2, M_4]$ are the lowest three mass moments and $[S_1, S_3]$ are the lowest two current moments. For the PRS solution in the vacuum case, $M_4 = m(a^4 - 3a^2k + k^2 + 2as)$. The above formulas are approximate expressions for the periastron and nodal precession frequencies in the weak field (large distances from the source) and slow rotation regimes. We should therefore expect them to become less accurate at distances close to the central object, e.g. at the ISCO radius, and for fast rotating objects. However, such formulas are an important tool to understand the role of the lowest multipole moments in the values of the relativistic precession frequencies, such as the importance of the higher multipole moments at short distances and high frequencies, as can be seen from Eqs. (27)-(28).
At high frequencies, for instance of the order of kHz, deviations from the above scaling laws are appreciable. In Figs. 7 and 8 we compare the radial precession and vertical oscillation frequencies, $\nu^P_\rho$ and $\nu^{OS}_z$, as functions of the Keplerian frequency $\nu_K$, as given by the full expressions (24) for the PRS solution and by the approximate formulas (27) and (28), respectively. (Because the scales of the $\nu^P_\rho$ and $\nu^P_z$ frequencies are very similar, we decided to plot in Fig. 7 $\nu^P_\rho$ and $\nu^{OS}_z$, whose scales are different, allowing a clearer comparison with the PRS solution in a single figure.) The lowest multipole moments $M_0$, J, $M_2$, and $S_3$ of the PRS solution have been fixed to the values of two models of Table VI of Pappas and Apostolatos (2012). In the $\nu^{OS}_z$-$\nu_K$ relation, the blue dotted curve depicts the contribution from the angular momentum (we plot the series (28) up to $V^3$); for the blue dot-dashed curve we added the first contribution from the quadrupole moment $M_2$ (we cut the series at $V^4$); for the dashed blue line we added the first contribution from the mass-current octupole (series expansion up to $V^7$); and finally, for the continuous blue line, we included the higher-order terms.

Fig. 7. Comparison of the $\nu^{OS}_z$-$\nu_K$ and $\nu^P_\rho$-$\nu_K$ relations given by the PRS solution and the approximate expressions (27)-(28) derived by Ryan (1995). The lowest multipole moments $M_0$, J, $M_2$, and $S_3$ have been fixed to the values of Model 2 of Table VI of Pappas and Apostolatos (2012).

For the analysis of the $\nu^P_\rho$-$\nu_K$ relation we followed the same procedure as described above. In this case, Ryan's expressions tend from the top to the exact result, the continuous black curve, represented by the PRS solution. It is interesting to see that the introduction of the octupole moment (dashed red line) makes the approximation deviate from the exact result; however, by including more terms the accuracy is enhanced. As can be seen from Figs. 7 and 8, the quantitative accuracy of Ryan's approximate formulas for the periastron precession frequency $\nu^P_\rho$ is lower than the one obtained for the vertical oscillation frequency $\nu^{OS}_z$. The importance of the high-order multipole moments, such as the quadrupole and octupole moments, is evident in the high-frequency regime. This is in line with the results shown in Figs. 2-3 and in Figs. 4-5. We can also see from Figs. 7 and 8 that Ryan's approximate formulas describe Model 2 more accurately than Model 20. The reason is that, as mentioned above, we should expect better accuracy of the series expansions at low to moderate rotation rates, and consequently for the corresponding quadrupole deformations. There are clearly appreciable differences, both in rotation and in deformation, between the two selected models; we recall also that the rotation frequency of the star can be expressed as a function of the dimensionless parameter j as $\nu_s = GjM_0^2/(2\pi cI) = 1.4\, j\,(M_0/M_\odot)^2/I_{45}$ kHz. It is noteworthy that we have checked that Ryan's series expansions, Eqs. (27) and (28), fit the exact results quite accurately if taken up to order $V^{10}$. In particular, the values of the vertical oscillation and precession frequencies are fitted better than the corresponding radial ones. For Model 2 the radial oscillation frequency is well fitted by Ryan's expression up to Keplerian frequencies of order ∼ 1.2 kHz while, for Model 20, the approximate formulas break down at a lower value, ∼ 0.7 kHz.
These results are of particular relevance because they make possible the extraction of the object parameters (for instance the lowest multipoles up to $S_3$) by fitting the observed QPO frequencies in LMXBs, provided they are indeed related to the precession and oscillation frequencies of matter in the accretion disk (see Section 6, for details), and for Keplerian motion not exceeding a few kHz in frequency.
ACCURACY OF PRS SOLUTION
We turn now to analyze the behavior of the Kerr and PRS solutions in predicting the Keplerian, frame-dragging, and vertical oscillation frequencies for realistic NSs. In particular, we compare their predictions with the frequencies calculated by Morsink and Stella (1999). Since the values of the octupole current moment $S_3$ were not included there, here we set s = 0 in Eq. (10) for the PRS solution. For the sake of comparison, we choose the results derived for the EoS L, because for this EoS the highest rotation parameter j and quadrupole moment $M_2$ were found. In addition, the stiffness of such an EoS allows the maximum mass of the NS to be larger than the highest observed NS mass, $M_0 = 1.97 \pm 0.04 M_\odot$, corresponding to the 317 Hz (3.15 millisecond rotation period) pulsar J1614-2230 (see Demorest et al. 2010, for details).
This regime of high j and M_2 in realistic models is particularly interesting for testing the deviations of the Kerr solution in the description of NS signatures, as well as for exploring the accuracy of the PRS solution. In Table 2, we present the results for four different sets of the star spin frequency ν_s, namely ν_s = 290 Hz (M1 and M2), ν_s = 360 Hz (M3 and M4), ν_s = 580 Hz (M5 and M6), and ν_s = 720 Hz (M7 and M8).
In Table 2, we clearly observe that the results predicted by the PRS_{s=0} solution for the Keplerian and frame-dragging frequencies are in excellent agreement with those calculated by Morsink and Stella (1999), even for highly massive, rapidly rotating, and strongly deformed models such as model M7, with M_0 = 2.17 M_⊙, j = 0.51, and M_2 = -39.4 Q_0. We notice that Morsink and Stella (1999) reported some configurations with negative values of ν_z (see Table 2). We advance the possibility that this is due to instabilities of the numerical code that occur when the ISCO radius is located very close to or inside the surface of the object. Thus, the values of the frequencies given by the analytic solution in these cases are to be considered predictions to be tested in future numerical computations.

TABLE 2.-ISCO radius r_+, Keplerian frequency ν_K, frame-dragging (Lense-Thirring) frequency ν_LT, and vertical precession frequency ν^P_z of the co-rotating orbits calculated numerically by Morsink and Stella (1999) (upper index MS), compared with the corresponding values predicted by the Kerr solution (upper index Kerr) and the PRS_{s=0} solution (upper index PRS). The quadrupole moment M_2 has been normalized for convenience to the value Q_0 = 10^43 g cm^2.
This fact can be checked within the calculations of Morsink and Stella (1999) by exploring the properties of counter-rotating orbits, which in general produce ISCO radii larger than those of the co-rotating ones. In Table 3, we report the results for the counter-rotating case, where we can notice an improvement of the accuracy of the PRS solution with respect to the co-rotating case.
Along this line, we consider it worthwhile to perform numerical computations of the precession and oscillation frequencies of particles around realistic NSs in a wider space of parameters and using up-to-date numerical techniques, which will certainly help to establish and elucidate more clearly the accuracy of analytic models. It is also worth recalling the recent results of Pappas and Apostolatos (2012) on the computation of the general relativistic multipole moments in axially symmetric spacetimes.
THE RELATIVISTIC PRECESSION MODEL
The X-ray light curves of LMXBs show a variability from which a wide variety of QPOs have been measured, extending from relatively low (∼Hz) frequencies all the way up to high (∼kHz) frequencies (see e.g. van der Klis 1995, for details). In particular, such frequencies usually come in pairs (often called twin peaks), the lower and upper frequencies, ν_l and ν_h, respectively. BHs and NSs with similar masses can show similar signatures, and therefore the identification of the compact object in a LMXB is not a simple task. If the QPO phenomena observed in these systems are indeed due to the relativistic motion of accretion-disk matter, knowledge of the specific behavior of the particle frequencies (e.g. rotation, oscillation, precession) in the exterior geometry of NSs and BHs becomes essential as a tool for identifying the nature of the compact object harbored by a LMXB.
It is not the scope of this work to test a particular model for the QPO phenomenon in LMXBs, but rather to show the influence of the higher multipole moments on the orbital motion of test particles, especially the role of the quadrupole moment, which is of particular interest for differentiating a NS from a BH. There are in the literature several models that describe the QPOs in LMXBs through the frequencies of particles around the compact object; for a recent review and comparison of the different models we refer to the work of Lin et al. (2011). In order to show here the main features and differences between the Kerr and the PRS solutions, we shall use the Relativistic Precession Model (RPM).
The RPM identifies the lower and higher (often called twin-peak) kHz QPO frequencies, ν_l and ν_h, with the periastron precession and Keplerian frequencies, namely ν_l = ν^P_ρ and ν_h = ν_K, respectively. The so-called horizontal branch oscillations (HBOs), which belong to the low-frequency QPOs observed in high-luminosity Z-sources (see e.g. van der Klis 1995, for details), are related within the RPM to the nodal precession frequency ν^P_z of the same orbits (see Morsink and Stella 1999). We will use here, in particular, the realistic NS models of Morsink and Stella (1999) for the EoS L.
One of the salient features of the RPM is that, for the HBO frequencies, the relations inferred from the first terms of the expansions (27) and (28) lead to

ν^P_z = (2/3)^(6/5) π^(1/5) j m^(1/5) (ν^P_ρ)^(6/5),

which implies a nodal precession frequency proportional to the square of the Keplerian frequency. Such a quadratic scaling has been observed in some sources, for instance in the LMXB 4U 1728-34 (see Ford and van der Klis 1998, for details). In addition, the 6/5 power law relating the nodal and periastron precession frequencies can explain the correlation between two of the observed QPO frequencies found in the fluxes of NS and BH LMXBs (see Psaltis et al. 1999, for details). This fact provides, at the same time, a significant test of Ryan's analytic expressions.

It is interesting to analyze the level of predictability of the precession and oscillation frequencies for particular astrophysical sources. In Fig. 9 we show the ν_l-ν_h relation within the RPM, namely ν^P_ρ versus ν_K, for the models M1-M8 of Table 2. In the upper panel we show the results for the PRS solution while, in the lower panel, we present the results for the Kerr solution. We have indicated the QPO frequencies observed in the sources GX 5-1, 4U 1735-44, 4U 1636-53 (see e.g. Wijnands et al. 1997), Sco X1 (see e.g. van der Klis et al. 1996), GX 17-2, GX 340+0 (see e.g. Jonker et al. 2000), Cir X1 (see e.g. van der Klis et al. 1996), 4U 0614+091 (see e.g. Ford et al. 1997), and 4U 1728-34 (see e.g. Strohmayer et al. 1996).
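To make the origin of these relations concrete, the sketch below (illustrative only; it works in geometric units G = c = 1 and assumes the standard lowest-order coefficients ν^P_ρ ≈ 3V^2 ν_K and ν^P_z ≈ 2jV^3 ν_K, with V = (2πMν_K)^(1/3)) verifies numerically that the first terms of the expansions yield both ν^P_z ∝ ν_K^2 and the 6/5 power law with the prefactor quoted above:

```python
import numpy as np

def leading_order_frequencies(nu_K, M, j):
    """Periastron and nodal precession at leading order in V (G = c = 1)."""
    V = (2.0 * np.pi * M * nu_K) ** (1.0 / 3.0)
    nu_rho_P = 3.0 * V**2 * nu_K          # periastron precession
    nu_z_P = 2.0 * j * V**3 * nu_K        # nodal (Lense-Thirring) precession
    return nu_rho_P, nu_z_P

def power_law_nodal(nu_rho_P, M, j):
    """6/5 power law: nu_z_P = (2/3)^(6/5) pi^(1/5) j M^(1/5) (nu_rho_P)^(6/5)."""
    return (2.0 / 3.0) ** 1.2 * np.pi ** 0.2 * j * M ** 0.2 * nu_rho_P ** 1.2

M, j = 1.0, 0.3                            # illustrative values
nu_K = np.linspace(1e-4, 1e-3, 5)
nu_rho_P, nu_z_P = leading_order_frequencies(nu_K, M, j)

# The two routes to nu_z_P agree, and nu_z_P / nu_K^2 is constant (= 4 pi j M):
assert np.allclose(nu_z_P, power_law_nodal(nu_rho_P, M, j))
print(nu_z_P / nu_K**2)                    # constant array, ~3.77 here
```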
Both the upper and lower panels of Fig. 9 have been plotted using the same frequency scales in order to aid the identification of the differences between the Kerr and the PRS solutions. One can notice that all the solid curves of the Kerr solution (lower panel of Fig. 9) fall outside the range of the observed QPO frequencies shown, while all dashed and solid curves of the PRS solution fall inside the QPO range. It is then clear that fitting the observed QPO frequencies of the selected LMXBs of Fig. 9 will necessarily require a different choice of parameters in the Kerr and PRS solutions. Therefore, conclusions on the NS parameters (e.g. mass, angular momentum, quadrupole deformation) based on fitting QPOs using the Kerr geometry will deviate from the actual parameters (see e.g. Laarakkers and Poisson 1999, for details), which are extractable more reliably from a more complex geometry, such as the PRS one, that allows a better estimate, for instance, of the quadrupole moment of a compact star.

Fig. 9.-ν^P_ρ versus ν_K for the models M1-M8 of Table 2. We indicate the QPO frequencies observed in the sources GX 5-1, 4U 1735-44, 4U 1636-53, Sco X1, GX 17-2, GX 340+0, Cir X1, 4U 0614+091, and 4U 1728-34. The curves depict the results for the models M1 (solid) and M2 (dashed) with red lines, for the models M3 (solid) and M4 (dashed) with blue lines, and for the models M5 (solid) and M6 (dashed) with green lines, while orange lines stand for the results from models M7 (solid) and M8 (dashed). In the upper panel we present the results derived from the PRS_{s=0} solution, while in the lower panel we present the results for the Kerr solution. In the lower panel we have added, to guide the eye, the inner red dashed and outer red solid curves of the upper panel using black lines.
In Fig. 10 we show the relation ν^P_z versus ν_K for the models M1-M8 of Table 2. For the sake of comparison we show the low-frequency branch observed in the LMXB 4U 1728-34 (see Ford and van der Klis 1998, for details). From the analysis of the pulsating X-ray flux it turns out that very likely the spin frequency of the NS in 4U 1728-34 is ∼363 Hz (see Strohmayer et al. 1996, for details). Thus, the models M3 (M_0 = 1.94 M_⊙, j = 0.24) and M4 (M_0 = 2.71 M_⊙, j = 0.18) in Table 2, which correspond to a NS of spin frequency 360 Hz, are of particular interest for the analysis of this source. It has been suggested that the low frequencies observed in 4U 1728-34 are likely due to excitations of the second harmonic of the vertical motion, and therefore a better fit of the lower-higher QPO frequencies of 4U 1728-34 (and of similar sources) is obtained for the relation 2ν^P_z-ν_K. The black curves in Fig. 10 indicate the 2ν^P_z-ν_K relation for the models M3 and M4 (solid and dashed) following this suggestion. Although the improvement of the fit is evident, we notice that the NS parameters that correctly reproduce the features of 4U 1728-34 likely lie between the models M3 and M4.

Fig. 10.-ν^P_z versus ν_K for the models M1-M8 of Table 2. The convention is as in Fig. 9. We indicate the QPO frequencies observed in the LMXB 4U 1728-34. The black curves indicate the 2ν^P_z-ν_K relation for the models M3 and M4 (solid and dashed).
CONCLUDING REMARKS
We have carried out an extensive comparison of the orbital motion of neutral test particles in the PRS and Kerr spacetime geometries. In particular, we have focused on the Keplerian and frame-dragging frequencies, as well as on the precession and oscillation frequencies of the radial and vertical motions.
We have highlighted the differences between the Kerr and PRS solutions in this respect, especially in the fast (∼kHz) rotation regime. Such differences are the manifestation of the influence of the higher-order multipole moments, such as the quadrupole and the octupole.
Characterizing bird‐keeping user‐groups on Java reveals distinct behaviours, profiles and potential for change
| INTRODUCTION
Around 5,000 species of terrestrial birds, mammals, amphibians and reptiles are globally threatened with extinction due to overexploitation in the international wildlife trade, and this number may almost double in the near future (Ribeiro et al., 2019;Scheffers, Oliveira, Lamb, & Edwards, 2019). Bird species are far more widely represented in trade than mammals, and a disproportionate number of avian taxa are threatened by overexploitation (Alves, Lima, & Araújo, 2013;Bush, Baker, & Macdonald, 2014). This is particularly prevalent in Southeast Asia (Coleman et al., 2019;Harris et al., 2017), where intense demand has precipitated an 'Asian Songbird Crisis' (Lee, Chng, & Eaton, 2016;Rentschlar et al., 2018;Sykes, 2017).
Halting the extraction of birds from the wild, or at least reducing it to sustainable levels, is thus a global conservation priority (Bezerra, Araújo, & Alves, 2019;Marshall et al., 2020a;Symes, Edwards, Miettinen, Rheindt, & Carrasco, 2018) alongside addressing the problem of habitat loss, which in Asia threatens more bird species than anywhere except Amazonia (BirdLife International, 2020).
The trapping and trading of birds globally is driven principally by demand for pets, but also by the need for nutritional and medicinal resources, symbolic or cultural practices and gambling-related contests (Bezerra et al., 2019;de Oliveira, de Faria Lopes, & Alves, 2018;Jepson, 2010;Harris et al., 2017;Souto et al., 2017). Domestic consumption of birds as pets in two large biodiverse countries, Brazil and Indonesia, may actually be larger than the total international market (Alves et al., 2013;Jepson & Ladle, 2005;Rentschlar et al., 2018).
Regulating domestic trade to prevent significant impacts on wild bird populations is, however, problematic, as the size and variety of the networks involved can make enforcement logistically and politically difficult (Alves et al., 2013;Bezerra et al., 2019).
In Indonesia, where at least 26 bird species are globally threatened through overexploitation (BirdLife International, 2020), most of the trade is domestic (Chng, Eaton, Krishnasamy, Shepherd, & Nijman, 2015), but demand also drives the importation of birds from other countries in the region (Leupen et al., 2018). The legislation surrounding the trade in wild birds in Indonesia is comprehensive, and the list of protected species, which can only be traded if they are captive-bred, was recently updated to include newly recognized and recently Red-Listed species (Miller, Gary, Ansyah, Sagita, & Adirahmanta, 2019). Even the harvest of unprotected wildlife is, in theory at least, regulated through a quota system set by a governmental body, the Indonesian Institute of Sciences (LIPI). Harvest quotas have, however, only been set for a few species, thereby rendering the capture or trade of any other species illegal. Nevertheless, the trade and ownership of wild-caught birds is ubiquitous across Indonesia (Marshall et al., 2020a), and bird traders are often confused about or unaware of the law (Rentschlar et al., 2018), making enforcement both difficult and unpopular (Janssen & Chng, 2018; Miller et al., 2019).
Initial research explored the underlying behaviours and motivations of bird-keepers from an anthropological or historical perspective, and proposed a market-based way to reduce pressure on wild bird populations (Jepson, 2010; Jepson & Ladle, 2005, 2009; Jepson, Ladle, & Sujatnika, 2011). This entailed substituting captive-bred birds under a certification scheme, promoting singing competitions between captive-bred birds only, and establishing ringing courses to help distinguish wild-caught from captive-bred individuals (Jepson & Ladle, 2009). Even so, recent evidence indicates that captive breeding has not been able to meet the demand for songbirds (Harris et al., 2015, 2017).
Identifying and characterizing consumers based on behaviours and preferences has allowed researchers to break seemingly homogeneous audiences into groups on which to target demand reduction efforts (Razavi & Gharipour, 2018; Shairp, Veríssimo, Fraser, Challender, & Macmillan, 2016; Williams, Gale, Hinsley, Gao, & St. John, 2018). Such techniques have helped to understand demand for various wildlife products including orchids (Hinsley, Veríssimo, & Roberts, 2015), rhino horn (Dang Vu & Nielsen, 2018; Truong, Dang, & Hall, 2016) and saiga horn, and their potential value for finding ways to reduce demand for Asian songbirds requires urgent exploration.
In this study we seek to distinguish songbird-keeping usergroups on Java based on their behaviours and preferences, and to identify the demographic determinants of user-group membership.
We also track differences in bird taxa owned across user-groups and the degree of movement between user-groups over a 2-year period.
Our profiles of user-groups aim to identify specific threats to wild bird populations by characterizing for each group (a) species typically owned; (b) preferences for wild-caught or captive-bred birds and (c) number of birds owned and turnover of individual birds. This exercise may then benefit conservation by segmenting audiences on behaviour and demographics in such a way as to allow demand reduction interventions to be more appropriately and precisely targeted (Hinsley et al., 2015).
| Study design
In 2018 we collected data on bird ownership characteristics during a survey of households on Java, Indonesia, using a stratified sampling technique to capture a spectrum of rural and urban districts within each of the island's six provinces (Marshall et al., 2020a). Within communities and neighbourhoods of selected districts, households were systematically sampled (full details on sampling methodology can be found in Appendix A), and interviews carried out with the most senior member of the household available.
The motivations for bird-keeping in Java include the desire for success in contests, which drives preferences for birds with high-quality songs or colours, and the desire for social status, which drives preferences for birds that are normally hard to acquire (Jepson, 2016). However, broad user-groups are primarily described in terms of recreational pursuits (Thomas-Walters et al., 2019). The heterogeneity of the bird-owning community allows us to characterize three potential user-groups: (a) hobbyists, who keep birds primarily as pets and rarely engage in competitions or captive-breeding; (b) contestants, who keep birds primarily to enter them in singing contests, but may also breed birds; and (c) breeders, who breed and/or train birds for resale or as a hobby, but do not regularly enter birds in contests.
To assign bird-keepers to one of the three user-groups, respondents were asked to choose all motivations for keeping birds that were applicable to them: (a) to keep as a hobby, (b) to enter singing contests and (c) to breed or train birds. We also collected data on: species identity, abundance and origin (i.e. captive-bred or wildcaught) of all cage-birds in the household; the consumption behaviour and preferences of bird-keeping respondents (i.e. number and fate of birds owned previously; purchasing habits; time spent tending birds); and socio-economic and demographic profiles at both household and individual levels (see Appendix B for list of survey questions).
To represent household socio-economic status objectively, we used a composite household asset index (HAI: Filmer & Pritchett, 2001). We adopted a checklist of household items and conditions (Schreiner, 2012) and summed the total number of such items to create a score serving as a proxy for the economic status of the respondent, with higher scores indicating greater affluence (Harttgen & Vollmer, 2013). To establish a household occupancy index, we asked respondents how many people lived in their household and how many bedrooms they had, and then calculated the number of people per bedroom. To estimate losses of birds, we calculated the proportion of those owned in 2016 that respondents reported to have subsequently died. As the owning of trafficked wildlife is not illegal under Indonesian legislation, our questions did not relate to perceived illegal behaviour; thus, in common with previous research into songbird-keeping (Burivalova et al., 2017; Krishna et al., 2019), we assumed that respondents provided information about the origins of their birds truthfully.
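As a minimal illustration of how the three derived measures described above can be assembled (the asset checklist items and responses below are hypothetical, not the actual survey instrument), consider:

```python
# Hypothetical single-household record; item names are invented.
household = {
    "assets": {"tv": 1, "fridge": 1, "motorbike": 0, "piped_water": 1},
    "occupants": 5,
    "bedrooms": 3,
    "birds_owned_2016": 10,
    "birds_died_since_2016": 2,
}

# Household asset index: sum of items owned (higher = more affluent).
hai = sum(household["assets"].values())

# Household occupancy index: people per bedroom.
occupancy = household["occupants"] / household["bedrooms"]

# Estimated losses: proportion of birds owned in 2016 that died.
loss_rate = household["birds_died_since_2016"] / household["birds_owned_2016"]

print(hai, round(occupancy, 2), loss_rate)  # 3 1.67 0.2
```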
We defined cage-birds as we did in Marshall et al. (2020a): birds (both native to Indonesia and exotic) kept, bought or sold as pets or used in singing contests, including passerines (Passeriformes), pigeons and doves (Columbiformes), owls (Strigiformes), woodpeckers (Piciformes) and cuckoos (Cuculiformes). When birds owned by respondents were actually seen by interviewers (>80% of survey events), they were, in the majority of cases, identified to species level. When birds were not seen, or the interviewer could not recognize them, identification was based on respondent use of market names for the birds, and almost always resulted in their being assigned only to genus level. For example, several species of leafbird Chloropsis spp. have one common market name, as do white-eyes Zosterops spp. Taxonomy follows del Hoyo and Collar (2014, 2016).
| Analysis
We profiled the three user-groups based on bird-keeping habits, focusing on the differences in prevalence of behaviours and preferences; where appropriate, differences were tested across groups using Kruskal-Wallis and chi-squared tests. We fitted binary logistic mixed effects regression models (GLMMs) to identify those socio-economic attributes associated with (a) ownership/non-ownership of cage-birds and (b) user-group membership versus non-membership among bird-keepers (explored in three separate models). We excluded responses from households where the principal bird-keepers were not present, except for the initial analysis concerning presence or absence of cage-birds within a household. In all models, community was included as a random factor to account for pseudoreplication across the 92 communities. We used model selection and averaging based on the Akaike information criterion (AIC), creating global models with all potential predictors (Table S1); prior to inclusion, continuous variables were standardized and checked for collinearity, and predictors with high variance inflation factors (>1.9) were excluded. The top models were defined as those within ΔAICc < 2 of the model with the lowest AIC value (Grueber, Nakagawa, Laws, & Jamieson, 2011). If no model proved better (i.e. Akaike weight < 0.6) from a top set of candidate models, model-averaging was performed, calculating full (zero) method-averaged parameter estimates and using measures of relative variable importance to determine the strength of a predictor's association with the response variable.

Random forests, a nonparametric decision-tree-based technique that uses bootstrapped subsets of training data to generate an ensemble of models that are then aggregated into a final model (Breiman, 2001), were used to identify characteristics of user-group membership based on numbers of bird species and individuals and on the composition of taxa owned by households in 2018. We used repeated 10-fold cross-validation over a tuning grid of potential values to parameterize the model (i.e. the number of variable splits and trees generated) to achieve the highest predictive accuracy (Kuhn, 2008). The statistical and random forest analyses were carried out using the MuMIn (v1.15.6, Bartoń, 2018), lme4 (Bates, Machler, Bolker, & Walker, 2015), randomForest (Liaw & Wiener, 2002) and caret (v6.0-84, Kuhn, 2008) packages in R.
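The analyses themselves were run in R with the packages listed above; purely as an illustration of the cross-validated tuning of the random-forest step, a minimal Python/scikit-learn sketch on synthetic data (the feature layout and grid values are placeholders, not the authors' settings) might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

# Synthetic stand-in for per-household counts of birds per taxon;
# y holds the self-reported user-group labels.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(300, 12))
y = rng.choice(["hobbyist", "contestant", "breeder"], size=300)

# Repeated 10-fold cross-validation over a tuning grid, mirroring the
# parameterization strategy described in the text.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=2, random_state=0)
grid = {"max_features": [2, 4, 6],      # variables tried at each split
        "n_estimators": [100, 300]}     # number of trees in the ensemble
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      grid, cv=cv, scoring="accuracy")
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 2))
```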
| Ethics statement
Research teams gained permission from, and agreed to stipulations set by, the heads of neighbourhood and relevant administrative authorities prior to data collection. Interviewers always received prior informed consent from respondents. Name of interviewer and time and date of survey were recorded before interviews; all data were anonymized.
| Household demographic data
With an interview response rate of ~60% (Marshall et al., 2020a), we surveyed 3,040 households from all six provinces of Java.
Based on Java's reported 2010 census population of 36,720,166 households, the estimates of bird ownership we present have an associated ±1.68% margin of error at the 95% confidence level (Newing, 2010). A comparison of the demographic attributes of our sample and the 2010 census data is given in Table S2. Median age (lower quartile-upper quartile) of respondents was 42 (16-91). Most respondents had a high school education (60%), and the largest occupational category was manual labour (35%), yet a large minority were not in formal employment (29%; Table S1). The mean ± SD HAI score was 14.8 ± 4.8 (range = 0-34), and the median (lower quartile-upper quartile) number of people per bedroom was 1.7 (1-2). Of households surveyed, 957 (31%) kept birds in 2018; of the remaining 2,083 (69%), 1,603 (77%) had never kept birds, while 161 (8%) kept birds in 2016.
| Bird-keeping behaviours
Differences in numbers of birds owned, purchasing habits and time spent tending birds per day were most marked between hobbyists and the two other user-groups (contestants and breeders; Table 1). Hobbyists (57% of bird-keepers) tended to keep only small numbers of individuals and species but high proportions of wild-caught birds. Hobbyists were the most likely to receive birds as gifts, although trapping birds themselves or buying them directly from trappers or travelling salesmen was equally prevalent across all user-groups. Contestants and breeders shared many characteristics, but contestants tended to buy more expensive birds and spend more time tending their birds than breeders. Mortality of birds since 2016 was highest in the hobbyist group (proportion of birds that died was 0.22 for hobbyists vs. 0.13 in contestants and 0.15 in breeders), but the difference was not significant. While all user-groups owned threatened species, hobbyists owned a greater proportion of them than the others. Although there were only small differences in preferences concerning the song quality of wild-caught and captive-bred birds, hobbyists were the least likely to express a preference or to take origin into account when purchasing birds (Table 2).
| User-group classification
Our user-group classification had an overall accuracy of 84% (Table S3); movement of respondents between user-groups from 2016 to 2018 is summarized in Table S4. Overall, the biggest change between the 2 years was an increase in the proportions of hobbyists and contestants, both with relatively large recruitment from non-bird ownership in 2016.
| Socio-economic profiles
Our mixed effects models indicated the importance of seven demographic and geographic variables in characterizing cage-bird ownership and, subsequently, user-group membership (Figure 3; full model outputs in Table S5). Compared to those who owned no birds ('non-bird-keepers'), bird-keepers were more likely to live in urban communities and in the eastern provinces. They were also more likely to be employed, and to have attained a high school education, while non-bird-keepers were more likely to sit at either lower or higher levels of educational attainment. User-group membership was further distinguished by geography (Table S6), occupation (contestants were the most likely to be employed in business) and demography (hobbyists tended to be older than both breeders and contestants, who were the youngest user-group; Figure 3).

TABLE 1 Characteristics and preferences of the three songbird-keeping user-groups (respondents self-reported membership of these groups). n varies according to numbers of disregarded responses for various questions, the lower number of people keeping birds in 2016 and reluctance to answer. n was particularly low for losses of birds: hobbyists n = 213, contestants n = 154 and breeders n = 103. Differences in numbers of birds owned and money and time spent on birds were tested using between-group post hoc differences from Kruskal-Wallis tests, the remainder with χ² tests (e.g. H < C indicates hobbyists showed a significantly lower response than contestants).
| DISCUSSION
The clearest and most significant threat to wild bird populations from bird-keeping is the consumption behaviour of Java's most abundant user-group, hobbyists, which may represent up to seven million households (Marshall et al., 2020a). The high volume of birds owned by this group, including the largest proportion of potentially wild-caught and threatened birds, is acquired primarily through convenience and availability, with little importance placed on origin or song quality (Burivalova et al., 2017). Furthermore, mortality of cage-birds was highest among hobbyists, and the sheer numbers of hobbyists keeping wild-caught birds across Java means that there is likely to be a huge throughflow of birds into the market. Conversely, the prevalence (Marshall et al., 2020a) and abundance of highly sought-after taxa (e.g. White-rumped Shama, Oriental Magpie-robin, leafbirds) kept by contestants suggests that an anthropogenic Allee effect (Courchamp et al., 2006) is at work, drawing some species into an extinction vortex through their ever-increasing rarity in the wild, market value and status-giving properties (Krishna et al., 2019). Although breeders show similar behaviours and preferences to contestants, they also favour profitable taxa (lovebirds, canaries Serinus spp., doves) that can be easily bred and resold at a much-elevated price. Indeed, the capacity for contestants and especially breeders to produce their own birds may offer a counter to trapping pressures on wild populations (Nijman, Langgeng, Birot, Imron, & Nekaris, 2018). Nevertheless, an unknown but potentially significant proportion of birds held by bird-keepers in Java may come from low-intensity recreational trapping in the wild. Moreover, the large numbers of birds kept, the predictably high mortality of wild-caught birds during capture, transportation and marketing (Indraswari et al., 2020) and the low survival of many sensitive species in captivity combine to suggest that the drain on wild populations is likely to be high.

FIGURE 3 Effect sizes (with 95% CIs) of the (a) geographic, (b) occupational and (c) demographic predictor variables with the highest relative variable importance (>0.6) across models predicting bird ownership (against non-bird ownership) and user-group membership (against other bird-keepers)
| Informing evidence-based behaviour change
Our study sought to profile songbird-keeping user-groups by characterizing and identifying the behaviours that should underpin conservation efforts to increase the sustainability of birdkeeping. In combination with previous studies, we are closer to understanding the temporal dynamics of demand for songbirds and the implications these pose for future conservation efforts (Jepson & Ladle, 2009;Marshall et al., 2020a). Bird-keeping has increased in prevalence in urban centres in Java, and the abundance of captive-bred exotic birds, such as lovebirds and canaries, has grown dramatically (Marshall et al., 2020a). Tracking changes in behaviours, and in particular those that have the largest impact on wildlife populations, is vital to determining the success of conservation interventions (Veríssimo & Wan, 2018).
This study contributes to the body of evidence on Indonesian songbird-keeping practices by expanding the detail of how user-groups differentially affect bird populations, establishing a baseline against which interventions aimed at reducing the impact on wild birds can be measured (Reddy et al., 2017). Previous efforts to increase the availability and popularity of captive-bred alternatives (Jepson & Ladle, 2009) have unfortunately been neutralized by a large increase in the prevalence of often wild-caught native birds (Marshall et al., 2020a). Future efforts should focus on the 'demarketing' (Veríssimo, Vieira, Monteiro, Hancock, & Nuno, 2020) of wild-caught birds in addition to redirecting demand (Moorhouse, Coals, D'Cruze, & Macdonald, 2020) towards captive-bred birds among all user-groups, but hobbyists in particular. Given that effective behaviour change usually requires considerable time (Greenfield & Veríssimo, 2019), movement between user-groups even over a very short (2-year) period could reduce the chances of targeted interventions having a lasting effect on their behaviours and preferences. On the other hand, this dynamism may reflect a responsiveness and flexibility among the population towards adopting more sustainable bird-keeping behaviours. Demand reduction campaigns certainly need to operate on this latter assumption.
A key intervention to reduce demand for wildlife products is the dissemination of information and targeting of campaigns (Veríssimo, Challender, & Nijman, 2012). The bird-keeping community in Java could represent as many as 12 million households (Marshall et al., 2020a). By breaking down this vast audience into user-groups the possibility arises of tailoring and targeting messages for their maximum impact. Interestingly, bird-keepers tended to have moderate levels of education, with our result suggesting that there may be at least two separate non-bird-keeping groups based on educational attainment, those who have not achieved a high school education and those who have achieved higher levels of education. Slightly more affluent, hobbyist bird-keepers are typically middle-aged and from the western provinces, so increasing the importance placed on the origin of birds, as well as on the quality and longevity of captive-bred individuals (Burivalova et al., 2017), may help stem the large inflow of wild-caught birds into hobbyist households. Aspects of bird-keeping have moved away from traditional practices (Jepson & Ladle, 2009) as evidenced by the younger, urban profile of contestants which, as a key consumer demographic in driving national business, suggests competitive bird-keeping will remain an important aspect of the Indonesian economy (Naafs, 2018). Consequently, the choice and source of taxa for competitive bird-keeping among Java's young urban men must be key targets in any campaign to achieve sustainability in the bird trade. Breeders, however, appeared to be the least likely to stop bird-keeping in the short term, more often becoming contestants and less often hobbyists. It may be that, as the most invested group, breeders frequently change the species they keep, both influencing and reacting to market trends; if so, they may be receptive to conservation programmes promoting the captive-breeding of threatened species.
The greater financial and temporal investments made by contestants and breeders in their birds, which acquire both status-earning and resale value, may help explain why bird origin was more important for them than for hobbyists. There is huge potential profit and status in breeding and training birds, and initiatives could stress the value to be placed on origin (equivalent to 'pedigree'). Contestants and breeders both stressed the importance of sourcing birds from particular locations and of promoting a strong cultural attachment to place (Kristianto & Jepson, 2011). Nevertheless, successful conservation marketing campaigns and environmental education can shift social norms and increase compliance with local legislation (Salazar, Mills, & Veríssimo, 2019; Veríssimo & Wan, 2018). In view of the importance placed on community responsibility and legislation (Kristianto & Jepson, 2011), conservationists could borrow from such approaches to highlight the social undesirability, illegality and risks associated with the laundering or trapping of birds.
| Limitations and caveats
We sought to obtain as representative a sample as possible of households across urban and rural districts from all six provinces of Java by combining a stratified sampling approach to district selection (Marshall et al., 2020a) with the systematic sampling of households within selected districts. When comparing the demographic profile of our study sample with available data from the 2010 Indonesian Census (Badan Pusat Statistik, 2010) for Java as a whole, there are some differences in a number of attributes (see Table S2 in Appendix B). Overall, our sample under-represented those aged 15-24 (14% less than the census), those who have achieved a degree or higher educational attainment (17% less) and those who live in smaller households (21% less), and over-represented those who have achieved high school education (15% more; Table S2). These differences suggest our approach had some of the limitations of previous research (Jepson & Ladle, 2009). For example, there are difficulties in obtaining access and research permissions from certain gated communities that typically occur in more affluent urban areas. The potential bias the omission of such communities creates may be accentuated by their importance in driving trends in the consumption of rarer highly prized species among portions of the bird-keeping community (Jepson, 2016). Future work should address this issue, potentially using online survey techniques to reach such 'high end' consumers (Baltar & Brunet, 2012;Bornstein, Jager, & Putnick, 2013).
| Conclusions
Although conservationists may justly view bird-keeping as inherently detrimental to wild bird populations (Sykes, 2017), within Indonesia the trade in birds is seen as far too economically important and culturally ingrained to be halted completely (Jepson, 2016). Moreover, despite the accumulating evidence of rolling local and even global extinctions, the long tradition of breeding native species (such as Zebra Dove) means that commercial breeding is repeatedly identified as a viable solution to the extraction of wild birds (Nijman et al., 2018). Further research is required to define audiences more precisely, explore the attitudes and perceptions of bird-keepers and frame content aimed at changing specific behaviours (Kidd et al., 2019), but our current breakdown into three user-groups offers an opportunity to begin programmes targeting each group.
ACKNOWLEDGEMENTS
We greatly appreciate the generosity of all the respondents who agreed to be interviewed for this research, including those who participated in the pilot study, and we thank V. de Liedekerke, as well as Universitas Atma Jaya Yogyakarta as the named partner institution. We thank all the students and graduates who assisted with data collection, and also all the local government employees who granted permission to carry out research across the diverse communities of Java. Thanks are also owed to Paul Jepson and two anonymous reviewers for their helpful suggestions and comments that greatly improved the paper.
This research was funded by Chester Zoo, Manchester Metropolitan University and the Oriental Bird Club (OBC). Icons used for Figure 4 were originally made by Freepik, Good Ware and DinosoftLabs from www.flaticon.com.
CONFLICT OF INTEREST
Nothing to declare.
FIGURE 4 Profiles for each user-group based on key behaviours and preferences, demography and dynamism, and the potential issues and solutions to reduce the pressure their behaviours place on wild bird populations

AUTHORS' CONTRIBUTIONS

All authors contributed critically to the drafts and gave final approval for publication.
DATA AVAILABILITY STATEMENT
Due to the personal nature of the demographic information collected for this study, fully anonymized data are available from the authors on request.
Optimization of a Power Line Communication System to Manage Electric Vehicle Charging Stations in a Smart Grid
In this paper, a procedure is proposed to design a power line communication (PLC) system to perform the digital transmission in a distributed energy storage system consisting of fleets of electric cars. PLC uses existing power cables or wires as data communication multicarrier channels. For each vehicle, the information to be transmitted can be, for example: the models of the batteries, the level of the charge state, and the schedule of charging/discharging. Orthogonal frequency division multiplexing (OFDM) modulation is used for the bit loading, whose parameters are optimized to find the best compromise between the conflicting communication objectives of minimizing the signal power, maximizing the bit rate, and minimizing the bit error rate. The off-line design is modeled as a multi-objective optimization problem, whose solution supplies a set of Pareto optimal solutions. At the same time, as many charging stations share part of the transmission line, the optimization problem also includes the assignment of the sub-carriers to the single charging stations. Each connection between the control node and a charging station has its own frequency response and is affected by a noise spectrum. In this paper, a procedure is presented, called Chimera, which allows one to solve the multi-objective optimization problem with respect to a unique frequency response, representing the whole set of lines connecting each charging station with the central node. Among the provided Pareto solutions, the designer will make the final decision based on the control system requirements and/or the hardware constraints.
Introduction
Today, traditional power grids are coupled with communication networks, leading to the so-called smart grids. A smart grid enables information flows among the various components of the grid, ranging from power plants to distributed energy resources, and from local utilities to residential and commercial customers. The purpose is to better monitor and control power generation and consumption. In smart grids, renewable sources, such as wind and solar energy, vary with weather and daylight conditions, so that an energy storage system is required to accumulate spare energy and to feed it back into the grid when required.
In smart grids, electric vehicles have an impact on energy storage through vehicle-to-grid (V2G) technologies [1], in which the electric vehicles (and even hybrids) can be seen as a distributed network of batteries that can store power at off-peak times and help power on the grid when demand peaks. Hence, V2G is useful to provide energy when demand shifts and to reduce electricity costs, to supply energy to energy markets, and to increase the use of localized renewables.
Each vehicle must satisfy some requirements:
1. A connection to the grid for electrical energy flow;
2. A control or logical connection necessary for communication with the grid operator;
3. Controls and metering onboard the vehicle; and
4. An agreement between the owner of the battery and the grid operator that electricity can be put into or drawn from the battery.
Thus, it is important to have an automated and standardized exchange of information between the vehicles and the grid. In this regard, different protocols for the communication are used [2,3]: ISO/IEC 15118 concerns the communication between an electric vehicle and the charging spot whereas the IEC 61850 is related to the communication between the charging spot and the energy provider ( Figure 1). A heterogeneous set of network technologies can support the smart grid communications ranging from wireless to wired solutions. Among the latter, PLCs have been deployed outdoor for last mile communications and indoor for home area networks [4]. A wide range of PLC technologies are available for different applications. Ultra-narrowband, operating at a very low data rate (100 bps) in the low-frequency band, is used in particular for load control. Narrowband PLC (NB PLC), operating in the 3-500 kHz band to deliver a few hundred kbps, has been used for last mile communications over MV and LV lines. In order to transmit data over a narrowband PLC network, different specifications are used. The most used standards are PRIME, developed by the PRIME Alliance [5], and G3, powered by Maxim company [6]. Both the standards provide source coding techniques to correct as many errors as possible at the receiver side due to the severe channel disturbances. In the range 1.8-86 MHz there is the broadband PLC (BB PLC), which is mainly used for home area networks and can provide several hundred Mbps. The typical examples of broadband PLCs conform to the standards IEEE 1901 [7], HomePlug [8], and ITU-T G.hn [9].
Both the NB-PLC and BB-PLC could be used in the smart grid applications, both in low voltage (LV) and medium voltage (MV) networks, with pros and cons [10,11]. NB-PLC are suitable for smart grid applications where a low-data rate is required, whereas BB-PLC solutions offer higher flexibility and a better trade-off between data rate, latency, robustness and energy efficiency [11]. In [12,13], the authors clarify the role of PLC technology for smart grid applications.
In this paper, to have the maximum flexibility, the BB-PLC technology is used. Specifically, in the V2G application, the most competitive advantage of using PLC is that no installation of an additional communication network is required, since the charging stations must in any case be reached by the existing power grid to be fed. Moreover, the use of dedicated modulation techniques, such as orthogonal frequency division multiplexing (OFDM), allows for broadband transmission; hence, PLC is competitive in more than just economic terms. Some disadvantages have to be taken into consideration: power lines do not necessarily provide a reliable transmission medium, due to the presence of different connected elements; the attenuation of the signal can be a problem; and the electrical noise on the line limits the speed of the data transmission.
Designing the PLC system for the V2G application requires a multi-objective optimization in which some conflicting objectives must be considered: the communication capacity, the total transmission power, and the communication error probability. The multi-objective optimization process provides a set of Pareto optimal solutions, among which the designer makes the final choice.
The optimization problem must be formalized depending on the modulation technique. Usually, to better exploit the available transmission band, the OFDM [8] scheme is assumed, which consists of splitting the transmission band into mutually orthogonal sub-carriers. Each one is used as a sub-channel in which the signal can be considered as if it were the only one in the channel. Every sub-carrier is loaded with a bit rate (BR) depending on its specific signal-to-noise ratio (SNR). OFDM is a modulation scheme commonly adopted in several application domains, such as mobile phones, digital TV and radio, and xDSL, each having specific features and requirements. In particular, in the application presented in this paper, different transmitters share the same channel; therefore, each sub-carrier must be assigned exclusively to one transmitter.
The optimization of the modulation scheme works by allocating the bits to the sub-carriers, which is referred to as the bit-loading problem [8], and which depends on the properties of the channel and the requirements of the transmission. In the present paper, the optimal design of the modulation system in PLCs is formalized as a multi-objective optimization problem with the following three conflicting objectives: the BR, to be maximized; and the bit error rate (BER) and the interference on adjacent equipment, to be minimized [16-18]. The interference mostly depends on the power of the modulated signal and on its evolution in time.
The problem faced has its own specificity, since the transmissions of the charging stations travel through common branches of the grid. Consequently, they must share the same frequency band. However, each channel has its own frequency response and power spectral density of noise. Hence, the optimal solution of the design problem should consider all the channels simultaneously. Other applicative domains have the same specificity, for example, cellular phones or digital audio broadcasting (DAB) networks.
In the proposed method, the OFDM symbols to be sent by all the transmitters are merged into one, which simultaneously travels in all the channels. A fictitious frequency response, called the Chimera, is created, which allows the multi-channel optimization of the OFDM modulation scheme to be performed as if all the transmissions were sent through a unique channel (named here the Chimera channel). To define the Chimera frequency response, an integer quadratic optimization problem is formalized, whose aim is to assign band resources to the charging stations connected to the same feeder, depending on the number of supplied plugs, while at the same time maximizing the overall capacity of the transmission.
The rest of the paper is organized as follows. In Section 2, the problem at hand is outlined. In Section 3, the proposed Chimera algorithm is described. Section 4 reports the formalization of the multi-objective problem aiming to off-line design the PLC system. Results of the optimization procedure on a case study are reported in Section 5. Finally, in Section 6 some conclusions are given.
V2G Communication System
The present paper aims to optimally design a PLC system that allows to manage a distributed energy storage system integrated with a smart grid. The storage is composed of a number of electric vehicle batteries connected to prefixed charging stations. It is assumed that one modem is installed at the interface between the charging station and the smart grid. Therefore, the lines connecting the charging station to the plugs are not considered in the design problem.
To prove the validity of the method, a simple feeder network is considered, which is part of a wider network already considered in [19]; this does not invalidate the generality of the results, as the method is applied separately to each single feeder. Moreover, frequency responses and noise spectra are obtained from simulations rather than measurements, because the method can be demonstrated regardless of the actual data. Network Simulator 3 (NS-3) [20], which is a free and open-source discrete-event network simulator, has been used. Figure 2 shows the topology of the case study, with a number of low voltage nodes, 3 kW loads, and four charging stations (checkered squares in Figure 2) supplied by a medium voltage/low voltage substation with a 20 kV/400 V, 1 MVA transformer. All the lines are three-phase underground commercial cables, whose datasheets are included in the database of the simulator. A phase-neutral cable, among all those available, has been randomly chosen to be the PLC channel. The lengths of the lines are reported in Figure 2. Each charging station supplies a variable number of charging plugs and is connected to the control center through the smart grid. As each charging station is connected to the control center by a different path, each one is associated with a different frequency response.

As is known, PLC channels are not time invariant, due to load connection/disconnection [21]. For this reason, different scenarios have been simulated, corresponding to different configurations of loads and active charging plugs, and the lower envelope of the frequency responses has been assumed. In Figure 3, the power spectral densities (PSDs) of the signal as a function of the frequency at the nodes corresponding to the charging stations are shown. The frequency responses have been determined by feeding the channel with white Gaussian noise and measuring the spectrum of the signal at the receiver side.

As a simplifying assumption, only the transmission from the charging station to the control center has been considered, without losing generality. Nonetheless, for each channel, two frequency responses for bi-directional transmission should be assessed, and a separate sub-band should be allocated to each. Thus, transmissions would not need to be synchronized. Since the attenuation affects the performance of the PLC network, increasing with frequency and distance, longer LV lines must use frequencies in lower bands to guarantee a minimum performance.
Each node is connected to the control node through a power line, whose frequency response depends on the cable-laying and the length. For this reason, the optimal bit-loading should consider all the different frequency responses and all the possible states of the network. In order to reduce the design optimization complexity, a Chimera frequency response is built starting from the frequency responses of the nodes (the charging stations) connected to the considered feeder.
Chimera Algorithm
One of the most significant shortcomings of PLC is the frequency response of the transmission channel, as it has an irregular gain diagram, with deep notches, and an irregular phase diagram. OFDM technology allows mitigation of such shortcomings [22].
As the charging stations served by one feeder share the same physical channel that connects them to the control node, each frequency band has to be assigned exclusively to one charging station. The optimization of the OFDM modulation should take into account the different frequency responses of all the channels. In this paper, an algorithm, called Chimera, has been proposed to optimize the allocation of sub-carriers to the nodes served by the same feeder. The output of the algorithm is a single frequency response (Chimera), obtained by combining the entire set of frequency responses of the charging stations. At the same time, the set of transmissions of all nodes is merged into a single bit stream, which is assumed to travel through the Chimera channel.
The bit stream is first subdivided into frames of assigned numbers of bits, which have to be modulated. The frame, in turn, is subdivided into as many strings of bits (words) as the number of sub-carriers, and the length of each word is assigned according to the capacity of the sub-carrier where it will be loaded.
Depending on the single capacity, a constellation, defined in the complex plane, is associated with each sub-carrier. Each point of the constellation is associated with a string of bits, so that each possible word can be transmitted by sending the coordinates of the corresponding point. Therefore, the length of each word, i.e., the number of bits, is equal to the base-2 logarithm of the number of points in the constellation. Having a constellation with many points favors the bit rate but, at the same time, reduces the margin among nearby points, which degrades the BER of the received data. In this paper, only a limited number of design parameters are considered, such as the transmission band and its subdivision into sub-carriers, the allocation of bits to each sub-carrier (bit loading), and the maximum power of transmission. Further design parameters, such as zero-padding, cyclic prefix, time guard, and so on [22,23], have been accounted for here by assuming a proper margin for the duration of the single OFDM symbol.
Let N be the number of nodes of the feeder that share the same transmission band, and K the number of sub-carriers into which the transmission band is subdivided. Each node is associated with a specific frequency response (see Figure 3). It is assumed that each node supplies V_n plugs, whose number can be different for different nodes. Moreover, it is assumed that the nodes communicate only with the central node; therefore, a number of frequency responses equal to the number of nodes has to be taken into account. The same subdivision into sub-bands is applied to all those frequency responses.
The capacity C_{nk} of the k-th sub-carrier of the n-th channel is given by the Shannon-Hartley formula [24], which depends on the bandwidth B and the signal-to-noise ratio (SNR):

C_{nk} = B \log_2 (1 + SNR_{nk})    (1)

The value in Equation (1) is the theoretical upper bound of the BR in the sub-carrier, but a lower value of BR is assumed in order to maintain the BER within an acceptable threshold.
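As a minimal sketch of how Equation (1) can be tabulated per sub-carrier (not the paper's implementation; the array names, units, and the use of flat PSDs in dBm/Hz are illustrative assumptions):

```python
import numpy as np

def subcarrier_capacities(gain_db, tx_psd_dbm_hz, noise_psd_dbm_hz, b_hz=100e3):
    """C_nk = B * log2(1 + SNR_nk), Equation (1), evaluated per sub-carrier.

    gain_db          : per-sub-carrier channel gain |H(f_k)|^2 in dB (negative = attenuation)
    tx_psd_dbm_hz    : transmit PSD in dBm/Hz (assumed flat across the band)
    noise_psd_dbm_hz : per-sub-carrier noise PSD in dBm/Hz
    b_hz             : sub-carrier bandwidth (100 kHz in the case study)
    """
    snr_db = tx_psd_dbm_hz + gain_db - noise_psd_dbm_hz  # bandwidth factors cancel in the ratio
    return b_hz * np.log2(1.0 + 10.0 ** (snr_db / 10.0))  # bit/s per sub-carrier

# Example: 70 sub-carriers with 40-80 dB attenuation and -110 dBm/Hz noise
caps = subcarrier_capacities(-np.linspace(40, 80, 70), -50.0, -110.0)
```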
The bandwidth is common to all the sub-carriers of all the frequency responses; therefore, from here on, a normalized value of B is considered. The total capacity of the n-th channel, corresponding to a specific node, is given by the sum of the capacities of all sub-carriers:

C_n = \sum_{k=1}^{K} C_{nk}    (2)

This expression can be calculated for each frequency response, and it assumes that all the sub-carriers are assigned to each node. An incidence matrix M of size N×K is used to represent the assignment of sub-carriers to the single nodes. The columns of M correspond to the sub-carriers, whereas the rows correspond to the channels. In each row, each 1 identifies a sub-carrier assigned to the corresponding node. Each column can have at most one element equal to 1, the others being equal to 0. The columns with all zeros correspond to unassigned sub-carriers. By using matrix M, the capacity C̃_n assigned to each node can be expressed as:

\tilde{C}_n = \sum_{k=1}^{K} m_{nk} C_{nk}    (3)

where m_{nk} is the element of M corresponding to the frequency response n and the sub-carrier k. The aim of the procedure is to assign to each charging station a capacity proportional to the number of plugs it feeds. This rule can be expressed by the following statement:

\tilde{C}_n / V_n = \tilde{C}_m / V_m, \quad \forall n, m = 1, \ldots, N    (4)

The allocation problem consists in defining the matrix M. As the unknowns are binary, the previous series of equations does not, in general, have an exact solution; therefore, the following objective function, which has to be minimized, is defined:

J = \sum_{n=1}^{N} \left( \tilde{C}_n / V_n - \tilde{C}_s / V_s \right)^2    (5)

where s identifies the channel with the lowest total capacity:

s = \arg\min_{n} \tilde{C}_n    (6)

The quadratic function (5) yields the most homogeneous distribution of capacities, but it does not guarantee that the total capacity is maximized. For this reason, a further term is added to the objective function, which weighs the capacity assigned to the channel with the lowest total capacity, obtaining:

J = \sum_{n=1}^{N} \left( \tilde{C}_n / V_n - \tilde{C}_s / V_s \right)^2 - \alpha \tilde{C}_s    (7)

where α is the weight coefficient; a small value favors the homogeneity among the channels, while a large value tends to increase the capacities assigned to all the channels. Hence, the best α is the one that maximizes the lowest allocated capacity. In order to avoid both the trivial solution of a null matrix M and the assignment of the same sub-carrier to different channels, the following constraint has to be stated:

\sum_{n=1}^{N} m_{nk} = 1 \quad \forall k = 1, \ldots, K    (8)

In case one sub-carrier is not suitable for the assignment, the corresponding column of the matrix M has to be removed.
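A compact way to evaluate the objective (7) under constraint (8), assuming the reconstruction given above, is to encode each column of M as the index of the node that owns the corresponding sub-carrier, which satisfies Equation (8) by construction. A hedged Python sketch with illustrative names (numpy arrays assumed), not the paper's code:

```python
import numpy as np

def objective(assign, C, plugs, alpha=1.0):
    """J = sum_n (Ct_n/V_n - Ct_s/V_s)^2 - alpha*Ct_s, with s = argmin_n Ct_n.

    assign : (K,) int array, assign[k] = index of the node owning sub-carrier k
    C      : (N, K) array of capacities C_nk from Equation (1)
    plugs  : (N,) number of plugs V_n per charging station
    """
    N, K = C.shape
    Ct = np.array([C[n, assign == n].sum() for n in range(N)])  # Ct_n = sum_k m_nk C_nk
    s = int(np.argmin(Ct))                                      # weakest channel, Equation (6)
    per_plug = Ct / plugs
    return np.sum((per_plug - per_plug[s]) ** 2) - alpha * Ct[s]
```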
Summarizing, the allocation problem can be defined as the following quadratic, binary minimization problem:

\min J \quad \text{s.t.} \quad \sum_{n=1}^{N} m_{nk} = 1 \;\; \forall k = 1, \ldots, K, \qquad m_{nk} \in \{0, 1\}    (9)
Multi-Objective Optimization
The demand of the transmission is to maximize the Bit Rate (BR) (bit/s), which is equal to the number of bits per frame multiplied by the number of OFDM symbols per second. Two other conflicting objectives must be minimized at once. The first is the total power of the signal P_tot, which is the sum of the powers of all the sub-carriers. The second conflicting objective is the bit error rate (BER), which depends on the noise spectrum: a probability density function, centered on each point of the constellation, can be associated with it, and the BER for a given point will be equal to the probability that the distance between a received point and its reference position is greater than half the distance between two adjacent points. In this paper, a Gaussian distribution is assumed, so that there is a direct relationship between the BER and the least distance among the points of the constellation. Different assumptions on the statistical distribution of points around the reference position do not change the application of the proposed method.
Note that, by setting the maximum transmission power of each constellation (P_MAX), its area is univocally assigned (the area of the outer circle in Figure 4). Then, by setting the minimum distance between the points of the constellation (i.e., the BER), its number of points, and hence the BR, is also determined. Figure 4 refers to the 16-APSK scheme for a generic sub-carrier [25]. In the same figure, the shaded circles centered on the points of the constellation have a radius equal to 3 times the standard deviation σ of the Gaussian distribution. The distance between two adjacent points is greater than 6σ.
The multi-objective optimization of the bit loading consists in finding the solutions that reconcile the conflicting demands. In particular, a solution is said nondominated, or Pareto optimal, if none of the objective functions can be improved in value without worsening some of the others. The set of such solutions is called Pareto Front [14] and, in the problem at hand, it represents a discrete subspace of the surface of the feasible solutions. In fact, as is a discrete variable, except for a discrete set of The requirements on the BER are particularly stringent, therefore, the APSK constellations [25] are adopted, which are increasingly considered in 5G mobile communications; for a given power, APSK gives the maximum distance between adjacent points. Without loss of generality, different constellation schemes can be assumed because the inter-dependence among power, BR and BER holds valid.
The water-filling (WF) algorithm [26,27] is used to allocate the bits to the sub-carriers. Note that, WF can be used either to design the constellations or to dynamically allocate the bits to the sub-carriers. In this work, it has been used for the former purpose.
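A standard bisection implementation of the water-filling allocation mentioned above; the noise-to-gain levels and the power budget are illustrative inputs, and this is a generic textbook version rather than the authors' routine:

```python
import numpy as np

def water_filling(inv_gain, p_total, tol=1e-9):
    """Allocate p_total over sub-carriers: p_k = max(mu - inv_gain_k, 0).

    inv_gain : numpy array of noise-to-gain ratios per sub-carrier (the 'floor' heights)
    p_total  : total transmit power budget
    Returns the per-sub-carrier powers p_k.
    """
    lo, hi = inv_gain.min(), inv_gain.max() + p_total  # bracket the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gain, 0.0).sum() > p_total:
            hi = mu   # too much water: lower the level
        else:
            lo = mu   # budget not exhausted: raise the level
    return np.maximum(lo - inv_gain, 0.0)
```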
The multi-objective optimization of the bit loading consists in finding the solutions that reconcile the conflicting demands. In particular, a solution is said to be non-dominated, or Pareto optimal, if none of the objective functions can be improved in value without worsening some of the others. The set of such solutions is called the Pareto front [14] and, in the problem at hand, it represents a discrete subspace of the surface of the feasible solutions. In fact, as BR is a discrete variable, except for a discrete set of points in the (P_tot, BER) plane, the increase of one or both of them does not correspond to an increase of BR.
The multi-objective optimization aims to find this set of Pareto solutions and then to present them to the designer who will choose among them [14,15]. In the problem at hand, the surface BR = f (P tot , BER) has been densely sampled, and the set of non-dominated points has been selected, which is assumed as the Pareto front.
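The non-dominated filtering of the sampled surface follows directly from the dominance definition given above; a simple quadratic-time sketch (the sample triples are illustrative):

```python
def pareto_front(points):
    """points: list of (p_tot, ber, br) samples; minimize p_tot and ber, maximize br."""
    front = []
    for i, (p, e, r) in enumerate(points):
        dominated = any(
            (p2 <= p and e2 <= e and r2 >= r) and (p2 < p or e2 < e or r2 > r)
            for j, (p2, e2, r2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((p, e, r))
    return front
```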
Results
The optimization problem in (9) is solved with a genetic algorithm implemented in the Matlab environment [28]. A transmission band of 1-8 MHz is assumed, because the SNR is unfavorable outside this interval. A width of 100 kHz has been assigned to each sub-carrier, so that 70 sub-carriers are available. In practice, the number of actually available sub-carriers could be much lower, because of the great attenuation of the power line and the presence of noise. In this work, for each channel, the power spectrum of the noise has been obtained with the same NS-3 simulator, and it is in good agreement with experimental results retrieved from the literature [21,29].
The number of variables to be optimized is equal to 280 (70 sub-carriers for four channels), and the initial population has been generated using a uniform random number generator in the range {0;1}. After 28,000 iterations, the algorithm ends, producing the matrix M. This allows us to obtain a single frequency response for all the charging stations by multiplying M with their frequency responses. The values of the genetic algorithm parameters are reported in Table 1. Figure 5 shows the obtained Chimera frequency response assumed to represent all the charging stations in Figure 2. The weight coefficient α has been set equal to 1.
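The paper's Matlab GA (Table 1) is not reproduced here; the sketch below only illustrates, under stated assumptions, a minimal evolutionary loop over the integer encoding and the final assembly of the Chimera response by picking, for each sub-carrier, the frequency response of its assigned node. It reuses the objective function sketched earlier; the population size, mutation rate, and number of generations are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(C, plugs, alpha=1.0, pop=60, gens=2000, mut=0.05):
    """Toy (mu + lambda)-style GA over assignment vectors, minimizing 'objective'."""
    N, K = C.shape
    population = [rng.integers(0, N, K) for _ in range(pop)]
    for _ in range(gens):
        children = []
        for parent in population:
            child = parent.copy()
            flips = rng.random(K) < mut                    # reassign a few sub-carriers
            child[flips] = rng.integers(0, N, flips.sum())
            children.append(child)
        population = sorted(population + children,
                            key=lambda a: objective(a, C, plugs, alpha))[:pop]
    return population[0]

def chimera_response(assign, responses):
    """responses: (N, K) per-node frequency responses; take the owner's value per sub-carrier."""
    return responses[assign, np.arange(assign.size)]
```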
To perform the offline optimization of BR, BER and P_tot, the function BR = f(P_tot, BER) is sampled on a regular 100 × 100 grid, within the interval 1-10 W for P_tot and 10^−5-10^−3 for BER. The upper envelope of the noise PSDs of the four channels has been assumed as the unique noise spectrum, as shown in Figure 6. Note that, given the Chimera frequency response and the noise, by applying the water-filling algorithm, the power value for each sub-carrier is defined as a function of the total transmitting power P_tot. Hence, given the noise, the standard deviation σ is known, and the minimal inter-distance between the constellation points is determined. In this way, the number of points of each constellation is calculated, giving the number of bits associated to each point; the length of the bit frame is then obtained and, finally, the BR.
The sampled surface is shown in Figure 7. The non-dominated points are also represented by dots. The relationship between the three objectives allows us to obtain a BR ranging from 24.3 to 60 Mb/s.
Figure 8 gives an equivalent representation of the Pareto front in the plane (P_tot, BER). It is a contour plot of the front, and the points represent the Pareto-optimal solutions. As can be noted, in order to have a minimal BR, at least a power of 1 W is needed. Moreover, limiting the available power to about 7 W, a BR of 52.1 Mb/s can be achieved, with a BER of the order of 10^−5. If a higher BR (e.g., 60 Mb/s) must be achieved, the BER will strongly affect the transmission.
Figure 9 shows the Pareto-optimal solutions corresponding to the minimal BER = 10^−5. Depending on the available total transmitting power, the plot returns the available BR for each plug in kByte/s. As an example, with an available transmitting power of 5 W, the BR is equal to about 190 kByte/s for each of the 32 plugs in the four charging stations.
Finally, in Figure 10 the 70 APSK constellations corresponding to the Pareto point (BR = 190 kB/s, BER = 10^−5, P = 5 W) are shown. The colors refer to the different charging stations. As can be noted, the optimization procedure assigned different constellations to distinct sub-carriers.
Discussion
In a smart grid environment, new challenges are posed to the power system operators by electric vehicles. Several studies have been proposed in the literature presenting control and optimization strategies for managing the charging/discharging of the electric vehicles' batteries [30]. However, the communication requirements still remain an open issue. In fact, different charging and discharging management strategies can be adopted, ranging from a fully-centralized charge control decision, to distributed (or transactive) control, to price control, which has limited communication requirements [31]. In the first case, the decisions are taken at the system level; hence, it is generally accepted that it is the best control system in terms of, e.g., security of the power system. However, a more sophisticated communication infrastructure is needed that foresees a bidirectional communication flow, at the price of higher cost. Power line communication technology stands as a good candidate, provided that its design is optimized. The design procedure presented in this paper allows one to solve the bit-loading problem, finding a compromise among the conflicting objectives of minimal signal power and BER, while maximizing the BR. Among the Pareto-optimal solutions provided by the optimization process, the designer can make the definitive choice depending on the hardware and the communication requirements. Once this choice is made, the entire constellation system is designed.
Gauged Floreanini-Jackiw type chiral boson and its BRST quantization
The gauged model of the Siegel type chiral boson is considered. It has been shown that the action of the gauged model of the Floreanini-Jackiw (FJ) type chiral boson is contained in it in an interesting manner. A BRST invariant action corresponding to the action of the gauged FJ type chiral boson has been formulated using the Batalin, Fradkin and Vilkovisky based improved Fujiwara, Igarashi and Kubo (FIK) formalism. An alternative quantization of the gauge symmetric action has been made with a Lorentz gauge, and an attempt has been made to establish the equivalence between the gauge symmetric version in the extended phase space and the original gauge non-invariant version in the usual phase space.
I. INTRODUCTION
The self-dual field in (1 + 1) dimensions, which is also known as the chiral boson, is the basic ingredient of heterotic string theory [1][2][3][4]. This very chiral boson plays a crucial role in the study of the quantum Hall effect too [5,6]. Siegel initiated the study of the chiral boson in his seminal work [7]. Another description of the chiral boson came from the work of Srivastava [8].
In these two descriptions [7,8], the lagrangian of the chiral boson was constituted with a second order time derivative of the field. In the description of Siegel the chiral constraint was in a quadratic form, whereas in the description of Srivastava it was in a linear form. One more ingenious description of the chiral boson came from Floreanini and Jackiw [9]. In this description the lagrangian of the chiral boson was constituted with a first order time derivative of the field. In Ref. [10], we find an interesting description towards the quantization of that free FJ type chiral boson. In a very recent work [11], we find an application of the augmented superfield approach to derive the off-shell nilpotent and absolutely anti-commuting (anti-)BRST and (anti-)co-BRST symmetry transformations for the BRST invariant lagrangian density of a free chiral boson. Another recent important development towards the BFV quantization of the free chiral boson, along with a study of the Hodge decomposition theorem in the context of conserved charges, has come in [12]. The obvious generalization of the free chiral boson is to take into account its interaction with a gauge field, and this interacting field theoretical model is known as the gauged model of the chiral boson. The interacting theory of the chiral boson was first described by Bellucci, Golterman and Petcher [13] with a Siegel like kinetic term for the chiral boson. So, naturally, a theory of the interacting chiral boson with an FJ type kinetic term was called for once the free FJ type chiral boson became available in [9], and that demand was successfully met by Harada [14]. After the work of Harada [14], the interacting chiral boson based on the FJ type kinetic term attracted considerable attention [15][16][17][18][19][20], in spite of the fact that this theory of the interacting chiral boson was not derived from the interacting theory of the chiral boson as developed in [13]. Harada obtained it from the Jackiw-Rajaraman (JR) version of the chiral Schwinger model with an ingenious insertion of a chiral constraint into the phase space of that theory [21]. So there is a missing link between the two types of interacting gauged chiral boson. An attempt to search for a link is, therefore, a natural extension which we would like to explore. In fact, we want to show whether the gauged model of the FJ type chiral boson is contained within the gauged chiral boson of the Siegel type which is available in [13]. The study of this model may be beneficial from another point of view indeed, where anomaly is the central issue of investigation [14,17,[21][22][23][24][25]27], since it is known from Ref. [14] that the model took birth from the JR version of the chiral Schwinger model, and it is known that the chiral generation of the Schwinger model [28] due to Hagen [26] gets secured from the unitarity problem when the anomaly was taken into consideration in it by Jackiw and Rajaraman [21]. In this respect, the recent chiral generation of the Thirring-Wess model is worth mentioning [29,30]. So, once the issue of searching for the desired link gets settled, a natural extension that comes automatically to mind is to study the symmetry underlying the model and to perform its quantization. BRST quantization in this context scores over others.
The BRST formalism provides a natural framework for the covariant quantization of field theoretical models and is interesting in its own right, since it ensures unitarity and renormalizability of the theory [31][32][33]. Therefore, BRST quantization of the gauged chiral boson would certainly be of interest. So we apply the Batalin, Fradkin and Vilkovisky (BFV) [34][35][36][37] formalism in order to get a BRST invariant reformulation of the said model. In fact, we will use here the improved version due to FIK [38], since it helps to get the Wess-Zumino [39] term in a transparent way, which was found lacking in the work [20]. The Wess-Zumino term for the free chiral boson obtained in [20] agrees with the conventional Wess-Zumino term that can be inherited from [40]; however, the term which was claimed by the author as the Wess-Zumino term for the gauged model of the chiral boson fails to do so. Surprisingly, the final BRST invariant effective action for the gauged chiral boson presented in [20] shows on-shell BRST symmetry. So a natural question arises whether or not the FIK formalism fails to produce the appropriate Wess-Zumino term for the gauged model of the FJ type chiral boson, since it was found to be instrumental in getting the BRST invariant reformulation, with the appropriate Wess-Zumino term, of several physically sensible field theoretical models [41][42][43][44][45][46][47][48][49]. To explore the above fact, we are, in fact, driven towards a reinvestigation of the BRST invariant reformulation of the gauged model of the FJ type chiral boson.
The gauged model of the chiral boson with the Wess-Zumino term would be a gauge invariant theory in the extended phase space. So, if our attempt at BRST quantization takes a positive shape with the appearance of the appropriate Wess-Zumino term, a natural extension would be to proceed towards an alternative quantization of the gauge invariant part of the theory, and the next task would certainly be to show the equivalence between the physical content of the actual gauge non-invariant theory and the gauge invariant theory of the extended phase space, which we would also like to address within this work. Note that this type of investigation is not possible without the appropriate Wess-Zumino term, which was found lacking in [20]. The plan of the paper is as follows. In Sec. II we intend to find the missing link between the two types of mutually exclusive developments of the gauged chiral boson. Sec. III will be devoted to the BRST invariant reformulation of the gauged model of the chiral boson which is based on the FJ type kinetic term. In Sec. IV, we quantize the gauge invariant part of the lagrangian obtained during the process of BRST quantization in Sec. III with the Lorentz gauge. In Sec. V, an equivalence is established between the actual gauge non-invariant theory and the gauge invariant transmuted form obtained in Sec. III.
II. A GAUGED MODEL OF CHIRAL BOSON WITH THE SIEGEL TYPE KINETIC TERM
The gauged model of the chiral boson with the Siegel type of kinetic term is described by the lagrangian density [13] Here an overdot and a prime represent the time and space derivatives, respectively. Here m^2 is written as ae^2 for later convenience. The symbol e indicates the coupling constant, which has one mass dimension. The momenta corresponding to the fields A_0, A_1, φ and λ, respectively, are The canonical Hamiltonian density of the system is obtained through a Legendre transformation: Using equations (2), (3), (4) and (5), we find that H_c takes the following form In equation (7), u and v are the two lagrange multipliers. The following two equations are identified as primary constraints of this system, since these two do not contain the time derivatives of the fields. The preservation of the constraints (8) and (9) leads to the following two constraints: In order to single out the physical degrees of freedom, we proceed to quantize the theory with the following gauge fixing condition.
The generating functional of this system now reads After integrating out the momenta of the fields, we get the generating functional Z in the following form where This is the gauged model of the chiral boson with the FJ type kinetic term. Note that L_GCB represents a lagrangian density that has been generated from L_B, and it agrees with the lagrangian found in [14]. So we find that the gauged model of the chiral boson with the FJ type kinetic term is contained within the gauged version of the Siegel like chiral boson [13]. It is beneficial to compute the Dirac brackets for completeness of the analysis, since this is a constrained theory and ordinary Poisson brackets become inadequate for theories endowed with constraints. The Dirac bracket [50] for two field variables A and B is defined by

[A(x), B(y)]^* = [A(x), B(y)] - \int dz\, dz'\, [A(x), \Omega_i(z)]\, C^{-1}_{ij}(z, z')\, [\Omega_j(z'), B(y)]    (16)

where C_{ij}(z, z') = [\Omega_i(z), \Omega_j(z')]. Here the Ω_i's stand for the second class constraints embedded in the phase space of the theory. Therefore, to compute the Dirac brackets we need to construct the matrix constituted with the Poisson brackets of the constraints (8), (9), (10), (11) and (12). The required matrix is The matrix C_ij is nonsingular, so its inverse exists, which is found to be Here ε(x) is the sign function: ε(x) = +1 for x > 0 and ε(x) = −1 for x < 0, with dε(x)/dx = 2δ(x). Using the definition (16), straightforward calculations render the following Dirac brackets between the field variables.
Here (*) indicates the Dirac bracket. This ends the description of this section; in the following section we proceed towards the BRST quantization.
III. BRST QUANTIZATION OF THE GAUGED MODEL OF CHIRAL BOSON WITH FJ TYPE KINETIC TERM
In this section we intend to carry out the BRST quantization of the gauged model of the chiral boson with the FJ type kinetic term using the BFV based improved version of FIK, since we are familiar with several successful attempts with this improved version towards the generation of the appropriate Wess-Zumino term during the process of BRST quantization [41][42][43][44][45][46][47][48][49].
According to this formalism, H_m is usually known as the minimal Hamiltonian; its definition, together with the expressions of the BRST charge Q and of the gauge fixing function G = C̄_a χ_a + P̄_a λ_a, is recalled below.
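Since the displayed equations were lost from this copy, the textbook BFV expressions the sentence refers to are recalled below in LaTeX. These are the standard forms (sign and ordering conventions vary between references, and the unitarizing Hamiltonian H_U is quoted up to such conventions), not a verbatim restoration of the original display:

```latex
% Standard BFV ingredients; sign/ordering conventions differ between references.
H_m = H_c + \bar{\mathcal{P}}_a \, V^{a}_{\;b} \, \mathcal{C}^{b}, \qquad
Q   = \mathcal{C}^{a} \Omega_a
      - \tfrac{1}{2} \, \mathcal{C}^{b} \mathcal{C}^{a} \, U^{c}_{\;ab} \, \bar{\mathcal{P}}_c, \qquad
G   = \bar{\mathcal{C}}_a \chi_a + \bar{\mathcal{P}}_a \lambda_a,
\qquad H_U = H_m + \{\, Q, \; G \,\}.
```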
The structure coefficients U^c_{ab} and V^b_a come from the Poisson brackets among the constraints Ω_a themselves and from their Poisson brackets with the canonical Hamiltonian H_c(q_i, p_i):

[Ω_a, Ω_b] = U^c_{ab} Ω_c, \qquad [H_c, Ω_a] = V^b_a Ω_b

where Ω_a (a = 1, 2, ..., N) represents the N constraints embedded in the phase space of the theory defined by the Hamiltonian H_c(q_i, p_i). In order to single out the physical degrees of freedom, N additional conditions φ_a = 0 are required to be imposed on the phase space. The constraints φ_a = 0 and Ω_a = 0, together with the Hamiltonian equations, may be obtained from the action where λ_a and π_a are lagrange multipliers having the Poisson bracket [λ_a, π_b] = iδ_ab, and λ_a is contained within the gauge fixing conditions in the form φ_a = λ_a + χ_a. The variables (C_i, P̄_i) and (P_i, C̄_i) are the two sets of canonically conjugate anti-commuting ghost coordinates and momenta having the algebra [C_i, P̄_i] = iδ(x − y) and [P_i, C̄_i] = iδ(x − y). The quantum theory, therefore, can be described by the generating functional Z_G = ∫ dq_i dp_i dλ_a dπ_a dC_a dP_a dP̄_a dC̄_a e^{iS_G}, where the action S_G is With this input, we may proceed towards the BRST quantization of the theory under consideration. The lagrangian density of the gauged model of the FJ type chiral boson is given by For this lagrangian density (34), the canonical momenta corresponding to the fields A_0, A_1 and φ, respectively, are The canonical Hamiltonian can be calculated using equations (35), (36) and (37) through a Legendre transformation, as has been done earlier: Equations (35) and (37) do not contain any time derivatives of the fields, so these two are the primary constraints of the theory.
Therefore, the Hamiltonian reads The preservation of ω_2 renders the following new constraint The preservations of ω_1 and ω_3, however, do not give rise to any new constraints. These two conditions instead fix the velocities u and v, respectively: and Note that the constraints labelled by Ω's in Sec. II differ in number from the constraints labelled by ω's in this section, because of the presence of the lagrange multiplier λ in that section. However, a careful look reveals that Ω_1 ≈ ω_2, Ω_3 ≈ ω_3 and Ω_4 ≈ ω_1. Now, imposing the expressions of u and v in (41), the Hamiltonian turns into The constraints of the theory satisfy the following Poisson brackets among themselves: [ω_1, ω_3] = 0, [ω_2, ω_2] = 0. The involution relations between the Hamiltonian and the constraints ω_1, ω_2 and ω_3 are The set of second class constraints ω_1, ω_2 and ω_3 can be converted into a first class set with the help of two auxiliary canonical pairs (θ, π_θ) and (η, π_η). The first class set of constraints, constructed from the said second class set using these auxiliary fields, is the following.
The Hamiltonian consistent with the first class set of constraints (53), (54) and (55) is where H_BF is constituted with the auxiliary fields and is found to be For consistency, the time evolution of this first class set must be identical to (50), (51) and (52). Precisely, these are the following.
The stage is now set to introduce the two pairs of ghost and anti-ghost fields (C_i, P̄_i) and (P_i, C̄_i). We also need to introduce a pair of multiplier fields (N_i, B_i). The multipliers and the ghost anti-ghost pairs satisfy the following canonical Poisson brackets: According to the definition and In this situation the BRST charge Q and the fermionic gauge fixing function G can be written down as We are now in a position to fix the gauge conditions, which is very crucial for achieving the appropriate Wess-Zumino term. It is found that the following gauge fixing conditions render the required service towards that end.
Let us now calculate the commutation relation between the BRST charge and the gauge fixing function: The generating functional for this system can be written down as where [Dµ] is the Liouville measure in the extended phase space.
and the action S is explicitly given by The above formulation allows the following simplification: Exploiting the above simplification (72) we obtain the effective action in the following form.
We are now in a position to integrate out the fields π_1, π_0, η, B_1, B_2, N_1, N_2, C̄_1, P_1, P_3, P_2, one by one, to cast the action in the desired shape. After integrating out the said fields and choosing N_3 = A_0, the action reduces to If we now define π_η = e(a − 1)η, C_3 = C, and B_3 = B, we get the desired BRST invariant effective action: The action (75) is now found to remain invariant if the fields transform as follows.
The above transformations are the very BRST transformations generated from the BRST charge (63). The Wess-Zumino term for the theory under consideration can easily be identified as This very action (82) contains the appropriate Wess-Zumino term corresponding to the theory of our present consideration, and it agrees with Ref. [40]. We would like to reiterate that in [20] it was found lacking. In fact, in [20], the term which was claimed by the author as the Wess-Zumino term does not agree with Ref. [40]; nevertheless, the author finds on-shell BRST invariance with that Wess-Zumino term. The term standing in equation (82), however, establishes the off-shell BRST invariance. Achieving the appropriate Wess-Zumino term for this theory, in agreement with [40], is a novel aspect of this reinvestigation.
IV. AN ALTERNATIVE QUANTIZATION OF THE GAUGE INVARIANT VERSION OF THE THEORY
The quantization of the gauged model of the FJ type chiral boson was available in [14], where it was quantized in a gauge non-invariant manner. The gauge invariant version certainly can be quantized as well. We refer to the works [49,51], where the authors made alternative quantizations of the chiral Schwinger model with the Faddeevian anomaly and of a generalized version of QED where vector and axial vector interactions get mixed up with different weights, respectively. Some gauge fixing is indeed needed in this situation. We choose the Lorentz gauge and proceed to quantize the gauge symmetric version of the gauged model of the FJ chiral boson. The gauge symmetric version of the said theory with the Lorentz gauge is described by the lagrangian density.
Gauge fixing is needed in order to single out the real physical degrees of freedom from the gauge symmetric version in the extended phase space. The Euler-Lagrange equations of motion corresponding to the fields φ, A_0, A_1, B and η that follow from the lagrangian density (83) are, respectively, It is found that the following expressions of A_µ, φ and η represent the exact solution of the equations (84), (85), (86), (87) and (88),
if the following free field equations are maintained. Therefore, the free fields in terms of which the system is completely described are The equal time commutation relations corresponding to the free fields are found to be Note that F = π_1 represents a massive field with mass m, and h represents a massless chiral boson. These two are the replica of the spectrum obtained in [14]. The equations involving B appear because of the presence of the auxiliary field in the Lorentz gauge fixing. Note that B has vanishing commutation relations with the physical fields F and h. The field ζ represents a zero mass dipole field playing the role of the gauge degrees of freedom, which can be eliminated by an operator gauge transformation. So the theoretical spectrum agrees in an exact manner with the theoretical spectrum obtained in [14].
V. TO SHOW THE EQUIVALENCE BETWEEN THE GAUGE INVARIANT AND GAUGE VARIANT VERSION OF THE MODEL
In this section an attempt is made to show the equivalence between the gauge invariant version in the extended phase space and the gauge variant version in the usual phase space of the gauged model of the FJ chiral boson. It is important because, to make the model gauge invariant, the phase space needed to be extended by introducing the Wess-Zumino fields. So what service the Wess-Zumino fields actually render is a matter of genuine curiosity.
To meet it, let us start with the lagrangian of the gauged FJ type chiral boson with the appropriate Wess-Zumino term as obtained from our investigation. The said lagrangian density reads To show the equivalence between the gauge invariant and the gauge variant versions of this model, we proceed with the computation of the canonical momenta corresponding to the fields φ, A_0, A_1 and η: The equations (106) and (107) are independent of velocities, so these two represent the two primary constraints. Explicitly, these two are Using the equations (106), (107), (108) and (109), a Legendre transformation leads to the canonical Hamiltonian H_c corresponding to the lagrangian density (105): The preservation of the constraint T_1 leads to a new constraint Ref. [52] suggests that we have to choose appropriate gauge fixing conditions at this stage to meet our need, and we find that the gauge fixing conditions suitable for this system are the following: Under the insertion of the conditions (114) and (115), T_3 and H_c turn into T̃_3 and H̃_c, which are explicitly given, respectively, by Note that the gauge fixing conditions (114) and (115) push back the constraint T_3 into T̃_3, which was the constraint of the usual phase space, and as a result H_c lands onto H̃_c, which was the Hamiltonian of the usual phase space. It has therefore become evident that the physical content remains the same in the gauge symmetric version of the theory in the extended phase space. The extra fields therefore render their valuable service towards bringing back the symmetry without disturbing the physical sector. For completeness of the analysis, we compute the Dirac brackets of the physical fields using the definition (16). The matrix C_ij in this situation is and its inverse is the following Using the definition (16), it is straightforward to compute the Dirac brackets between the field variables: [A_0(x), φ(y)]^* = (1/(e(a − 1))) δ(x − y), [A_1(x), π_1(y)]^* = δ(x − y), [φ(x), π_φ(y)]^* = δ(x − y). Here also (*) symbolizes the Dirac bracket. Note that the Dirac brackets between the fields computed here are identical with the set of Dirac brackets computed in Sec. II. It is indeed the expected result.
VI. CONCLUSION
We have started our investigation with the gauged version of the Siegel type chiral boson. From this action we have landed onto the gauged version of the FJ type chiral boson. Harada in [14] showed that this action can be derived from the JR version of the chiral Schwinger model by imposing a chiral constraint on the phase space of the theory. Our investigation, however, reveals that the gauged version of the FJ type chiral boson is contained within the Siegel action in an interesting way. In fact, it is a successful endeavor of obtaining the gauged version of the chiral boson along a different line of approach.
An extension towards the BRST invariant reformulation of the gauged version of the FJ type chiral boson has been made using the BFV based improved FIK formalism. In [20], an attempt was made towards the BRST quantization of the same model; however, in that work the part of the effective action which was claimed as the Wess-Zumino term did not agree with Ref. [40]. In spite of that, with that Wess-Zumino term the author established on-shell BRST invariance.
The way we have made the BRST invariant reformulation leads to the appropriate Wess-Zumino term, and this does agree with Ref. [40]. It is interesting that the appropriate Wess-Zumino term has appeared automatically during the process of BRST quantization, and with this Wess-Zumino term we observe off-shell BRST invariance.
An alternative quantization has been found possible due to the appearance of the appropriate Wess-Zumino term. From the alternative quantization we have seen that the theoretical spectrum agrees with the spectrum obtained in the quantization of the gauge non-invariant version of this model. It is indeed the expected result.
An equivalence between the gauge invariant version of the gauged model of the FJ type chiral boson in the extended phase space and the gauge non-invariant version in the usual phase space has been established, following the same line of approach as available in the work [52]. It is worth mentioning that the gauge fixing plays an important role in establishing this equivalence.
“The Internet is a Mask”: High School Students' Suggestions for Preventing Cyberbullying
Introduction: Interactions through technology have an important impact on today's youth. While some of these interactions are positive, there are concerns regarding students engaging in negative interactions like cyberbullying behaviors and the negative impact these behaviors have on others. The purpose of the current study was to explore participant suggestions for both students and adults for preventing cyberbullying incidents. Methods: Forty high school students participated in individual, semi-structured interviews. Participant experiences and perceptions were coded using constant comparative methods to illustrate ways in which students and adults may prevent cyberbullying from occurring within their school and community. Results: Students reported that peers would benefit from increasing online security, as well as becoming more aware of their cyber-surroundings. Regarding adult-provided prevention services, participants often discussed that there is little adults can do to reduce cyberbullying. Reasons included the difficulties in restricting online behaviors or providing effective consequences. However, some students did discuss the use of in-school curricula while suggesting that adults blame people rather than technology as potential ways to prevent cyberbullying. Conclusion: Findings from the current study indicate some potential ways to improve adult efforts to prevent cyberbullying. These strategies include parent/teacher training in technology and cyberbullying, interventions focused more on student behavior than technology restriction, and helping students increase their online safety and awareness.
INTRODUCTION
Technology exposure for youth has increased substantially in the past decade, with students spending about the same amount of time using technology as they do in school. 1 While access to technology has many advantages, it also increases the potential for cyberbullying. 2 Cyberbullying has been defined as the repeated use of technology to cause intentional distress or to threaten others. 3,4 Researchers have demonstrated that being a victim of cyberbullying was associated with negative mental health and behavioral concerns such as loneliness, 5 conduct problems, 4,6 and feelings of fearfulness. 7 Some studies have suggested that victims of cyberbullying were most likely to tell friends about their experiences. 10,11,13 Students have been found to be less likely to talk to adults about cyberbullying when compared to victims of traditional bullying. 10,11,13 The reported reasons for not talking to adults about cyberbullying included the fear that reporting incidents would result in technology being taken away, as well as a lack of confidence in adults' ability to address the problem. 3,10,13 The current literature provides some suggestions about how adults can address cyberbullying, including clearer policies and psychoeducational interventions regarding online safety. 3 To date, few studies have focused on student suggestions for how adults can reduce or prevent cyberbullying. Student-generated strategies for parents have included setting age-appropriate limits on technology use, monitoring their children's technological activities, sharing evidence of cyberbullying with the school, and informing children about appropriate ways to resolve conflicts. 3 More research is needed to understand what students believe are effective strategies for adults, because students may have a better understanding than adults about what would reduce or prevent peer engagement in cyberbullying.
The purpose of the current study was to explore student suggestions for preventing cyberbullying. The majority of studies regarding how students cope with cyberbullying refer to actions taken after an incident occurred (e.g., deleting messages, telling an adult); however, information regarding how students may protect themselves from future cyberbullying would be beneficial. Additionally, allowing students to provide suggestions for adults based on their own experiences and perceptions would offer insight into how parents, teachers, and others in the community can help prevent cyberbullying. Further, it has been suggested that cyberbullying perceptions may vary based on the school participants attend. Student reports indicated that urban students felt that cyberbullying, while still a concern, was not as important as other life effects when compared to suburban and rural students. 15 It is possible that other differences between urban and suburban students exist regarding how they respond to cyberbullying incidents.
There were 3 research questions: 1) How do students describe their approaches to preventing cyberbullying? 2) How do students believe adults can be effective in reducing cyberbullying? and 3) Are there differences based on gender or school location (i.e., urban, suburban) in student perceptions of cyberbullying prevention?
METHOD
Participants
We used a combination of convenience (i.e., those readily available to the researchers) and criterion sampling (i.e., students had to meet a set of requirements to participate). 16 The criteria for participation included that the student was enrolled in the high school and had access to and used technology on a daily basis. The second criterion was assessed through a survey administered prior to the interview to assess the amount of access and use of technology (Table). Based on the recommended number of participants for this particular form of qualitative methodology, 16 the total target sample size was 40 participants, with 20 participants from each participating school to allow for cross-site analysis (i.e., across schools). 17 We recruited participants at the suburban school through the use of fliers placed in hallways and lobbies, as well as requests for volunteers that were made over a public announcement system each morning. When similar procedures at the urban school resulted in very few participants, additional steps were taken, as per the request of the dean of students and instructional technology teacher. These steps involved sending recruitment letters to 90 randomly chosen students across all 4 grades. These procedures resulted in the target of 20 participants per school, with all volunteers indicating sufficient technology usage and access. The suburban sample consisted of students ranging in age from 15 to 19 (M = 17.5, SD = 1.05) while the urban participants were from 15 to 18 years old (M = 16.0, SD = 1.13). Descriptive information for participants can be found in the Table.
Data Collection
We obtained parental consent and student assent for all students under the age of 18. Students who were 18-years-old and over signed consent for participation. All procedures and forms were approved by the university Institutional Review Board. Graduate research assistants conducted semi-structured interviews with students to discuss various aspects of electronic communication and cyberbullying. 18 (For a copy of the interview protocol, contact the first author.) Interviews were recorded and then transcribed verbatim and uploaded into Atlas.Ti 5.0, a computer-based data management program.
Data Analysis
The current study used a sequential qualitative methodology with multiple phases of data analyses which involved cross-site analysis. 17 Data analysis was based on grounded theory and used an inductive-deductive approach. 19 Inductive (i.e., data-driven) methods helped to uncover themes based solely on information from respondents. 19 Deductive (i.e., literature-driven) methods were then used to determine how developed codes related to previous literature regarding cyberbullying. 19 Two researchers individually reviewed interviews to identify possible themes and met once a week to discuss themes and determine appropriate codes. After considering both data-driven and literature-based information, we developed an initial coding manual. 18 The 2 researchers then applied the initial coding manual to each interview using a constant comparative method. 20 Two researchers individually applied codes to each interview based on question-response segments. They would meet weekly to discuss discrepancies in coding until consensus was obtained for each interview. 20 The coding manual was organized in a hierarchical structure that included primary codes (Level 1) and sub-codes for secondary themes (Level 2). The manual was revised after reviewing each interview resulting in a final manual based on consensus among raters. 21 Interrater reliability (i.e., IRR) for each interview was calculated until the researchers obtained 90% IRR on three consecutive interviews. 21 Once this criterion was met, raters divided and individually coded the remaining interviews and met weekly to determine IRR for 10% of each of the remaining interviews to control for coder drift. 19 The suburban interviews were coded first, with an initial IRR mean of 86.5% and a total of 9 interviews being coded before the criterion of 90% on 3 consecutive interviews was met. 21 The coder drift IRR was 96.8%, with an overall mean IRR for all 20 interviews at 92.5%. The initial IRR for the urban sample was 88.9%, with a total of 11 interviews coded prior to meeting the criterion for individual coding. The IRR during the coder drift phase for the urban sample was 93.7%, with 91.3% as the overall IRR. Coding the urban interviews resulted in changes to the final coding manual; therefore, raters applied these changes to the suburban sample with an IRR of 100%. Frequency counts for the total sample, school location, and gender can be found in the figure.
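A minimal sketch of the percent-agreement computation described above (variable names are illustrative; the study compared codes per question-response segment):

```python
def percent_agreement(codes_a, codes_b):
    """Interrater reliability as simple percent agreement between two coders."""
    assert len(codes_a) == len(codes_b) and len(codes_a) > 0
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# e.g., percent_agreement(["security", "talk"], ["security", "curriculum"]) == 50.0
```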
Student Preventive Coping (Level 1)
Student Preventive Coping addressed research question 1 and involved strategies focused on averting cyberbullying (Figure). This could include general protective strategies or reactions to situations that had the potential to result in cyberbullying. This Level 1 code included 2 sub-codes (Level 2): increased security and awareness and talk in person. These strategies are discussed in the following sections, including differences based on gender and school location when appropriate.
Increased Security and Awareness (Level 2)
In an attempt to prevent cyberbullying, many students reported increased security and awareness (n = 39). These strategies included password protection, restricting who has access to online networking profiles, limiting the amount of personal information available online, and being more aware of the cyber-environment (e.g., who you are talking to). For example, one 18-year-old female suburban student explained that people "can only see what you put [online]," so students can reduce the risk of being cyberbullied by filtering the information they make available. A 15-year-old female urban student also reported that people could put themselves at risk by not being aware of whom they were talking to, stating "people put on the internet mask and pretend to be who they want to be," so students should be mindful of their interactions online. Students described this increased awareness as a way of identifying potentially risky situations. Interestingly, students did not focus just on their own awareness but discussed making sure others are aware of potential cyberbullying situations as well. For example, a 17-year-old male urban student reported that he let his friends know of "this guy who was trying to start a fight, just saying threatening stuff and spreading rumors" by posting a warning to his Facebook page.
Talk In Person (Level 2)
The Level 2 code talk in person reflected the need to talk face-to-face with a person during a disagreement in order to prevent the negative situation from leading to cyberbullying. Sixteen students discussed the need for this preventive strategy due to the inability to detect tone or sarcasm online. A 17-year-old female urban student explained that cyberbullying might be prevented when having a disagreement online if students would "get it off the Internet ... [they] need to talk to them to their face, because the Internet can be like a mask so that [the other person] doesn't really have to face them." She further explained that sometimes this mask causes students to "say things they wouldn't say to your face or in a way that's hurtful." Approaching others in person can help a student discern tone and sarcasm so that they can read and respond appropriately to the situation. An 18-year-old male suburban student stated that when "face-to-face you can see their expressions" and understand if they were joking or not, whereas online "words can be misinterpreted" and escalate to cyberbullying.
Ways to Reduce Cyberbullying-Parents, Schools and Community (Level 1)
The second primary research question, regarding student suggestions about ways in which adults (e.g., parents, school personnel, and community members) could address cyberbullying, resulted in the Level 1 code Ways to Reduce Cyberbullying-Parents, Schools and Community and two Level 2 codes: curriculum and blame people, not technology (Figure).
Curriculum (Level 2)
When describing how adults may help address cyberbullying, 3 male suburban students discussed the use of a curriculum or school information session, and this was coded curriculum. One 16-year-old stated that you "have to educate the actual people" and that this education could be provided as a class or assembly. The 3 students who discussed the use of a curriculum indicated that information should be provided early (i.e., elementary school) and by someone experienced with technology and cyberbullying. A 17-year-old male student explained schools could provide: "Like a class, just say early ... like late elementary, early middle school ... People teaching should either be people who have done it before, know that it's wrong, or people who have a good understanding about it."
Blame People, Not Technology (Level 2)
Two suburban male participants discussed blame people, not technology (see Table), explaining that adults should focus on the people abusing technology rather than the negative aspects of technology or taking it away from students. One participant explained: "no one wants to blame another human, cause humans can fight back." He continued by stating that "teachers don't want to get blamed, the students don't want to get blamed, so they blame an object." Students explained that addressing those who abuse the technology would change behavior (e.g., more effective consequences) instead of restricting technology access.

Figure. Coding hierarchy for the Level 1 codes student preventive coping, ways to reduce cyberbullying: schools and community, and no way to prevent or reduce.
No Way to Reduce Cyberbullying (Level 1)
Twenty-seven of the 40 students reported the Level 1 code no way to reduce cyberbullying, with the majority of these students being from the urban school (Table). Students reported that nothing could be done to reduce cyberbullying, typically due to the difficulty of tracking perpetrators, the ability to circumvent security blocks, and the fact that some students will continue despite consequences. When asked if there was a way to prevent cyberbullying, a 17-year-old male urban student answered, "Not that I can think of ... you can't really stop somebody from talking to someone else because there is, like, freedom of speech." When asked the same question, a 16-year-old female suburban student replied, "I don't think so. Kids are going to be kids and they are going to argue regardless, they would just find another way."
DISCUSSION
Using in-depth individual interviews, we obtained information regarding how students believe cyberbullying may be prevented based on their personal experiences and perceptions of the phenomenon. When discussing how peers can help protect themselves from online peer aggression, the majority of the participants suggested increasing protection efforts when online, confirming previous literature. 3,10 In addition to online security, participants focused on how students need to be more aware of their cyber-surroundings. Students often described using social media, such as online message boards and social networking sites (e.g., posting on Facebook), to warn others of cyberbullies, to ask for guidance, and to let the online community know of cyberbullying threats. Students in the current study were likely to reach out to their online community and network when addressing cyberbullying, rather than going to an adult (e.g., teacher, parent). This particular finding indicates an important potential avenue for prevention and intervention.
While students discussed using their online resources to identify and prevent cyberbullying, they also reported that sometimes removing oneself from that medium can reduce cyberbullying, which represented a unique finding. Students reported that when negative interactions begin online, it is beneficial to approach the situation face-to-face so that the internet, serving as a mask, does not interfere with communication. Helping students recognize that the internet often makes it hard to discern meaning and/or tone is one way students and adults can help prevent cyberbullying.
Unique findings concerned information about how adults can reduce cyberbullying. This included the use of classroom or school-wide lessons to educate youth about cyberbullying that involve people who "have experience" with cyberbullying. This suggests that the credibility of those providing such curricula would be important to students and that trustworthiness would be assessed by how much knowledge the educator has, not only of technology but of cyberbullying behaviors. This indicates an important area for practice in that school personnel may need training before providing the services suggested by the participants in this study.
Few students reported adult intervention (e.g., teachers, parents) as an effective way to reduce cyberbullying. Further, students reported that rather than removing technology from victims for protection, schools and parents could develop strategies for addressing students who engage in cyberbullying behaviors. This finding suggests that schools and adults reconsider how they address cyberbullying, moving away from policies that restrict technology access and toward programs addressing specific attitudes or behaviors regarding cyberbullying. The finding regarding the limited number of suggestions for adult intervention was in contrast to a previous study where participants reported parents could help by monitoring and restricting their child's access to technology. 3 One reason may be developmental differences, as this earlier study included middle school students while the current study used high school students who may opt for more independent problem solving.
Finally, the current study used cross-site analysis 17 to examine differences in student suggestions based on gender and school location. In general, there were no qualitative differences between male and female participants. Regarding school location, urban students (n = 18) more often stated that there was nothing adults could do to reduce cyberbullying when compared to suburban students (n = 9). Similar to previous research, 15 urban students stated that while cyberbullying was a negative aspect of their lives, they had additional stressors that could take precedence over addressing electronic victimization, such as taking care of siblings or weekend jobs. Differences between urban and suburban students illustrate the need to take into account context and culture when providing services to students experiencing cyberbullying. Additional research is warranted to explore these differences and implications for research and practice.
LIMITATIONS
One limitation of the current study was using only individual interviews to obtain qualitative information. There are many methods for qualitative research (e.g., focus group interviews) that may have provided additional information. Further, although the 2 data collection points were separated by only 3 months, advances in technology may have had an effect on student technology usage. For example, Facebook added instant messaging, which allowed students in the urban sample to discuss technology that was not available during data collection with suburban students. Also, changes were made during the second data collection phase at the urban high school because the researchers did not receive responses using the methods that had recruited suburban participants (e.g., fliers). Therefore, recruitment was adapted to the particular culture and context of the urban school. 22 However, the differences in recruitment procedures may have resulted in samples that differed in motivation to participate, and this may have been confounded with urban/suburban differences.
CONCLUSION
Using their experiences with and perceptions of cyberbullying, participants in the current study were able to illustrate ways for adults and students to prevent cyberbullying and to explain why those strategies may be beneficial. Students appeared to rely more on themselves and their online community when addressing cyberbullying than has been suggested by prior research. They provided fewer strategies for adults and largely reported that adults have limited, and often ineffective, options for reducing cyberbullying. The participants in the current study emphasized the need to receive help from those trained in technology and cyberbullying. However, it is possible that rather than focus on adult-led prevention efforts, parents and teachers can help students increase their own skills and abilities when protecting themselves against online aggression. Future research is needed to further investigate these findings.
DNA damage repair defects as a new class of endocrine treatment resistance driver
Beyond homologous recombination defects in breast cancer

Cancer cells constantly balance the cost of incurred DNA damage against the benefit of uninhibited proliferation. In the past decade, translational advances have enhanced our understanding of diverse cellular processes associated with tumor genome integrity that impact this balance and, therefore, can be leveraged as therapeutic opportunities. In breast cancer, the emphasis of investigations into DNA damage pathways and tumor outcomes has been germline variants that affect tumor incidence, as exemplified by BRCA1/2 and, to a lesser extent, PALB2, ATM, CHEK2, RAD51, and TP53, among others [1]. Among these components, BRCA1 and BRCA2, belonging to the homologous recombination pathway, have been most widely studied in ovarian and breast cancer. In 2003, a seminal report examined lifetime risk of breast and ovarian cancer in women with BRCA1/2 germline mutations [2], propelling investigation into the role of BRCA1/2 in breast cancer in the context of tumor incidence, tumor biology, and reproductive events. Such studies established the prevalence of "BRCAness" in estrogen-receptor negative breast cancers and led to the concept of creating synthetic lethality in BRCA2-deficient cells by treating them with PARP inhibitors [3]. However, the role of somatic defects in other DNA repair pathways in breast cancer biology and clinical outcome remained understudied.

New class of drivers of endocrine treatment resistance: single strand break repair pathways

About three quarters of breast cancers are estrogen receptor positive (ER+), i.e., they express estrogen receptor at a level detectable by immunohistochemistry. Although these cancers tend to be less immediately aggressive than other subtypes, between 40 and 50% of ER+ patient tumors demonstrate resistance to standard-of-care endocrine therapy, with many patients relapsing 5 or more years after diagnosis (Figure 1). Resistance can be broadly classified as either acquired or intrinsic, meaning resistance arises after treatment with endocrine interventions or that resistance is innate within the tumor, rendering it instantly and preemptively resistant to endocrine interventions. Of these two mechanisms of resistance, acquired resistance is the best studied.
Mutations in ESR1 as well as activation of growth factor pathways, e.g. HER2, have been well established as drivers of acquired resistance in preclinical studies and in clinical data from patient tumors [4]. More recently, ESR1 gene fusions were also identified as drivers of acquired resistance in patients with metastatic ER+ breast cancer [5,6]. These combined insights into the underlying biology of acquired endocrine treatment resistance in as many as 40% of resistant patients have resulted in potentially more effective anti-estrogens that are currently being tested in clinical trials.
On the other hand, drivers of intrinsic resistance have been understudied, with the notable exception of HER2 amplification [7], the discovery of which reclassified ER+ breast cancer and significantly improved therapeutic options. Two recent studies identified defects in DNA damage repair genes belonging to single strand break repair pathways, primarily mismatch and excision repair, as an entirely new causal mechanism observed in approximately one third of intrinsically endocrine treatment-resistant ER+ breast cancer patients (Figure 1) [8,9]. These results suggest that distinct pathways may be dysregulated in patient tumors that are intrinsically resistant to endocrine treatment, and open new avenues for improvement of the diagnostic and therapeutic clinical space.
Promise for new therapeutic and predictive avenues: prediction of sensitivity to CDK4/6 inhibitors
Preclinical causal and mechanistic investigation into the role of single strand break repair pathways in endocrine treatment resistance suggested a common mechanism of dysregulated G1-S transition by which mutation or downregulation of select mismatch repair and excision repair genes leads to endocrine treatment resistance [8,9]. Loss of any of the specific mismatch, nucleotide excision, or base excision repair components identified leads to unchecked activation of CDK4 even in the presence of endocrine treatment, rendering these tumors resistant to endocrine treatment but sensitive to CDK4/6 inhibitors in combination with endocrine treatment [8,9]. This discovery supports the use of CDK4/6 inhibitors (e.g. palbociclib, abemaciclib) as front-line therapy in ER+ breast cancer patients, thereby increasing the chances of preventing resistance and metastasis. In a recent study, selective CDK4/6 inhibitors were shown not only to induce tumor cell cycle arrest, but also to promote antitumor immunity [10], hence providing rationale for new combination regimens comprising CDK4/6 inhibitors and immunotherapies as anti-cancer treatment. The high mutation load consequent to these endocrine therapy resistance-inducing single strand break repair defects should further contribute to the immunogenicity of these tumors. These discoveries also lay the foundation for new diagnostic assays that can stratify patients early in the timeline of their disease as likely to respond, or not, to endocrine treatment and CDK4/6 inhibitor treatment, a potential breakthrough in effective clinical management of breast cancer.
Overall, advances in translational research have identified potential causes of acquired endocrine treatment resistance in 30-40% of breast cancer patients, resulting in an escalation of clinical investigations testing targeted therapies (Figure 1) that will undoubtedly present clinicians with more options when treating their patients. Recent discoveries of a role for DNA repair defects will likely similarly impact clinical treatment for patients with ER+ breast tumors that are intrinsically resistant to endocrine treatment. Continuing studies and new insights into the biology underlying this condition provide promise of truly effective personalized medicine for this subset of patients.
Musculoskeletal multibody simulations for the optimal tribological design of human prostheses: the case of the ankle joint
A thorough determination of the loading of the ankle joint is useful both for the optimal design of prostheses and for their preclinical testing in terms of tribological performance. In vivo measurements of joint forces are usually not easy to obtain, so non-invasive in-silico methods should be considered. Nowadays, resultant joint loads can be reliably estimated by using musculoskeletal modelling in an inverse dynamics approach, starting from motion data obtained in gait analysis laboratories for several human activities. The main goal of this study was to provide a set of dynamical loading curves obtained with the AnyBody Modelling System™ (AMS) computer software, starting from ground reaction forces and kinematic data obtained by Vaughan et al. for normal human gait. The model accounts for 70 Hill-type muscles, and the muscle recruitment strategy was chosen as the polynomial criterion. The results are presented in terms of antero-posterior, proximo-distal, and medio-lateral forces and ankle eversion, plantar flexion, and axial moments, discussing their role in the synovial lubrication phenomena in Total Ankle Arthroplasty (TAR) for the optimal structural and tribological design of prostheses.
Introduction
In recent years, total joint arthroplasty (TJA) has become a common and well-established surgical procedure in cases of severe arthritis, especially regarding lower limb synovial joints [1]. Total Hip Replacement (THR) [2], Total Knee Replacement (TKR) [3] and Total Ankle Replacement (TAR) [4] substitute, respectively, the hip, knee and ankle joints with prostheses which unfortunately require, in some cases, revisions and/or substitutions [5-8]. This problem is particularly felt in the case of THA, whose revision rate is higher (about three times more per 100 patients) [9]. Among many other causes of implant failure, nowadays particular interest is devoted to a correct tribological design of the implants [10,11] in order to achieve more performant prostheses and to decrease, in this way, the rate of THA revision.
An optimized prosthesis tribological design requires the choice of increasingly performant materials in terms of stress and strain but also of wear resistance characteristics [12]; an optimized geometrical design is also necessary to favour the synovial lubrication phenomena, especially in terms of full-film and/or elasto-hydrodynamic lubrication inside the bio-bearing gap [13,14].
For achieving this, a detailed description of the loads acting on the prostheses and of the kinematical behaviour during common human daily activities is required [15]. This is not always simple to obtain in vivo, since direct force/displacement measurements on joints are in most cases not feasible due to economic and ethical problems [16], and the results would refer only to the body characteristics of the patients under experimental activity. For this purpose, the possibility of obtaining predicted loads from in-silico simulation is a challenge [17] toward an optimized wear assessment tool necessary for the optimized tribological design of the joint.
This paper aims to give a contribution in this direction, focusing on the theoretical background in which the musculoskeletal multibody algorithms operate and discussing the results obtained with a musculoskeletal multibody model driven by the Vaughan gait data.
The musculoskeletal model
In this study, we used a musculoskeletal modelling software, the AnyBody Modelling System™ (AMS), to estimate the force and moment components acting on the ankle joint during level walking. The AnyBody Modelling System™ is software based on musculoskeletal modelling of the human body, able to simulate the dynamics of human motion. This environment adopts the inverse dynamics approach, and different algorithms allow selecting the appropriate recruitment strategies, allowing a complete analysis of the load components acting on the different joints of the human body during a known movement. Following the inverse dynamics approach, the kinematic data and ground reaction forces must be furnished as input to the simulations in order to obtain the joint forces. These data are usually obtained in gait analysis laboratories using special motion capture apparatus, which measure the subject's gait kinematics with cameras by monitoring markers fixed at particular points on the person's skin (Figure 1). In particular, this setup is able to measure an individual gait pattern by collecting the kinematic data of the lower limbs and the pelvis through a walking cycle (gait). Figure 1 shows three frames of the gait model driven by kinematic data; the dark spheres are the skin markers, and the black lines represent the ground reaction forces. The positions with their first and second derivatives in time, together with knowledge of the ground reaction forces and after data filtering, represent the software input to predict the net forces in the leg. In fact, inverse dynamics is based on knowledge of the motion and the external load data to determine the unknown internal forces. Following this approach, the calculation of each muscle force is made possible by solving a muscular redundancy problem. In fact, the muscular system is quite complex, and for each motion many different sets of muscle forces could be involved; the choice of the appropriate set is made by the central nervous system (CNS), which instantly chooses one of them in order to produce the assigned kinematics. At the moment, the selection strategy is still not fully understood; however, the approach used by the AnyBody Modelling System is well described in [18]: the software uses an algorithm to determine the activation of each muscle in order to replicate the function of the central nervous system. The list of the muscles considered in the model is reported in Table 1. The approach used for solving the inverse dynamic problem, accounting for the muscle recruitment strategy, is based on an optimization problem.
Defining an objective function of the form

$G\left(\mathbf{f}^{(M)}\right)$ (1)

in which $\mathbf{f}^{(M)} = \left[f_1^{(M)}, f_2^{(M)}, \ldots, f_{n^{(M)}}^{(M)}\right]^T$ are the muscular forces, for which

$0 \le f_i^{(M)} \le N_i, \quad i = 1, \ldots, n^{(M)}$ (2)

Equation (2) states the non-negativity constraints on the muscle forces (a muscle can only pull, not push). The upper limit of the i-th muscle strength capability is then assumed to be $N_i$.

Once the vector of the muscle forces and joint reactions is defined in the form

$\mathbf{f} = \left[\mathbf{f}^{(M)T}, \mathbf{f}^{(R)T}\right]^T$ (3)

the dynamic equilibrium equations can be obtained in the form

$\mathbf{C}\,\mathbf{f} = \mathbf{d}$ (4)

where C is a coefficient matrix for the unknown forces/moments, while d is a vector of the known applied loads and inertia actions. The most adopted forms of the objective function G, normalised for each muscle by its strength $N_i$, are the polynomial criterion (5) and the soft saturation criterion (6) [19]:

$G\left(\mathbf{f}^{(M)}\right) = \sum_i \left(\frac{f_i^{(M)}}{N_i}\right)^p$ (5)

$G\left(\mathbf{f}^{(M)}\right) = -\sum_i \left(1 - \left(\frac{f_i^{(M)}}{N_i}\right)^p\right)^{1/p}$ (6)

Both (5) and (6) contain a power variable p and a normalizing function for each muscle. In this study we used the polynomial criterion (5), in which p = 2 was set.
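To make the recruitment formulation concrete, the following minimal sketch solves a toy instance of the problem, minimizing the polynomial criterion (5) with p = 2 subject to the equilibrium equations (4) and the bounds (2), using SciPy; the coefficient matrix C, load vector d, and muscle strengths N_i are invented placeholders rather than values from the AnyBody model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy redundant recruitment problem: 4 muscles, 2 equilibrium equations.
# C, d and the strengths N_i are illustrative placeholders, not model values.
C = np.array([[1.0, 0.8, 0.0, 0.3],
              [0.0, 0.5, 1.0, 0.7]])          # coefficient matrix, eq. (4)
d = np.array([300.0, 450.0])                  # known loads/inertia actions [N]
N = np.array([1000.0, 800.0, 1200.0, 600.0])  # strength capabilities N_i [N]
p = 2                                         # polynomial criterion exponent

def G(f):
    """Polynomial recruitment criterion, eq. (5)."""
    return np.sum((f / N) ** p)

res = minimize(
    G,
    x0=np.full(len(N), 100.0),
    method="SLSQP",                                          # handles eq. constraints + bounds
    bounds=[(0.0, Ni) for Ni in N],                          # eq. (2)
    constraints={"type": "eq", "fun": lambda f: C @ f - d},  # eq. (4)
)
print("muscle forces [N]:", np.round(res.x, 1))
print("equilibrium residual:", np.round(C @ res.x - d, 6))
```

With p = 2 the problem is a convex quadratic program, so a general-purpose SQP solver such as SLSQP converges reliably; dedicated QP solvers would serve equally well.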
Ankle joint forces were simulated by using an 18-degree-of-freedom lower limb model made of 7 rigid segments: the pelvis plus the thigh, the shank and the foot of each leg. From a kinematical point of view, the hip joint was assumed to be a spherical joint, the knee a revolute joint, and the ankle a trochlear joint. In this study we used a set of kinematical input data from Vaughan et al. [20]. The main human parameters adopted here were a weight of 64.9 kg and a height of 1.75 m.
Results
The output of the model is presented in terms of the load components acting on the ankle joint during the gait. With reference to Figure 2, the calculated loads are: the anterior-posterior force (Fx), the proximo-distal force (Fy), the medial-lateral force (Fz), the ankle eversion moment (Mx), the axial moment (My) and the plantar flexion moment (Mz). Regarding the muscle recruitment, the simulations allowed calculating the forces exerted by all the muscles considered in the lower limb model (Table 2) during the gait. Figure 4 shows a schematic image of the model which highlights the predominance during toe-off of four muscles, while Figure 8 reports the complete activation of the involved muscles during the gait. From an analysis of the obtained results it is possible to observe that, although the muscles involved during walking are numerous, more than 60% of the total force exerted at 50% of the gait cycle (toe-off) is provided by four major muscles: the soleus (in the back part of the lower leg), the gastrocnemius (in the back part of the lower leg), the rectus femoris (one of the four quadriceps muscles of the human body), and the iliopsoas (the combination of the psoas major and the iliacus at their inferior ends; these muscles are distinct in the abdomen, but usually indistinguishable in the thigh).
Discussion
It is well known that the loading of the lower limb joints primarily depends on the physical activity (kinematical data); it is also influenced by body weight (BW) but, in general, differs greatly between individuals, even between subjects with the same BW. The simulations show maximum values of the ankle force and moment components at about 50% of the gait cycle (toe-off phase), with a prevalence of the proximo-distal force Fy, with a value in modulus of 2750 N, and of the plantar flexion moment Mz, with a value of 82 Nm. The obtained behaviour of the loads shows good agreement with others found in the literature, for example in [21]. The existing discrepancies should be attributable to the fact that the considered model assumes a limited number of degrees of freedom for the joints, with the foot considered a single segment. Decreasing the degrees of freedom in the model allows a reduction in computational time by reducing the complexity of the calculations needed to predict the muscle and joint contact forces but, on the other hand, causes approximations in the force calculation which can accumulate along the whole kinematical chain, producing larger discrepancies in the simulation.
Of course, another cause of alteration of the calculated forces is introduced by the anthropometric differences between human bodies, even if the scaling procedure aims to reduce it. Regarding the muscle activation, Figure 8 shows the force exerted by each of the 42 muscles implemented in the model. As can be observed, although the muscles involved during walking are numerous, more than 60% of the total force exerted during toe-off (50% of the gait cycle) is provided by the four major muscles.
From a TAR design point of view, the obtained results allow the optimized design of the prostheses both from a structural and from a tribological point of view [22,23]. In fact, detailed knowledge of the loads acting on the joint permits accurate finite element modelling of the joint [24] to analyse the stability of the implant, contributing to improve its stability and structural performance. Moreover, knowledge of the variation of the loads and of the kinematical quantities during the gait is necessary for the geometrical design of the synovial lubricated gap in order to achieve, according to Medley et al. [25], particular lubrication mechanisms (mixed or full-film) [22,23]. This could be achieved by reaching an optimal value of the ratio between h_min (minimum synovial meatus height) and the root mean square of the roughness values of the prosthesis contact surfaces, in order to optimize the prosthesis performance in terms of wear resistance.
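As a rough illustration of this film-thickness check, the sketch below computes the ratio between h_min and the composite root-mean-square roughness of the two contact surfaces; the numerical values are hypothetical, and the regime thresholds follow the common tribological convention (ratio > 3 full film, 1-3 mixed, < 1 boundary) rather than values from this study.

```python
import math

# Hypothetical values: minimum synovial film height and RMS roughness (Rq)
# of the two prosthesis contact surfaces, all in micrometres.
h_min = 0.25
rq_talar, rq_tibial = 0.02, 0.08

# Composite RMS roughness and film-thickness (lambda) ratio.
sigma = math.sqrt(rq_talar**2 + rq_tibial**2)
lam = h_min / sigma

regime = "full film" if lam > 3 else "mixed" if lam > 1 else "boundary"
print(f"lambda = {lam:.2f} -> {regime} lubrication")
```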
Conclusion
With the purpose of optimizing the TAR design, a thorough determination of the loading of the ankle joint is necessary both for its stability and structural resistance design and for the improvement of its tribological performance, especially in terms of wear resistance. Unfortunately, in-vivo measurements of internal joint forces are not a simple or always permissible task, so a non-invasive in-silico approach can furnish a meaningful perspective.
In this paper, the dynamical loading components of the ankle joint during the gait are presented, obtained by using the AnyBody Modelling System™ (AMS) computer software and adopting the kinematical data obtained by Vaughan et al. [20]. The obtained results, in terms of antero-posterior, proximo-distal and medio-lateral forces and ankle eversion, plantar flexion and axial moments, show a satisfactory agreement with the literature, allowing their use both for detailed prosthesis FEM analysis and for optimized tribological design in terms of synovial lubricating mechanisms. Of course, this investigation has limitations regarding the full validation of the proposed model, to be carried out by running several simulations, varying key parameters of the model, and comparing the results with the (few) in-vivo measurements found in the literature.
Automatic Time-Resolved Fluorescence Immunoassay of Serum Alpha Fetoprotein-L3 Variant via LCA Magnetic Cationic Polymeric Liposomes Improves the Diagnostic Accuracy of Liver Cancer
Purpose The aim of this study was to develop a lectin-modified macromolecular lipid magnetic sphere and apply it to the differential diagnosis of liver disease and liver cancer. Materials and Methods Lectin-modified macromolecular lipid magnetic spheres were prepared by the thin-film hydration method using a lentil lectin derivative (LCA-HQ) and cholesterol as raw materials. Alpha-fetoprotein variants (AFP-L3) in serum from healthy people and from liver disease and liver cancer patients were isolated using the prepared lectin-modified macromolecular lipid magnetic spheres, and alpha-fetoprotein (AFP) and AFP-L3 were detected by fully automatic time-resolved fluorescence immunoassay. Results The lectin polymer lipid magnetic spheres prepared in this study were superparamagnetic and encapsulated by a lectin derivative. There was no significant difference in the recovery rate of AFP-L3 between the lectin magnetic sphere-automatic time-resolved fluorescence immunoassay and the manual micro-affinity column method (p>0.05). We found that AFP-L3 can be used as a differential indicator between liver cancer and liver disease. The positive rates of AFP and AFP-L3 in liver cancer patients were higher than those in healthy people and liver disease patients (p<0.001). The AUC (95% CI) of AFP and AFP-L3 were 0.743 ± 0.031 and 0.850 ± 0.024, respectively. The AUC value of AFP-L3 is greater than that of AFP; therefore, AFP-L3 distinguishes liver cancer more accurately, and the difference is statistically significant, p<0.05. Conclusion We propose a novel method integrating lectin polymer lipid magnetic spheres and time-resolved fluorescence immunoassay that enables simple, accurate, and rapid determination of AFP-L3 in clinical samples. Notably, compared with techniques commonly used in clinical practice, the fully automatic time-resolved fluorescence immunoassay has a simple measurement procedure and is expected to be used for the detection and accurate diagnosis of liver cancer.
Introduction
Alpha-fetoprotein (AFP) is a common clinical liver cancer-specific tumor marker, 1,2 but approximately 35% of patients with primary liver cancer have serum AFP concentrations below 400 ng/mL. 3 In some benign liver diseases such as cirrhosis and hepatitis, serum AFP may also be elevated, and how to distinguish these conditions from liver cancer is a clinical problem that needs to be solved. 4 Alpha-fetoprotein isomers have the same amino acid sequence as AFP, but different sugar chain structures and isoelectric points. [5-7] Many literature reports show that AFP-L3 is considered a specific marker for liver cancer, and AFP-L3 can be detected in the serum of about 35% of patients with small liver cancers (<2 cm). 8-10 According to their binding ability to lentil lectin (LCA), AFP is divided into LCA-binding and LCA-non-binding types. [11-13] The LCA-non-binding types include alpha-fetoprotein variant 1 (AFP-L1) and alpha-fetoprotein variant 2 (AFP-L2), which are found in benign hepatocytes and pregnant women. 6 Alpha-fetoprotein variant 3 (AFP-L3) is the LCA-binding type, mainly secreted by hepatoma cells. 14 Magnetic particles not only have the surface effects, quantum size effects, volume effects, and macroscopic quantum-tunneling effects of ordinary nanomaterials, but also special magnetic properties such as superparamagnetism, high coercivity, low Curie temperature, and high magnetization efficiency. [15-17] Current immunomagnetic beads and magnetic particles with functional groups are mostly prepared by using a coupling agent on the surface of a magnetic sphere bearing reactive groups. [18-20] When prepared in this way, the streptavidin or lectin content on the liposome surface is low, and the activity of these substances easily decreases. [21-23] In order to solve this bottleneck problem, this study uses lentil lectin (LCA) to specifically bind AFP-L3, combining a variety of techniques including polymer chemistry, liposomes, magnetic separation, and bioassay. The lectin derivative directly encapsulates the magnetic particles to prepare lectin nanomagnetic particles with magnetic polymer liposome characteristics, which can greatly improve the lectin content on the particle surface and achieve convenient and rapid detection of AFP-L3.
Reagents and Instruments
Lentil agglutinin (LCA) was purchased from Sigma-Aldrich; the raw magnetic beads and chitosan cetyl quaternary ammonium salt (HQ) were synthesized and commercially developed by Xiaofei Liang; the anti-AFP-L3 antibody was constructed and preserved by the Central Laboratory of the Shanghai Cancer Research Institute. DOPC and DSPE-PEG-NH2 were purchased from Avanti (USA), and cholesterol (Chol), dichloromethane, and other commonly used reagents were purchased from China National Pharmaceutical Corporation. The experimental water was deionized double-distilled water (18.2 MΩ) (Millipore, USA). Mouse anti-human AFP-L3 monoclonal antibody, goat anti-mouse IgG (H+L)-HRP from a lentil lectin quantitative ELISA kit, and ECL were purchased from Santa Cruz, USA. Other biochemical reagents were of analytical grade and were purchased from Sinopharm. Other reagents were synthesized and stored by our laboratory. Instruments included an X-ray powder diffractometer (Advanced-D8, Bruker, Germany), a Nicolet 380 Fourier transform infrared spectrometer (FT-IR), a nanoparticle size and zeta potential analyzer (Nano-ZETA1, Malvern, UK), and a vibrating sample magnetometer (Lake Shore VSM 7407 series). The automated magnetic nanosphere separator was purchased from Suzhou Tianlong Biotechnology Co., Ltd. The affinity centrifugal column kit for AFP-L3 isolation was obtained from Beijing Rijing Biotechnology Co., Ltd. (Quasi 10-0020). The time-resolved fluorescence immunoassay for AFP was provided by Shanghai Youni Biotechnology Co., Ltd.
Preparation of Lectin-Modified Magnetic Sphere
First, the lectin was dissolved in a mixed solution of deionized water and isopropanol (mass ratio of deionized water to alcohol from 1:0 to 1:3), and the chitosan cetyl quaternary ammonium salt was then slowly added, with a mass ratio of lectin to chitosan cetyl quaternary ammonium salt of 1:100. After stirring at room temperature for 24 hours, the reaction solution was dialyzed through a semi-permeable membrane for 24 hours; after lyophilization, a white powder of lectin-chitosan cetyl quaternary ammonium salt (LCA-HQ) was obtained. The lectin-modified lipid magnetic spheres were then prepared by the thin-film evaporation method: 2.5 mg LCA-HQ, 0.5 mg cholesterol, and 0.5 mg magnetic particles were dissolved in 15 mL of chloroform and transferred to an eggplant-shaped flask on a rotary evaporator. The film was evaporated under reduced pressure and dried under vacuum overnight to remove the residual organic solvent. Fifteen milliliters of PBS (pH = 7.0, 0.1 M) was added to the film in the flask, hydrated at 37 °C for 12 h, mixed, and magnetically separated 3 times to obtain the agglutinin-modified magnetic particles.
Physical Characterization of Lectin-Modified Magnetic Particles
Particle size, morphology, and structure were analyzed by a particle size analyzer, transmission electron microscopy (TEM), and X-ray powder diffraction (XRD). Optical properties of the particles were analyzed by UV-vis and Fourier transform infrared spectroscopy (FT-IR), and the magnetic properties of the magnetic particles before and after modification were analyzed by a vibrating sample magnetometer (VSM).
Simulated Recovery Experiment: Comparison of the Magnetic Particle Method and the Micro-Centrifugal Column Method for the Recovery of AFP-L3
AFP-L3 solutions of different concentrations were processed by the magnetic microsphere and micro-centrifugal column methods, respectively. The AFP-L3 recovered by each method was fluorescently labeled for 20 min, and flow cytometry analysis of the labeled AFP-L3 obtained by the two methods was then performed using a BD flow cytometer (Calibur; USA) to evaluate the change in fluorescence intensity. AFP-L3 at 200 ng/mL was separated and recovered by the magnetic sphere method and the micro-centrifugal column method, and the recovery of AFP-L3 by the two methods was quantitatively compared by Western blot.
Clinical Blood Sample Collection
This clinical trial was approved by the Second Affiliated Hospital of Dalian Medical University. All patients and healthy volunteers provided written informed consent, and the study was conducted in accordance with the Declaration of Helsinki. A total of 1150 serum samples were selected, including 100 normal subjects, 350 chronic hepatitis patients, 320 cirrhotic patients, and 380 liver cancer patients; the serum samples of chronic hepatitis and cirrhotic patients were regarded as the benign liver disease group. Normal human serum samples were obtained from the Shanghai Cancer Institute and the Changzheng Hospital in Shanghai. All patients with liver cancer were confirmed by CT and B-ultrasound, all patients with hepatitis were confirmed by hepatobiliary ultrasound, and all patients with cirrhosis were confirmed by CT and B-ultrasound. The liver disease and liver cancer sera were obtained from the Second Affiliated Hospital of Dalian Medical University and Yancheng Hospital Affiliated to Southeast University. Each sample was the supernatant separated from fresh serum and stored at −20 °C until detection.
Sample Test Method Steps
The procedure for sample preparation and testing is described in the Supporting Information, and the automated separator condition settings are shown in Figure S1. Time-resolved fluorescence immunoassay was used to determine alpha-fetoprotein (AFP) in serum and the alpha-fetoprotein variant (AFP-L3) in the separation supernatant. Percent alpha-fetoprotein variant = alpha-fetoprotein concentration in the separation solution / total alpha-fetoprotein concentration (ng/mL) in the serum sample. Reference range: AFP-L3/AFP ≤ 5%, negative; 5% < AFP-L3/AFP < 10%, suspicious; AFP-L3/AFP ≥ 10%, positive.
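A minimal sketch of the percentage calculation and reference-range classification just described is given below; the sample concentrations are hypothetical.

```python
def afp_l3_percent(afp_l3_ng_ml: float, total_afp_ng_ml: float) -> float:
    """Percent AFP variant = AFP-L3 in the separation solution / total serum AFP."""
    return 100.0 * afp_l3_ng_ml / total_afp_ng_ml

def classify(ratio_percent: float) -> str:
    """Reference range from the text: <=5% negative, 5-10% suspicious, >=10% positive."""
    if ratio_percent <= 5.0:
        return "negative"
    if ratio_percent < 10.0:
        return "suspicious"
    return "positive"

# Hypothetical sample: 28 ng/mL AFP-L3 recovered from 200 ng/mL total serum AFP.
r = afp_l3_percent(28.0, 200.0)
print(f"AFP-L3/AFP = {r:.1f}% -> {classify(r)}")  # 14.0% -> positive
```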
Data Analysis Processing
For multiple comparisons, a one-way ANOVA test was performed. The t test (two-tailed) was used for comparisons between two groups. Data are expressed as mean ± standard deviation (S.D.). Survival was estimated using a log-rank test. *p<0.05, **p<0.01, ***p<0.001.
Physical Characterization of Lectin Polymer Liposome Magnetic Particles
The principle of preparing lectin-modified polymer liposome magnetic spheres (LCA-MMLs) from LCA-HQ, cholesterol, and hydrophobic magnetic beads by the thin-film method is shown in Figure 1A. The flow chart of LCA-MMLs sorting of serum alpha-fetoprotein variant 3 is shown in Figure 1B.
In this study, lentil lectin (LCA) was modified by coupling with HQ. The coupled HQ tail increased the hydrophobicity of the polymer, which formed a lipid bilayer with cholesterol and then encapsulated the hydrophobic magnetic beads to prepare the LCA macromolecular lipid magnetic particles. XRD analysis was performed on HQ, LCA-HQ, and cholesterol powders, with the peak maps shown in Figures S2 and S3. As shown in Figure S4, the saturation magnetization of Fe3O4 is about 60 emu/g and that of LCA-MMLs is about 40 emu/g; the saturation magnetization of pure Fe3O4 is thus higher than that of LCA-MMLs. It can also be seen from the figure that the particles have no obvious hysteresis loop and the remanence is essentially zero, showing good superparamagnetism. ELISA analysis showed that each milligram of magnetic spheres contained 5.5 μg of lectin, a high lectin content. Figure 2A shows that the particle size of LCA-MMLs in aqueous solution is about 89.52 ± 28.52 nm with a dispersion coefficient (PDI) of 0.074, indicating a narrow distribution. Figure 2B indicates that the zeta potential of LCA-MMLs in aqueous solution was 14.1 ± 4.84 mV, showing a weak positive charge. Transmission electron microscopy analysis in Figure 2C showed that LCA-MMLs exhibited a stable and regular globular shape. In summary, this study successfully prepared LCA-functionalized polymer liposome magnetic spheres.
Simulated Recovery of AFP-L3: Detection and Analysis by the Magnetic Sphere and Micro-Centrifugal Column Methods
A recovery experiment with 200 ng/mL AFP-L3 was carried out, and the recovery efficiencies of the magnetic separation and micro-centrifugal column methods were analyzed and compared. As shown in Figure 3A, flow cytometry analysis showed that the fluorescence signals obtained by LCA-MMLs magnetic separation and by micro-centrifugation were close to the fluorescence intensity of the original concentration of AFP-L3. The Western blot results also showed that both methods had high recovery efficiency for AFP-L3 (Figure 3B); the molecular weight of AFP-L3 is about 63-75 kDa.
AFP-L3 solutions of different concentrations (12.5-1000 ng/mL) were enriched by the prepared lectin magnetic microspheres and by the micro-centrifugal column method. The results in Figure 4A showed that the AFP-L3 recovery of both methods was greater than 90%, with no significant difference between the magnetic microsphere method and the micro-centrifugal column method (P>0.05). Correlation analysis of the AFP-L3 concentrations obtained by the two methods (Figure 4B) showed a significant correlation, with a correlation coefficient γ=0.985, p < 0.001.
AFP and AFP-L3 Test Results of Serum Samples
The clinical serum samples were analyzed for the serum biomarkers AFP and AFP-L3 by time-resolved fluorescence immunoassay. Serum AFP and AFP-L3 levels in patients with liver cancer were significantly higher than in the other groups (P <0.05); the results are shown in Table 1. Serum AFP and AFP-L3 levels in healthy volunteers were significantly lower than in patients (P <0.05), so serum alpha-fetoprotein in healthy people does not rise above the detection threshold. We found that the AFP content of the hepatitis group was significantly higher than that of the cirrhosis group (P <0.05), but there was no difference in AFP-L3 content between the two groups (Table 1); thus, AFP-L3 can potentially be used for the differential diagnosis of hepatitis and cirrhosis. Meanwhile, the level of AFP-L3 in patients with liver disease was significantly lower than that in patients with liver cancer, so AFP-L3 can also play a key role in identifying whether patients have liver cancer or benign liver disease.
The AFP and AFP-L3 positive rates among the serum biomarkers are shown in Table 2. The AFP-positive and AFP-L3-positive ratios in the liver cancer group were greater than those in the control groups (cirrhosis, hepatitis, and healthy groups). The positive ratios of AFP and AFP-L3 in the hepatitis group were significantly higher than those in the cirrhotic and normal groups (p<0.05); thus, AFP-L3 may be used as a differential indicator to differentiate between hepatitis and cirrhosis.
Receiver-Operating Characteristic (ROC) Curve of AFP and AFP-L3
Receiver-operating characteristic (ROC) curves were used for the differential diagnosis of liver cancer (Figure 5), and the calculated areas under the ROC curve (AUC) are shown in Table 3: the AUC (95% CI) of AFP and AFP-L3 were 0.743 ± 0.031 and 0.850 ± 0.024, respectively. The AUC value of AFP-L3 is greater than that of AFP; therefore, AFP-L3 distinguishes liver cancer more accurately, and the difference is statistically significant, p<0.05.
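For readers who wish to reproduce this kind of AUC comparison, the sketch below computes ROC AUC values for two markers with scikit-learn; the labels and marker distributions are synthetic stand-ins chosen only to mimic the reported ordering, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic data: 1 = liver cancer, 0 = control; marker values are invented,
# with AFP-L3 separating the classes somewhat better than AFP.
y = np.concatenate([np.ones(380), np.zeros(770)])
afp = np.concatenate([rng.lognormal(4.0, 1.2, 380), rng.lognormal(3.0, 1.2, 770)])
afp_l3 = np.concatenate([rng.lognormal(2.5, 1.0, 380), rng.lognormal(1.0, 1.0, 770)])

print("AUC(AFP)    =", round(roc_auc_score(y, afp), 3))
print("AUC(AFP-L3) =", round(roc_auc_score(y, afp_l3), 3))
```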
Discussion
The main reason for the high death rate of liver cancer is that it is often at a late stage when diagnosed, since liver cancer patients have few symptoms at the beginning of the disease. Clinical studies show that detection of AFP-L3 can provide more accurate information about liver cancer than AFP, 24,25 including early warning of benign versus malignant liver disease, the clinical stage of liver cancer, and the biological malignancy of liver cancer. 14,26 In our study, we found that serum AFP-L3 can potentially be used for the differential diagnosis of hepatitis and cirrhosis. Meanwhile, the level of AFP-L3 in patients with liver disease is significantly lower than that in patients with liver cancer; thus, AFP-L3 can also play a key role in identifying whether patients have liver cancer or benign liver disease. The ROC curve was used to compare the accuracy and sensitivity of AFP and AFP-L3 in the diagnosis of clinical samples; the AUC (95% CI) of AFP and AFP-L3 were 0.743 ± 0.031 and 0.850 ± 0.024, respectively. Therefore, AFP-L3 distinguishes liver cancer more accurately, and the difference is statistically significant, p<0.05.
In recent years, scholars have developed various methods for detecting AFP variants, such as immunoelectrophoresis analysis, affinity electrophoresis blotting, and affinity chromatography spin columns, 6,27-29 but most of these are difficult to apply to clinical testing because of their complicated operation and long assay times. In our study, an AFP-L3 recovery experiment was carried out for a parallel comparison of the micro-centrifugal column method and our magnetic microsphere method for the enrichment of AFP-L3; the results showed that both methods have high enrichment ability, with a correlation coefficient γ=0.985. However, the micro-centrifugal column method is highly dependent on manual operation and requires repeated centrifugation. Magnetic polymer microspheres, by contrast, can be easily and quickly separated from the medium under an external magnetic field; therefore, the microsphere method can meet the needs of automated clinical testing of large numbers of samples. The automatic time-resolved fluorescence immunoassay of serum AFP-L3 via LCA magnetic cationic polymeric liposomes established in this study is a completely new method with the following advantages: (1) short assay time; (2) automated operation; (3) high AFP-L3 enrichment ability.
Conclusions
In this study, alpha-fetoprotein variants were pre-separated on nucleic acid extraction separators using polymer-core magnetic beads (LCA-MMLs) modified with Lens culinaris agglutinin (LCA) and then detected by time-resolved fluorescence immunoassay. The magnetosphere method established in this study is a new method for the separation and detection of AFP variants with the following characteristics: shortened detection time, easy automated operation, stronger ability to enrich AFP variants, and accurate analysis.
Clinical results show that AFP-L3 is a highly specific marker for the diagnosis of liver cancer. This study provides a new means for rapid detection of AFP-L3 and for the diagnosis and prognosis of hepatocellular carcinoma.
Author Contributions
Xiaofei Liang and Ying Li contributed to the conception and design of the study; Kai Wang and Yuzhong Li performed the experimental work; Xiaowei Wang and Jianpeng Jiao contributed to the analysis and representation of data; Wenyue Gu contributed to the sample selection; Kai Wang and Xiaofei Liang wrote the manuscript. All authors contributed to data analysis, drafting or revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.
Asymmetric synthesis and biological activities of natural product (+)-balasubramide and its derivatives
Abstract The natural product (+)-balasubramide (3j) and its derivatives (3a–3i) were synthesized using a two-step asymmetric synthesis, and the biological activities of 3a–3j were determined in vitro. Methyl (2S,3R)-(+)-3-phenyloxirane-2-carboxylate (1h), the asymmetric synthesis of which was described in a previous paper, was selected as the starting material. Compounds 3a–3j were evaluated for their neuroprotective, antioxidative, and anti-neuroinflammatory effects. (+)-Balasubramide and its derivatives with different electronegative groups in the 6-phenyl ring produced little neuroprotection and antioxidation, but induced potent anti-neuroinflammatory effects in BV-2 microglial cells (with the exception of 3g). Compound 3c, with a trifluoromethyl group in its 6-phenyl ring, was a particularly potent anti-neuroinflammatory agent. These results demonstrated that the electronegativity of the 6-phenyl ring of (+)-balasubramide is an important determinant of its inhibitory effect on neuroinflammation. More electronegative substituents result in more potent anti-neuroinflammatory effects. Moreover, cytotoxicity assays indicated no significant effects of the tested compounds.
Introduction
(+)-Balasubramide is an eight-membered lactam compound with an absolute configuration of 5S,6R (Juárez-Calderón et al. 2013) that is extracted from the leaves of the Sri Lankan plant Clausena indica (Riemer et al. 1997), along with its biosynthetic precursor, (+)-prebalamide (Figure 1). Studies have shown that five-membered lactam compounds extracted from C. indica have extensive pharmacological uses. For example, clausenamide (Figure 1) produces hepatoprotective (Yang et al. 1987) and neuroprotective effects (Xue et al. 2008), inhibits apoptosis (Yao et al. 2001) and lipid peroxidation (Lin et al. 1992), and acts as an oxygen free radical scavenger (Jiang & Zhang 1998). However, eight-membered lactam compounds are rarely reported to have pharmacological activity. Total synthesis of (+)-balasubramide is difficult because of its eight-membered lactam ring and two chiral centers. Thus, the biological activities and preliminary structure-activity relationships (SARs) of related compounds have not yet been reported.
At present, although there are three synthetic routes to balasubramide (Johansen et al. 2007; Yang et al. 2007; Zheng et al. 2009), (+)-balasubramide, a chiral natural product, could be obtained by only one method (Johansen et al. 2007), which relied on chiral resolution with an overall yield of only 17%.
We investigated an additional method for the total synthesis of (+)-balasubramide to allow us to study its bioactivity and to perform a preliminary analysis of the SARs of (+)-balasubramide and its derivatives. We previously reported a short asymmetric synthetic route to methyl (2S,3R)-(+)-3-phenyloxirane-2-carboxylate (1h) (Xuan et al. 2013). In this study, we used compound 1h as the starting material to synthesize a series of (+)-balasubramide derivatives and explored their biological activities, including neuroprotective, anti-neuroinflammatory, and cytotoxic effects.
In recent years, asymmetric catalysis, a catalytic method now standing alongside enzyme catalysis and metal catalysis, has become an important tool for building chiral molecular scaffolds owing to its high efficiency and selectivity (Wang et al. 2015; Woźniak et al. 2015; Zhao et al. 2015). Our group initially reported a model reaction of cinnamaldehyde with an organocatalyst, a diphenylprolinol TES ether, to synthesize methyl (2S,3R)-(+)-3-phenyloxirane-2-carboxylate (1h) via a one-pot reaction with 73% yield and 95% enantioselectivity (Xuan et al. 2013). In this report, we used compound 1h to prepare the linear amide 2j by amine–ester interchange with N-methyltryptamine, followed by intramolecular cyclization using ytterbium(III) triflate (Yb(CF3SO3)3) as a catalyst, affording (+)-balasubramide (3j) (Figure 2).
To introduce substituents with varying electronegativity into the 6-phenyl ring, cinnamaldehydes with different substituents were employed to prepare the (+)-balasubramide analogues by organocatalytic asymmetric epoxidation, followed by oxidative esterification, an amine–ester interchange reaction, and intramolecular cyclization (Figure 3).
Chemistry
In the conversion of methyl (2S,3R)-(+)-3-phenyloxirane-2-carboxylate into the linear amide 2j, an amine–ester interchange reaction with N-methyltryptamine was carried out at room temperature in CH3OH for 10 h but gave only a 25% yield. We therefore investigated reaction temperatures below −18 °C and catalytic basic additives such as NaHCO3, K2CO3, Na2CO3, t-BuOK, and CH3ONa. With CH3ONa as the additive, the process gave a good yield of compound 2j. Despite our best efforts, the resultant amide 2j could not be separated from at least one other unidentified compound; we speculate that 2j decomposed during column chromatography owing to its instability, but the mixture could be carried into the next step without purification. The intramolecular cyclization of compound 2j was strongly influenced by the activity of the Lewis acid. With AlCl3, FeCl3, or CuCl, compound 3j was not obtained, whereas LaCl3, p-TSA, and Yb(CF3SO3)3 gave compound 3j in varying yields. Yb(CF3SO3)3 produced the best yield with >99% enantioselectivity. Although the solvent had no effect on enantioselectivity in the last reaction, a polar solvent increased the yield, as evidenced by the improvement produced by the addition of tetrahydrofuran.
In vitro biological evaluation
(+)-Balasubramide (3j) and its derivatives 3a–3i were evaluated for their in vitro biological activities, including neuroprotective, antioxidative, and anti-neuroinflammatory effects. As shown in Tables S1–S3, compounds 3a–3j exhibited no significant neuroprotective effects in primary neurons challenged with glutamate or nutrient deprivation, and no significant antioxidative effects in PC12 neuronal cells exposed to H2O2. All compounds were further assayed for in vitro anti-neuroinflammatory effects against LPS-induced expression of the pro-inflammatory cytokine TNFα in microglial cells. The results are shown in Figure 4 and Table S4. With the exception of compound 3g (3-Cl-substituted), the target compounds at 10 μM markedly inhibited LPS-induced TNFα release in BV-2 microglial cells. Compound 3c, bearing a strongly electronegative substituent (4-CF3), significantly inhibited LPS-induced TNFα release at 1 and 10 μM in a dose-dependent manner. In addition, compound 3c dose-dependently inhibited LPS-induced TNFα gene expression (Figure S1). These results indicate that a strongly electronegative substituent on the 6-phenyl ring may be required for the anti-neuroinflammatory effects of (+)-balasubramide derivatives. Meanwhile, in the MTT cytotoxicity assay in BV-2 microglial cells (Figure S2), (+)-balasubramide and its derivatives 3a–3j showed no cytotoxic effects.
Conclusions
In this report, we describe a convenient and efficient synthesis of natural (+)-balasubramide with a 44% overall yield and excellent enantioselectivity (>99%). Substituents with varying electronegativity were introduced into the 6-phenyl ring of (+)-balasubramide, and the resulting target compounds 3a–3j were evaluated for neuroprotective, antioxidative, and anti-neuroinflammatory effects. Our results indicate that natural (+)-balasubramide and its derivatives have little neuroprotective or antioxidative effect but significantly inhibit neuroinflammation (with the exception of 3g). These results show that the electronegativity of the 6-phenyl ring of (+)-balasubramide and its derivatives is an important determinant of their inhibitory effects on neuroinflammation; substituents with stronger electronegativity produce more potent anti-neuroinflammatory effects. Our preliminary SAR study provides information that could facilitate the design of novel anti-neuroinflammatory drug candidates or leads. These molecular structures may be worthy of future study with the aim of developing new anti-neuroinflammatory agents. Further studies to improve the anti-neuroinflammatory activity and clarify the molecular mechanism of these compounds are in progress.
Supplementary material
All experimental sections relating to this article are available online, alongside Tables S1–S4 and Figures S1–S43.
Disclosure statement
No potential conflicts of interest were reported by the authors.
Climbing to the top of the galactic mass ladder: evidence for frequent prolate-like rotation among the most massive galaxies
We present the stellar velocity maps of 25 massive early-type galaxies located in dense environments, observed with MUSE. Galaxies are selected to be brighter than M_K = -25.7 mag, to reside in the core of the Shapley Super Cluster, or to be the brightest galaxy in clusters richer than the Virgo Cluster. We thus targeted galaxies more massive than 10^12 Msun and larger than 10 kpc (half-light radius). The velocity maps show a large variety of kinematic features: oblate-like regular rotation, kinematically distinct cores and various types of non-regular rotation. The kinematic misalignment angles show that massive galaxies can be divided into two categories: those with small or negligible misalignment, and those with misalignment consistent with 90 degrees. Galaxies in this latter group, comprising just under half of our galaxies, have prolate-like rotation (rotation around the major axis). Among the brightest cluster galaxies the incidence of prolate-like rotation is 50 per cent, while for a magnitude-limited sub-sample of objects within the Shapley Super Cluster (mostly satellites), 35 per cent of galaxies show prolate-like rotation. Placing our galaxies on the mass-size diagram, we show that they all fall on a branch extending almost an order of magnitude in mass and a factor of 5 in size beyond the massive end of early-type galaxies, previously recognised as associated with major dissipation-less mergers. The presence of galaxies with complex kinematics and, particularly, of prolate-like rotators suggests, according to current numerical simulations, that the most massive galaxies grow predominantly through dissipation-less equal-mass mergers.
INTRODUCTION
The orbital structure is a powerful tracer of the formation processes shaping galaxies. As galaxies acquire gas, accrete satellites or merge with similar size objects, new populations of stars are created and the mass and luminosity distributions evolve. The changes in the gravitational potential have a direct influence on the allowed and realised trajectories, providing for a variety of observed stellar kinematics. As observers, we thus hope to constrain the ingredients (and chronology) which shaped galaxies by probing the spatial variations of the line-of-sight velocity distribution (LOSVD).
Theoretical insights, based on analytical and numerical work, are crucial for the interpretation of the observed stellar kinematics of galaxies. In an idealised system with triaxial symmetry, assuming a gravitational potential expressed in a separable form (e.g. Stäckel potentials as introduced by Eddington 1915), there exist a few families of dissipation-less orbits which stars can adopt: box orbits, short-axis tubes, and inner and outer long-axis tubes (de Zeeuw 1985). In such systems, symmetry changes, for example between spherical, oblate or prolate axial symmetries, limit the stability of orbital families. de Zeeuw (1985) showed that a purely oblate spheroid should consist of only short-axis tubes, and therefore show a typical streaming around its minor axis, unless there is an equal amount of stars on both prograde and retrograde orbits cancelling out the net streaming. A prolate spheroid allows only inner and outer long-axis tubes, and streaming around the major axis of the galaxy. The argument can also be reversed to state that galaxies with only long-axis tubes cannot be oblate and axisymmetric, or even triaxial, and that a galaxy with short-axis tubes does not have prolate symmetry.
The velocity maps of triaxial spheroids, viewed at random angles, can exhibit a rich variety of kinematic features. This is a direct consequence of the freedom in the direction of the total angular momentum resulting from the orbital mixture, and of the momentum vector, which can lie anywhere in the plane containing the major and minor axes of the galaxy. This was illustrated by Statler (1991) with models viewed along various orientation angles, and associated with actually observed galaxies with complex kinematics (e.g. NGC 4356 and NGC 5813; van den Bosch et al. 2008; Krajnović et al. 2015, respectively). Observational studies using long-slits were able to investigate velocity features along selected angles (often along the minor and major photometric axes), and revealed that a majority of galaxies exhibit negligible rotation along their minor photometric axis (e.g. Bender et al. 1994), while a few massive elliptical galaxies show more complex rotation, indicating the presence of long-axis tubes and significant rotation around their major axis (e.g. Illingworth 1977; Schechter & Gunn 1979; Wagner et al. 1988). A major change in this field came from the proliferation of integral-field spectrographs (IFS) and their ability to map the distribution of velocities over a significant fraction of the galaxy. The last decade of IFS observations has revealed that the vast majority of galaxies actually has very regular velocity maps within their half-light radii (e.g. Emsellem et al. 2004; Krajnović et al. 2011; Houghton et al. 2013; Scott et al. 2014; Fogarty et al. 2015; Graham et al. 2018).
The ATLAS3D project (Cappellari et al. 2011a) addressed this more specifically via a volume-limited survey of nearby early-type galaxies, demonstrating that galaxies with complex velocity maps comprise only about 15% of the local population of early-type galaxies (Krajnović et al. 2011), and that the majority is consistent with oblate rotators (notwithstanding the presence of a bar; see Krajnović et al. 2011; Weijmans et al. 2014). The regular and non-regular rotator classes seem to reflect a significant difference in their specific stellar angular momentum content, allowing an empirical division of early-type galaxies into fast and slow rotators (Emsellem et al. 2007, 2011). Krajnović et al. (2008) also emphasised the fact that axisymmetric fast rotators have regular velocity fields which qualitatively resemble those of disks. The internal orbital structure of these galaxies can, however, be complex, as evidenced by the range of photometric properties (e.g. disk-to-bulge ratio) and the common presence of tumbling bars.
There are several caveats which need to be emphasised. Firstly, the intrinsic shape of a galactic system is seldom well defined by a single number, e.g., the apparent ellipticity varies with radius. Along the same lines, the terms "triaxial" or "oblate" systems may not even be appropriate when the intrinsic ratios and/or the position angle of the symmetry axes change with distance from the centre: the gravitational potential of a galaxy could smoothly vary from oblate in the centre to strongly triaxial or prolate in the outer part, with the main symmetry axes not even keeping the same orientation. Secondly, ellipsoids are certainly a very rough approximation when it comes to describing the intrinsic shapes of galaxies, as they have overlapping components with different flattenings, varying bulge-to-disk ratios, and often host (tumbling) bars. While the observed kinematics of fast rotators (including also higher moments of the LOSVD; Krajnović et al. 2008, 2011) seem to indicate that their internal orbital structure is dominated by short-axis tube orbits (and streaming around the minor axis), numerical simulations of idealised mergers and those performed within a cosmological context naturally predict the co-existence of multiple orbital families, the central and outer regions often being dominated by box and short-axis tube orbits, respectively (e.g. Jesseit et al. 2005; Hoffman et al. 2010; Röttgers et al. 2014).
The division of galaxies into fast and slow rotators connects also with two dominant channels of galaxy formation (as reviewed in Cappellari 2016). Present spirals and fast rotators are mostly descendants of star-forming disks, and their evolution is dominated by gas accretion, star formation, bulge growth and eventual quenching. The slow rotators may also start as turbulent star-bursting disks at high redshift (e.g. Dekel et al. 2009; Kereš et al. 2009), or be hosted by haloes with small spin parameters (Lagos et al. 2017), but the late evolution of most slow rotators is dominated by mergers with gas-poor galaxies (De Lucia & Blaizot 2007; Dekel et al. 2009; Williams et al. 2011; Kaviraj et al. 2015). The first channel therefore favours regular kinematics and an internal orbital structure dominated by short-axis tubes, while the second channel implies a dynamically violent redistribution of orbits and the creation of triaxial or prolate-like systems, which include a significant fraction of long-axis tubes. A strong mass dependence has been emphasised, with more massive galaxies being more likely to follow the second channel (Rodriguez-Gomez et al. 2016; Qu et al. 2017).
A clear manifestation of the triaxial nature of galaxies is a non-zero value of the kinematic misalignment angle, Ψ, the angle between the photometric minor axis and the orientation of the apparent angular momentum vector. In an axisymmetric galaxy, the apparent angular momentum coincides with the intrinsic angular momentum and lies along the minor axis, hence Ψ = 0. Triaxial galaxies can exhibit any value of Ψ, while prolate galaxies with significant rotation would have Ψ closer to 90°. Galaxies with large Ψ exist (Cappellari et al. 2007; Krajnović et al. 2011; Fogarty et al. 2015; Tsatsi et al. 2017) and are typically more massive than 10^11 Msun (but for dwarf galaxies see e.g. Ho et al. 2012; Ryś et al. 2013). It is, however, not clear if prolate-like systems feature prominently at high mass and if this links preferentially to a specific channel of galaxy evolution.
Galaxies at the top of the mass distribution are intrinsically rare. They are mostly found in dense environments, often as the brightest members of groups or clusters. Brightest cluster galaxies (BCGs) are usual suspects, and are known to have low-amplitude or zero rotation (Loubser et al. 2008; Jimmy et al. 2013; Oliva-Altamirano et al. 2017). Still, current surveys of massive galaxies have so far offered little evidence for large Ψ values or clear-cut signatures of strong triaxiality (e.g. Veale et al. 2017b). In this work, we present the first results from an observation-based survey, the M3G (MUSE Most Massive Galaxies; PI: Emsellem) project, aimed at mapping the most massive galaxies in the densest galaxy environments at z ≈ 0.045 with the MUSE/VLT spectrograph (Bacon et al. 2010). We focus on presenting the stellar velocity maps, emphasising the relatively large number of prolate-like systems, i.e., galaxies with rotation around the major axis. The orbital distribution of galaxies exhibiting large values of Ψ (and having net rotation around the major axis) is thought to be dominated by long-axis tubes: we will thus refer to such cases as prolate-like rotation (throughout, "regular rotation" denotes velocity maps resembling those of discs, while "non-regular rotation" is used for twisted and complex velocity maps). However, as mentioned above, and discussed in Section 4, we caution the reader that this does not imply that these are prolate systems. Presenting the complete survey, its data products and subsequent results is beyond the scope of the current publication and will be done in forthcoming papers.
In Section 2 we briefly report on the observations and the data analysis. We present the main results on the rotational characteristics of the M3G sample in Section 3, which is followed by a discussion in Section 4 and a brief summary of conclusions in Section 5.
OBSERVATIONS AND ANALYSIS
In this section we briefly describe the M3G sample of galaxies, the observations and the extraction of the kinematic information. Further details on these aspects will be presented in a following M3G paper (Krajnović et al. in prep.).
The M3G sample and MUSE observations
The M3G sample comprises 25 early-type galaxies selected to be brighter than -25.7 magnitude in the 2MASS Ks-band and found in the densest environments. We created two sub-samples of galaxies: one consisting of the brightest galaxies in the densest known structure, the core of the Shapley Super Cluster (SSC) (Shapley 1930; Merluzzi et al. 2010, 2015), and the other targeting BCGs in rich clusters. We selected galaxies in the SSC using the 2MASS All-Sky Extended Source Catalog (XSC; Jarrett et al. 2000; Skrutskie et al. 2006) centred on the three main clusters near the core of the SSC: Abell 3562, 3558 and 3556 (Abell et al. 1989). This selection yielded 14 galaxies, 3 of which are BCGs. The complementary sub-sample of BCGs was defined using a parent sample of clusters richer than the Virgo Cluster and observed with the HST (Laine et al. 2003). We included 11 BCGs residing in clusters with richness larger than 40, where the richness is defined as the number of galaxies with magnitudes between m_3 and m_3 + 2 within an Abell radius of the cluster centre (m_3 is the magnitude of the third brightest cluster galaxy). Here we also used the information given in Laine et al. (2003). The full M3G sample therefore consists of 14 galaxies in the SSC and 14 BCGs (three being in the SSC). In this paper we use 2MASS photometry as a reference, but as part of the M3G project we have collected photometry from other imaging campaigns, which will be described in detail in future papers.
In addition to the visibility requirement that the galaxies are observable from Paranal, we imposed a selection criterion based on the distance and size of the galaxies: these had to be such that the MUSE field-of-view covers up to two effective radii of each target. The effective radii were collected from the XSC catalog, using the k_r_eff keyword. The most massive galaxies in the SSC have the right combination of parameters to satisfy this criterion, while the additional 11 BCGs were selected to be at similar redshifts. The galaxies span the redshift range 0.037 < z < 0.054, with a mean of z = 0.046. The redshift of the SSC is assumed to be 0.048 (Metcalfe et al. 1987). Adopting the cosmology H_0 = 70 km/s/Mpc, Ω_M = 0.3, Ω_Λ = 0.7, 1 arcsec corresponds to 904 pc at the mean redshift of the sample, while this scale changes from 735 to 1050 pc between galaxies (Wright 2006).
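These angular scales follow directly from the adopted cosmology; a minimal sketch using astropy (the package choice and rounding are ours, not the paper's) reproduces them:

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in the text: H0 = 70 km/s/Mpc, Omega_M = 0.3 (flat)
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

# Physical scale subtended by 1 arcsec at the extreme and mean redshifts
for z in (0.037, 0.046, 0.054):
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.pc / u.arcsec)
    print(f"z = {z}: 1 arcsec = {scale:.0f}")
```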
The observations of the sample were performed within the MUSE Guaranteed Time Observations (GTO) during ESO Periods 94-99 (starting in the fall of 2014 and finishing in the spring of 2017). The observing strategy consisted of combining a short Observing Block (OB) of exposures taken during better-than-average seeing conditions (< 0.8 arcsec) to map the central structures, and a set of OBs with longer exposure times to reach a sufficient signal-to-noise ratio (S/N) at two effective radii. The high spatial resolution (short exposure time) MUSE data will be presented in a forthcoming paper. The total exposure time for each galaxy varied from about 2 to 6 hours. The brightest galaxy in the sample (see Table 1 for details) was mosaiced with 2 × 2 MUSE fields, each observed for up to 6 h. All individual OBs consisted of four on-target observations and two separate sky fields sandwiched between the on-target exposures. On-target observations were each time rotated by 90° and dithered in order to reduce the systematics introduced by the 24 MUSE spectrographs.
Data reduction and kinematics extraction
Data reduction was performed as the observations were completed. This means that several versions (from v1.2 to the latest v1.6) of the MUSE data reduction pipeline (Weilbacher et al. 2014) were used. Despite continued improvement of the reduction pipeline, given the brightness of the M3G sample and the nature of the current study, the differences between the reductions do not affect the results and conclusions presented here. All reductions followed the standard MUSE steps, producing the master calibration files of the bias and flat fields, as well as providing the trace tables, wavelength calibration files and line-spread function for each slice. When available, we also used twilight flats. Instrument geometry and astrometry files were provided by the GTO team for each observing run. These calibration files, as well as the closest-in-time illumination flats obtained during the night, were applied to the on-target exposures. From the separate sky fields we constructed the sky spectra, which were associated with the closest-in-time on-target exposure, and from the observation of a standard star (for each night) we extracted the response function as well as an estimate of the telluric correction. These, together with the line-spread function (LSF) and the astrometric solution, were used during the science post-processing. The final data cubes were obtained by merging all individual exposures. As these were dithered and rotated, a precise alignment scheme was required. This was achieved using stars or unresolved sources, and, for a few cases in which the MUSE field-of-view was devoid of such sources, using the surface-brightness contours in the central regions. The final cubes have the standard MUSE spatial spaxel of 0.2 × 0.2 arcsec and a spectral sampling of 1.25 Å per pixel.
As a first step before the extraction of the kinematics, we proceeded to spatially bin each data cube to homogenise the signal-to-noise ratio throughout the field-of-view via the Voronoi binning method (Cappellari & Copin 2003). We first estimated the S/N of individual spectra from the noise propagated by the reduction pipeline, masking all stars or satellite galaxies within the field-of-view. Spatial binning is ultimately an iterative process, in which our goal was to achieve relatively small bins beyond one effective radius which still provide a sufficient signal for the extraction of robust kinematics. The quality of the extraction was measured using the signal-to-residual-noise ratio (S/rN), where the residual noise is the standard deviation of the difference between the data and the model (as explained below). The S/rN was required to be similar to the target S/N in bins at large radii. As the data quality varies between galaxies, it is possible for some galaxies to have sufficiently small bins in the central regions with S/rN ∼ 100, while for other galaxies S/rN ∼ 50 is the most that can be achieved for a reasonable bin size. For this work, we set the target S/N required by the Voronoi binning method to 50 for all galaxies. Additionally, before binning we removed all spectra (individual spaxels) with S/N less than 2.5 in the continuum (based on the pipeline-estimated noise). In this way we excluded the spectra at the edge of the MUSE FoV, which essentially do not contain any useful signal, and limited the sizes of the outermost bins.

Figure 1. An example of the pPXF fit to a spectrum extracted within an effective radius from PGC047177, which also shows ionised gas emission. The observed spectrum is shown in black and the best fit in red. Green dots are residuals. Light green shaded areas were masked and not fitted. These include the strongest emission lines expected between 4500 Å and 7000 Å, as well as potential strong sky-line or telluric residuals.
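As an illustration of the binning step described above, the following sketch applies the same vorbin routine (Cappellari & Copin 2003) to a mock spaxel grid; the toy light profile, noise level and coordinates are placeholders we invented, not the paper's data:

```python
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# Mock spaxel list standing in for pipeline products: coordinates in arcsec,
# a toy Gaussian light profile, and constant noise per spaxel.
rng = np.random.default_rng(1)
x, y = (g.ravel() for g in np.meshgrid(np.arange(-15, 15, 0.4),
                                       np.arange(-15, 15, 0.4)))
signal = 100.0 * np.exp(-(x**2 + y**2) / (2 * 5.0**2))
noise = np.ones_like(signal)

# Discard essentially empty edge spaxels (continuum S/N < 2.5, as in the text)
good = signal / noise > 2.5

# Adaptively bin to the target S/N of 50 used for all M3G galaxies
bin_num, x_node, y_node, x_bar, y_bar, sn, n_pix, scale = voronoi_2d_binning(
    x[good], y[good], signal[good], noise[good], 50, plot=False, quiet=True)
print(f"{bin_num.max() + 1} bins; faintest-bin S/N = {sn.min():.1f}")
```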
Stellar kinematics were extracted using the pPXF method (Cappellari & Emsellem 2004). Our pPXF set-up included an additive polynomial of the 4th order, and we fitted a line-of-sight velocity distribution parametrised by Gauss-Hermite polynomials (van der Marel & Franx 1993; Gerhard 1993) with the mean velocity V, the velocity dispersion σ and the higher-order moments h3 and h4. We masked all potential emission lines and a few narrow spectral windows with possible sky-line residuals. Finally, we limited the fit to blue-wards of 7000 Å, to exclude potentially strong telluric and sky residuals. For each galaxy, a pPXF fit was first performed on the spectrum obtained by summing all MUSE spectra within one effective radius (covering an elliptical area equivalent to π × R_e^2), using the full MILES stellar library (Sánchez-Blázquez et al. 2006; Falcón-Barroso et al. 2011) as templates. The MUSE LSF varies significantly with wavelength, with a full width at half maximum going from 2.85 Å at 5000 Å to 2.5 Å at 7000 Å (Guérou et al. 2017). We used the parametrisation of the LSF from Guérou et al. (2017) and convolved the MILES templates to the (wavelength-dependent) MUSE LSF. Possible emission lines were masked. As an example, we show the fit to the spectrum extracted within the half-light radius of one of our galaxies in Fig. 1.
This first global pPXF fit provides an optimal set of stellar templates, which we propagate to each individual Voronoi-binned spectrum, using the same pPXF set-up. The quality of the fit was checked via the S/rN of each bin, where the residual noise was the standard deviation of the difference between the data and the best-fit pPXF model. As outlined above, this S/rN was required to be at least 50 over most of the field-of-view. We extracted up to the 4th Gauss-Hermite coefficient, but in this work we primarily focus on the mean velocity maps of our 25 galaxies. The velocity dispersion and higher-order velocity moment maps, as well as the analysis of the angular momentum, will be presented in a future paper. The global velocity dispersion values are given in Table 1.
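To make the fitting step concrete, here is a self-contained toy run of pPXF with the set-up described above (4th-order additive polynomial, four LOSVD moments); the single Gaussian-line "template" and the synthetic "galaxy" spectrum are stand-ins we fabricated, not MILES templates or MUSE data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from ppxf.ppxf import ppxf

velscale = 55.0                       # km/s per pixel on the ln(lambda) grid
n = 2000
pix = np.arange(n, dtype=float)

# Toy template: flat continuum with one Gaussian absorption line
template = 1.0 - 0.4 * np.exp(-0.5 * ((pix - 1000.0) / 4.0) ** 2)

# Synthetic "galaxy": template broadened to sigma = 250 km/s,
# shifted by V = 150 km/s, plus noise
sig_pix, shift_pix = 250.0 / velscale, 150.0 / velscale
galaxy = np.interp(pix - shift_pix, pix, gaussian_filter1d(template, sig_pix))
noise = np.full(n, 0.01)
galaxy = galaxy + np.random.default_rng(2).normal(0.0, 0.01, n)

pp = ppxf(template, galaxy, noise, velscale, start=[0.0, 200.0],
          moments=4,     # fit V, sigma, h3 and h4
          degree=4,      # 4th-order additive polynomial, as in the text
          quiet=True)
print(pp.sol)            # recovers roughly [150, 250, 0, 0]
```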
PREVALENCE OF LONG-AXIS ROTATION IN MASSIVE GALAXIES
The velocity maps for the full M3G sample are shown in Fig. 2. The sample is split into BCGs (first 14 maps) and non-BCGs from the SSC. In each subgroup galaxies are plotted in order of decreasing 2MASS K-band absolute luminosity. There are several noteworthy features in these maps, which we interpret within the kinematic classification system of the ATLAS3D survey (Krajnović et al. 2011). To start with, we note that almost all galaxies show some level of rotation. While the maximum velocity amplitudes reached within the two effective radii covered by our MUSE observations are often low (≈ 30-50 km/s), only one galaxy, e) PGC 043900, does not show a clear indication of net streaming motion within the field of view. This is somewhat different from the trend expected from the ATLAS3D data (Emsellem et al. 2011; Krajnović et al. 2011), where a few of the most massive systems (about 15 per cent for galaxies more massive than 2 × 10^11 Msun) can be characterised as having no net rotation. Other studies of massive galaxies (e.g. Veale et al. 2017b) also find a large number of galaxies with negligible net rotation. It is likely that, as in the case of NGC 4486 (Emsellem et al. 2014), our MUSE data are of such quality and extent that the rotation is revealed even in systems such as r) PGC 047590, where the amplitude of the rotation is only 30 km/s. The coverage beyond one effective radius helps to determine the net rotation trend, but also reveals changes in the kinematics. This is especially noticeable among BCGs, where the velocity maps change orientation (e.g. b) PGC 048896), or there is a loss of coherent motions (e.g. h) PGC 065588). Non-BCGs, which we will call satellites in this context, do not show such changes. It might be the case that the changes are found at larger radii (as for some lower-mass fast rotators, Arnold et al. 2014), but there is no clear evidence for this within 2 R_e. Another striking feature is that there are galaxies which show regular rotation, with peak velocities in excess of 200 km/s. These galaxies are in fact in the lower-luminosity bin of our set of massive galaxies, and found within the group of satellites. Galaxies that belong to this class are v) PGC046860, u) PGC047177, y) PGC047273, x) PGC047355 and w) PGC097958.

Figure 2. Velocity maps of the M3G sample. BCGs are plotted in the first 14 panels starting from the top left, followed by satellites (as indicated with "SAT"). The two groups of galaxies are ordered by decreasing K-band absolute magnitude. The values in the lower right corner of each panel indicate the range of the velocities, where negative values are shown in blue and positive in red, as indicated by the colourbar. Black dashed contours are isophotes plotted in steps of one magnitude. All velocity maps are approximately 1 × 1 arcmin in size. Full red ellipses indicate the size and orientation of the half-light region, specified by the ellipticity of the galaxy and a semi-major axis length equal to the 2MASS Ks-band effective radius. Green and brown lines indicate the orientation of the kinematic and the photometric major axes, respectively. Letters in the upper right corner of each panel ("PRO", "TRI" and "OBL") indicate broad shape-related categories of the galaxy based on the kinematic misalignment (see Fig. 3 for details). Note that PGC 043900 is characterised as "TRI" due to its non-rotation. The letters in front of the galaxy names are used in the text for easier location of the objects.
Their dynamical masses (see Section 4) are around 10^12 Msun, and they are all among the most massive galaxies with regular rotation. Their existence is expected (e.g. Brough et al. 2007; Loubser et al. 2008; Veale et al. 2017b; Lagos et al. 2017), although their number likely decreases with increasing mass (e.g. Krajnović et al. 2011; Jimmy et al. 2013; Houghton et al. 2013; Veale et al. 2017b; Brough et al. 2017). The fact that these galaxies are not found among BCGs is indicative of a less violent evolution that maintained their regular rotation. However, there is also the case of a) PGC 047202, the largest and most luminous galaxy in the SSC, and a BCG, which shows a high level of rotation, albeit non-regular.
Non-regular rotation is the most common characteristic of the M3G velocity maps. It is especially frequent among BCGs, but it also occurs in non-BCGs. The existence of kinematically distinct cores (KDCs), counter-rotation and the radial variation of the kinematic position angle, as well as the analysis of the velocity features beyond the effective radius, will be discussed in a future paper. Here we quantify the kinematic misalignment angle Ψ as the difference between the position angle defined by the photometric major axis (PA_phot) and the global kinematic position angle (PA_kin), measured approximately within 1 effective radius. We measure PA_kin using the method presented in Appendix C of Krajnović et al. (2006), which provides a global orientation of the velocity map. PA_phot was measured by calculating the moments of inertia of the surface-brightness distribution from the MUSE white-light images (obtained by summing the MUSE cubes along the wavelength dimension). At the same time, this method provides the global ellipticity ε. As we used MUSE cubes for both PA_kin and PA_phot, they were estimated approximately within the same region. In Table 1 we report the measured photometric and kinematic position angles as well as other relevant properties used in this paper.
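A sketch of this measurement, using the publicly available pafit implementation of the Krajnović et al. (2006) method on a fabricated velocity field (the bin coordinates, velocities and the photometric PA below are invented for illustration; sign and zero-point conventions should be checked against the code's documentation):

```python
import numpy as np
from pafit.fit_kinematic_pa import fit_kinematic_pa

# Fake Voronoi-bin coordinates (arcsec) and velocities: a weak rotator whose
# velocity gradient runs along y, i.e. rotation around the x-axis.
rng = np.random.default_rng(3)
x, y = rng.uniform(-30, 30, size=(2, 800))
vel = 50.0 * np.tanh(y / 10.0) + rng.normal(0.0, 5.0, 800)

pa_kin, pa_kin_err, v_syst = fit_kinematic_pa(x, y, vel, plot=False, quiet=True)

# Kinematic misalignment, sin(Psi) = |sin(PA_phot - PA_kin)|;
# PA_phot = 90 deg mimics a photometric major axis along x.
pa_phot = 90.0
psi = np.degrees(np.arcsin(abs(np.sin(np.radians(pa_phot - pa_kin)))))
print(f"PA_kin = {pa_kin:.1f} deg, Psi = {psi:.1f} deg")  # ~90: prolate-like
```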
Kinematic and photometric position angles are shown in Fig. 2 as green and brown lines, respectively. Systems with regular rotation have almost overlapping lines, while systems with non-regular rotation often have a kinematic misalignment angle Ψ close to 90°. To quantify this, we also present the distribution of Ψ as a function of the projected ellipticity of the galaxies in the M3G sample in Fig. 3. We split the galaxies into BCGs and satellites and draw two horizontal lines at 15° and 75° to separate oblate, triaxial and prolate geometries.
The most noteworthy characteristic of Fig. 3 is that galaxies seem to group in two regions, one with low and one with high Ψ. Galaxies with Ψ < 15° are generally consistent with having oblate symmetry. Their velocity maps look regular, and all galaxies with high rotation amplitudes are found in this group. In order of rising ellipticity these BCGs are: l) PGC 004500, m) PGC 049940, [...] kinematic twists and are not regular, but the velocity maps are close to aligned with their photometric axes. Galaxies with Ψ significantly larger than 0 (and lower than 90) cannot be axisymmetric, as their net angular momentum is not aligned with one of the principal axes. Very indicative is also that 8 galaxies have Ψ > 75°, while for one galaxy (e) PGC 043900) it was not possible to determine Ψ, as it does not show rotation. A closer examination of those 8 galaxies shows rotation around the major axis within a large fraction of the half-light radius. These galaxies exhibit prolate-like rotation, as defined in Section 1, within a significant part of the MUSE field-of-view. The rotation amplitude is, as in the case of other non-regular rotators, typically small, mostly around 50 km/s or lower, and the observed (luminosity-weighted) rotation has to be supported by the existence of long-axis tube orbits.

Figure 3. Distribution of the kinematic misalignment angle as a function of ellipticity, both measured within the effective radius of the M3G sample galaxies. Red circles show BCGs, while blue squares are non-BCGs in the SSC (we call them satellites or SAT for simplicity). The symbol with an upper-limit error bar is PGC 043900, the system with no net rotation and, therefore, no reliable PA_kin measurement. Horizontal lines at Ψ = 15° and 75° are used to guide the eye for an approximate separation of the shapes of galaxies between mostly oblate (indicated with "OBL"), triaxial ("TRI") and prolate ("PRO"). These divisions are not meant to be rigorous but indicative. Colours on the right-hand-side histogram follow the same convention as on the main plot and in the legend.
There are 4 galaxies with 15° < Ψ < 75°: h) PGC 065588, p) PGC 047154, g) PGC 015524 and d) PGC 046832 (in order of decreasing Ψ). The first three have a similar rotation pattern to other galaxies with prolate-like rotation. Strictly speaking, their Ψ values are inconsistent with 90°, but their velocity maps resemble those of galaxies with prolate-like rotation. We will therefore also refer to them as having prolate-like rotation. On the other hand, d) PGC 046832 exhibits a very complex velocity map with multiple changes between approaching and receding velocities; its velocity map does not resemble prolate-like rotation, and its Ψ is significantly smaller than for the other galaxies in this group. Therefore, we do not consider it to have prolate-like rotation. As mentioned before, another special case is e) PGC 043900, which does not show any rotation, so its Ψ is not well defined. It is therefore plotted as an upper limit.
The prolate-like rotation comes in two flavours. It can be present across the full observed field-of-view (approximately 2 effective radii), for example in n) PGC007748, h) PGC065588 and r) PGC047590, but most galaxies have it within a specific area, either outside the central region (but within one effective radius, s) PGC099188), or, more typically, covering the full half-light radius (e.g. k) PGC047752, f) PGC003342, c) PGC007300 or b) PGC048896). In these cases, the rotation at larger radii either disappears (e.g. t) PGC047197) or there is a change in the kinematic position angle and the rotation is consistent with being around the minor axis (f) PGC003342, c) PGC073000, b) PGC048896). The change in the kinematic position angle is relatively abrupt and occurs over a small radial range. Therefore, such galaxies could even be characterised as having large-scale KDCs, with the central component exhibiting prolate-like rotation. More typical, standard-size KDCs are found in a few M3G targets (i) PGC019085 and d) PGC046832), but these will be discussed in more detail in a future paper devoted to the analysis of the high-spatial-resolution MUSE data cubes.

Table 1 notes: Column 1: names of galaxies; Column 2: absolute magnitudes; Column 3: kinematic position angle; Column 4: photometric position angle; Column 5: kinematic misalignment error; Column 6: ellipticity; Column 7: velocity dispersion within the effective radius; Column 8: effective radius based on the j_r_eff 2MASS XSC keyword; Column 9: stellar mass; Column 10: galaxy is a BCG - 1, galaxy is a BCG in the SSC - 2, galaxy is a "satellite" in the SSC - 3; Column 11: the letter referring to the position of the object in Fig. 2. Absolute K-band magnitudes are based on the 2MASS K-band total magnitudes and the distance moduli obtained from NED (http://ned.ipac.caltech.edu). The same distance moduli were used to convert sizes to kiloparsecs. Note that while we report actual measurements for the kinematic and photometric position angles, the kinematic misalignment Ψ for PGC043900 is an upper limit, as there is no net streaming in this galaxy. The stellar mass reported in the last column was estimated using columns 7 and 8 and the virial mass estimator from Cappellari et al. (2006).
Finally, for a few galaxies there is evidence for a significant change in the properties of the velocity maps beyond one effective radius: regardless of regular or non-regular rotation within the effective radius, the outer parts show no rotation. They are, however, characterised by a spatially symmetric shift of velocities to larger values compared to the systemic velocity of the galaxy. Examples are the BCGs m) PGC049940, i) PGC019085 and g) PGC015524. Except for stressing that such velocities at larger radii are only found in the BCGs, we postpone the discussion of these features to a future paper, where they will be put in the full context of the kinematics of the M3G galaxies.
DISCUSSION
In Fig. 4 we place the M3G sample on the mass-size diagram. We indicate the type of observed kinematics with different symbols and colours, and also add the galaxies from the ATLAS3D magnitude-limited sample for comparison. Galaxy masses and sizes for ATLAS3D galaxies were obtained from Cappellari et al. (2013b). For M3G objects we used their 2MASS sizes (XSC keyword j_r_eff), defining the size as R_e = 1.61 × j_r_eff, as in Cappellari (2013). Masses of the sample galaxies were approximated using the virial mass estimator M_* = 5 R_e σ_e^2 / G, where σ_e is the effective velocity dispersion extracted from the MUSE data within an ellipse of area equal to π × R_e^2. Using the full M3G sample, up to 44 per cent of galaxies have prolate-like rotation (here we include h) PGC 065588, p) PGC 047154 and g) PGC 015524 with Ψ > 60°, but do not consider e) PGC 043900). The M3G objects located in the SSC form a magnitude-limited subsample within a well-defined environment.
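The estimator is simple enough to evaluate directly; a sketch with astropy units (the example σ_e and R_e values are illustrative, not measurements from Table 1):

```python
import astropy.units as u
from astropy.constants import G

def virial_mass(sigma_e, r_e):
    """Virial mass estimator M* = 5 R_e sigma_e^2 / G (Cappellari et al. 2006)."""
    return (5.0 * r_e * sigma_e**2 / G).to(u.Msun)

# Illustrative numbers of the right order for an M3G galaxy
print(virial_mass(300.0 * u.km / u.s, 20.0 * u.kpc))   # ~2.1e12 solMass
```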
This subsample contains 5/14 (35 per cent) galaxies with prolate-like rotation. Of these 5 galaxies one is a BCG, while the other two BCGs in the SSC, including the most luminous and largest galaxy in the sample, do not show prolate-like rotation. The fraction of prolate-like rotation is somewhat higher among BCGs: in our sample there are 7/14 BCGs with prolate-like rotation (excluding e) PGC 043900 with an uncertain Ψ), or 50 per cent. A comparison with the ATLAS3D sample indicates that galaxies with prolate-like rotation are mostly found among massive galaxies and are typical of dense environments. This can be quantified using the literature data.
Within the ATLAS3D sample there are six known galaxies with prolate-like rotation (NGC 4261, NGC 4365, NGC 4406, NGC 5485, NGC 5557 and NGC 4486), while Tsatsi et al. (2017) found 8 new systems in the CALIFA sample (LSBCF560-04, NGC0810, NGC2484, NGC4874, NGC5216, NGC6173, NGC6338, and UGC10695; Falcón-Barroso et al. 2017). Together with previously known cases such as NGC1052 (Schechter & Gunn 1979), NGC4589, NGC5982 and NGC7052 (Wagner et al. 1988), this means a total of 17 galaxies with apparent prolate-like rotation were previously known in the nearby universe. The MASSIVE survey (Ma et al. 2014) found 11 galaxies with kinematic misalignment larger than 60°, 7 of which have Ψ > 75° and can therefore be considered to have prolate-like rotation (Ene et al. 2018). These galaxies are NGC 708, NGC 1060, NGC 2783, NGC 2832, NGC 7265, NGC 7274, and UGC 2783; all of them except NGC7274 are classified as BCGs or brightest group galaxies (BGGs). A recent study of the kinematic misalignment angle of more than 2000 MANGA galaxies (Graham et al. 2018) also finds a secondary peak at Ψ ∼ 90° among galaxies more massive than 2 × 10^11 Msun. Combining the M3G sample of galaxies with prolate-like rotation with those from the literature, we see that such rotation typically does not occur for M_* ≲ 10^11 Msun, and that for M_* ≳ 10^12 Msun velocity maps with prolate-like rotation correspond to the most populated kinematic category.
Within the M3G sample, prolate-like rotation is mostly found in BCGs, but it is also present in non-BCGs. However, all galaxies in the M3G sample are members of groups or clusters of galaxies. Even when including the literature data, most galaxies with prolate-like rotation have been observed in galaxy clusters or groups. A similar finding is reported by the MASSIVE survey (Ene et al. 2018), where galaxies with prolate-like rotation are almost exclusively found among BCGs/BGGs, and misaligned galaxies in general (Ψ > 15°) are rare in low-density environments but common among BCGs/BGGs or satellites. As the creation of non-regularly rotating, massive galaxies with low angular momentum (the typical hosts of prolate-like rotation) can a priori occur in any environment (e.g. Cappellari et al. 2011b; Veale et al. 2017a), we expect that galaxies with prolate-like rotation, if rare, still exist outside of dense environments. Evidence that this might be so can be seen in recent merger galaxies, such as NGC 1222 (Young et al. 2018) or NGC 7252 (Weaver et al. 2018). These galaxies are in late merging phases and have not yet fully settled, but show prolate-like rotation of the stellar component. What makes them significantly different from other prolate-like systems is their richness in atomic and emission-line gas, as well as ongoing star formation, implying that there are multiple ways of creating prolate-like kinematics. Such galaxies seem, however, to be rare, as Barrera-Ballesteros et al. (2015) do not report a significant incidence of large kinematic misalignment in mergers. A survey of massive galaxies across various environments could constrain the dependence of prolate-like rotation on the environment, as well as offer new possible scenarios for their formation.
Numerical simulations suggest that prolate-like rotation may be the outcome of binary mergers for specific orbital configurations (e.g. Łokas et al. 2014). For example, major (1:1) dissipation-less mergers in the study by Naab & Burkert (2003) exhibit rotation around the minor axis. Furthermore, the orbital structure and the shapes of remnants of major collisionless mergers indicate significant triaxiality and a dominance of orbits that support triaxial or prolate shapes (Jesseit et al. 2005, 2009; Röttgers et al. 2014). Numerical simulations of binary (disk) mergers often end up with mildly elongated, low-angular-momentum remnants with triaxial shapes and prolate-like rotation (Hernquist 1992; Naab & Burkert 2003; Cox et al. 2006; Hoffman et al. 2010; Bois et al. 2011; Moody et al. 2014). More specifically, Tsatsi et al. (2017) emphasised that a polar merger of gas-free disc galaxies can lead to a prolate-like remnant. Ebrová & Łokas (2015), looking at a broader set of merging configurations, found that radial orbits are more likely to produce prolate-like rotation, other orbital configurations (specific combinations of orbital and disk angular momentum) not being excluded.
Similar results are recovered in numerical simulations set within a cosmological context. Cosmological zoom-in simulations produce galaxies with prolate-like rotation. The Illustris (Vogelsberger et al. 2014), EAGLE (Schaye et al. 2015) and cosmo-OWLS (Le Brun et al. 2014) simulations find an increasing fraction of (close to) prolate shapes among the most massive galaxies (Velliscig et al. 2015; Li et al. 2016, for EAGLE+cosmo-OWLS and Illustris, respectively). Major mergers seem to be ubiquitous among galaxies with prolate-like rotation (Ebrová & Łokas 2017). Specifically, a late (almost dry) major merger seems to be crucial to decrease the overall angular momentum and imprint the prolate-like rotation. A recent study by Li et al. (2018) on the origin of prolate galaxies in the Illustris simulation shows that they are formed by late (z < 1) major dissipation-less mergers: galaxies might have a number of minor or intermediate-mass mergers, but the last, late major merger is the main trigger for the prolate shape. Similarly to the findings from idealised binary mergers, most mergers leading to prolate-like systems have radially biased orbital configurations. Lower-mass remnants may allow a broader set of possible orbital parameters, mass ratios and gas content among the (higher angular momentum) progenitors leading to prolate-like rotation (Ebrová & Łokas 2017).
Prolate-like rotation does not strictly imply that the galaxy has a prolate mass distribution (or potential). This is nicely illustrated with idealised Stäckel potentials, where prolate systems allow only inner and outer long-axis tube orbits (de Zeeuw 1985). Hence prolate galaxies can have velocity maps that either show prolate-like rotation or no rotation. This is indeed found for the Illustris prolate-like galaxies: about 51% of actually prolate galaxies (using a three-dimensional account of the mass distribution) show prolate-like rotation, while the others have no net rotation (Li et al. 2018), presumably because they contain both prograde and retrograde long-axis tube orbits. Nevertheless, galaxies with prolate-like rotation cannot be oblate spheroids.
Velocity maps of the M3G sample objects with prolate-like rotation show spatial variations, sometimes changing at larger radii to rotation around the major axis. This suggests more complex shapes, supporting various types of orbital families (de Zeeuw & Franx 1991). A classical example of such galaxies is NGC 4365 (Bender 1988), which has a large KDC and outer prolate-like rotation. Its orbital distribution is complex, with both short- and long-axis tubes responsible for the formation of the observed (luminosity-weighted) kinematics (van den Bosch et al. 2008). This is also a characteristic of high-mass merger remnants, which often contain a large fraction of box orbits and short- and long-axis tubes, varying relative to each other with radius (e.g. Röttgers et al. 2014). With such caveats in mind, it is worth assuming for a moment that M3G galaxies with prolate-like rotation are actually significantly triaxial and close to being prolate. Prolate galaxies in the Illustris simulation are found only at masses larger than 3 × 10^11 Msun, and above 10^12 Msun 62 per cent of galaxies are prolate or triaxial, 43 per cent being prolate (Li et al. 2018). This is coincidentally close to our observed fraction of prolate-like systems (44%) within the M3G sample. The similarity between these fractions should be taken with caution, as we stress that the M3G sample is neither complete nor representative, and the number of actually prolate galaxies is certainly lower than the number of galaxies with prolate-like rotation. Notwithstanding the actual frequency and shape of galaxies with prolate-like rotation, they cluster in a special region of the mass-size diagram, as Fig. 4 shows. The M3G sample lies on an extension of the arm-like protuberance arising from the cloud of galaxies at high masses and large sizes. The M3G data extend this arm by almost an order of magnitude in mass and a factor of 5 in size. At masses below 6 × 10^11 Msun, covered by previous surveys, the galaxies found on this extension were typically old and metal-rich slow rotators characterised by a deficit of light (cores) in their nuclear surface-brightness profiles (Emsellem et al. 2011; Cappellari et al. 2013a; Krajnović et al. 2013; McDermid et al. 2015).

Figure 4. The distribution of the M3G sample on the mass-size plane. The M3G sample is shown with symbols that have black edges and dominate the high-mass end. For reference we also show galaxies from the ATLAS3D sample with coloured symbols. The shape and the colour of the symbol are related to the kinematic type, as indicated in the legend. The classification is taken from Krajnović et al. (2011) with the following meanings: RR - regular rotation, NRR - non-regular rotation, and PRO - prolate-like rotation (nominally the latter are part of the NRR group, but we highlight them here). Diagonal dashed lines are lines of constant velocity dispersion calculated using the virial mass estimator. The green shaded region shows the expected region where galaxies growing through dissipation-less mergers should lie, assuming major 1:1 mergers (dot-dashed red line) and multiple minor mergers (dotted blue line). The orange hatched region encompasses the mass-size evolution of major merger remnants depending on the merger orbital parameters, as explained in Section 4.
Specifically, their kinematic properties and core-like light profiles were used as an indication that the formation of these galaxies differed from that of the other galaxies populating the mass-size plane, which are characterised as star-forming disks or as bulge-dominated, oblate, fast-rotating early-type galaxies (Cappellari et al. 2013a; Cappellari 2016). The most likely formation process of galaxies populating that extension is through dissipation-less mergers of already massive galaxies: these may provide a way to explain their kinematics, low angular momentum content, cores in their light profiles (through binary black hole mergers, e.g. Ebisuzaki et al. 1991; Milosavljević & Merritt 2001) and old stellar populations.
The M3G extension of the arm supports this picture in two additional ways. Firstly, it shows that while these galaxies span a large range in both mass and size, their effective velocity dispersions are not very different, as expected for major dissipation-less mergers (e.g. Hopkins et al. 2009; Bezanson et al. 2009; Naab et al. 2009). Following the argument outlined in Naab et al. (2009), if a massive galaxy grows via equal-mass mergers (of progenitors with similar sizes and/or velocity dispersions), both the mass and the size of the remnant will increase by a factor of 2, while it will follow a line of constant velocity dispersion in Fig. 4. We illustrate this path with a red dot-dashed line, along which the products of consecutive equal-mass mergers would fall, starting for example with a system of M = 6 × 10^11 Msun and R_e = 7 kpc, representative of the most massive galaxies in the local Universe. The same increase in mass achieved through multiple minor mergers (with progenitors of smaller mass, size and velocity dispersion) would lead to a size increase by a factor of 4, while the velocity dispersions would typically be reduced by a factor of 2. This corresponds to the blue dotted line in Fig. 4, starting from the same main-galaxy progenitor (see also fig. 2 in Bezanson et al. 2009).
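The scalings quoted above follow from the simple energy-conservation argument of Naab et al. (2009); a minimal sketch (the parametrisation is ours: accrete a total mass η M whose stars have mean squared speed ε σ²):

```python
def merger_remnant(mass, r_e, sigma2, eta, eps):
    """Virial scaling for dissipation-less mergers (Naab et al. 2009):
    <v_f^2> = (1 + eta*eps)/(1 + eta) * sigma2,
    r_f     = (1 + eta)^2/(1 + eta*eps) * r_e."""
    return ((1 + eta) * mass,
            r_e * (1 + eta) ** 2 / (1 + eta * eps),
            sigma2 * (1 + eta * eps) / (1 + eta))

m0, r0, s2 = 6e11, 7.0, 300.0**2   # Msun, kpc, (km/s)^2; illustrative progenitor
print(merger_remnant(m0, r0, s2, eta=1.0, eps=1.0))  # 1:1 merger: M, R double, sigma fixed
print(merger_remnant(m0, r0, s2, eta=1.0, eps=0.0))  # many tiny mergers: R x4, sigma^2 halved
```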
Equal-mass merger simulations show that the relation between the mass and size of galaxies also depends on the merger parameters, such as the pericentric distance and the type and angular momentum of the orbit (Boylan-Kolchin et al. 2006). That study showed that, depending on the merger orbit, the mass-size relation follows R_e ∝ M_*^α, where α = 0.7-1.3. We add this range of possibilities to Fig. 4 as a hatched region, indicating the possible location of massive galaxies after major mergers and fully encompassing the M3G sample galaxies. A caveat in this simple argument is that some of today's massive galaxies will have started merging as more compact objects in the early Universe, as is evident from the evolution of the mass-size relation with redshift (van der Wel et al. 2014) and implied by the compact sizes of high-redshift quiescent galaxies and their subsequent evolution (e.g. van Dokkum et al. 2008, 2010). Inevitably, the merger history of massive galaxies will be a combination of multiple minor mergers and a small number of major (or even equal-mass) mergers (e.g. De Lucia & Blaizot 2007; Johansson et al. 2012; Naab et al. 2014). The evidence for such a combination is visible in the differences between the central region (about 1 R_e) and the outskirts, as they often do not share the same kinematics or stellar populations; this will be the topic of future papers. The tightness of the region of the mass-size diagram within which the M3G galaxies lie suggests that the growth of the most massive galaxies (> 10^12 Msun) and, in particular, of BCGs is dominated by major mergers. This would be consistent with the findings of Li et al. (2018), which also link such massive mergers with prolate-like rotation. Given that more than half of the BCGs in our sample exhibit prolate-like rotation, we speculate that most of these indeed experienced a late major (dry) merger between two massive (possibly both central) galaxies. A radial bias in the orbital configurations of such mergers, leading to an increased fraction of prolate-like rotators, may naturally emerge from the pre-set phase-space distribution of massive galaxies, also relative to the large-scale structures (West et al. 1995; Niederste-Ostholt et al. 2010; West et al. 2017).
CONCLUSIONS
In this work, we report that a large fraction of galaxies more massive than 10^12 Msun show prolate-like rotation. This is shown by the analysis of MUSE data for a magnitude-limited sample of massive galaxies in the Shapley Super Cluster and a matching (in luminosity) sample of BCGs. The M3G sample consists of 25 galaxies, of which 14 are BCGs (3 of them in the SSC) and 11 are satellites in the SSC. We present their stellar velocity maps and measure their kinematic misalignment angles, showing that 44 per cent of the galaxies in the M3G sample rotate mainly around their major axes. Selecting only BCGs, the fraction increases to 50 per cent, while in the magnitude-limited subsample of satellites, prolate-like rotation is detected in 35 per cent of galaxies.
The prolate-like rotation is suggestive of a triaxial or close-to-prolate intrinsic shape. For most of our galaxies the rotation amplitudes are low, but the velocity maps typically show net streaming. These kinematics indicate a violent assembly history, with at least one major dissipation-less merger. The M3G data support a scenario where the final growth of the most massive galaxies is dominated by late dissipation-less merging of similar-mass systems. This could be associated with the prevalence of prolate-like rotation in the most massive BCGs and is consistent with the location of these systems within a mass-size diagram, which we extend by almost an order of magnitude in mass and a factor of 5 in size.
The current sample suggests that there is a rather narrow path for climbing the last rung of the galaxy mass ladder, which would be characteristic of dense cluster environments. Answering whether or not such very massive systems require the merging of already central systems would require more extended studies and a closer look at relevant simulations. The fact that BCGs seem to show an alignment trend with respect to the larger-scale structures may be an interesting avenue to consider, as it would naturally explain a bias in the orbital configuration for equal-mass, massive and late mergers. Interestingly enough, prolate-like rotation is also found in lower-mass galaxies (e.g. as seen by the ATLAS3D and CALIFA surveys, as well as in some dwarf galaxies). This further suggests that galaxies with prolate-like rotation should be present in low galactic density regions, while their progenitors may be quite different (i.e. gas-rich).
Melting of an Ising Quadrant
We consider an Ising ferromagnet endowed with zero-temperature spin-flip dynamics and examine the evolution of the Ising quadrant, namely the spin configuration when the minority phase initially occupies a quadrant while the majority phase occupies three remaining quadrants. The two phases are then always separated by a single interface which generically recedes into the minority phase in a self-similar diffusive manner. The area of the invaded region grows (on average) linearly with time and exhibits non-trivial fluctuations. We map the interface separating the two phases onto the one-dimensional symmetric simple exclusion process and utilize this isomorphism to compute basic cumulants of the area. First, we determine the variance via an exact microscopic analysis (the Bethe ansatz). Then we turn to a continuum treatment by recasting the underlying exclusion process into the framework of the macroscopic fluctuation theory. This provides a systematic way of analyzing the statistics of the invaded area and allows us to determine the asymptotic behaviors of the first four cumulants of the area.
Introduction
The studies of fluctuations of growing interfaces have held center stage during the past few decades of the development of non-equilibrium statistical mechanics. Growing interfaces appear in numerous physical processes, such as crystal growth, motion of grain boundaries under an external field, sedimentation, and the spread of bacterial colonies, and they are a subject of increasing importance from both the theoretical and experimental points of view [1,2,3,4]. Although the microscopic dynamics of the growth process can be very different, macroscopic fluctuations display a lot of universality. This is manifested in continuum descriptions of the fluctuating interfaces in terms of stochastic partial differential equations, such as the Edwards-Wilkinson (EW) and Kardar-Parisi-Zhang (KPZ) equations.
In the last decades, several spectacular advances were made on both the theoretical and experimental sides [5,6], particularly due to an astounding connection between the (1+1)-dimensional KPZ equation and random matrix theory [7,8,9,10,11,12]. This led to an exact solution of (1+1)-dimensional KPZ stochastic growth [13,14,15,16,17,18,19]. The universal scaling and the connection to distributions in random matrix theory have been confirmed in a series of beautiful experiments on kinetic roughening [20,21].
Thanks to these recent advances, the local fluctuations of a growing interface are now well understood in (1+1) dimensions. Integral properties are much less explored, however. For instance, one would like to determine the statistics of the area bounded by a growing interface. In experiments, integral characteristics are often more important, and sometimes easier to measure, while from the purely theoretical point of view one might expect that integral characteristics exhibit Gaussian statistics even when the local characteristics (like the height of the interface) are non-Gaussian. Even in the situations where the latter is true, one would still like to determine at least the first two moments (the average and the variance) analytically.
Our purpose here is to analyze the statistics of the simplest integral characteristic of a growing interface: the area under the interface. To this end, we consider an interface growing inside a corner; this geometry has played a prototypical role in previous theoretical works as it allows one to represent the interface as an exclusion process on a line [8]. Equivalent interpretations of the model are crystal growth inside a corner, melting of a corner [22], and the shape of a Young diagram [23].
More precisely, we analyze the Ising ferromagnet on a square grid endowed with zero-temperature spin-flip dynamics, assuming that initially the minority phase occupies the first quadrant and the majority phase covers the remaining space (see Figure 1). The interface separating the phases (initially the surface of the corner) takes a staircase shape and the area A_T of the invaded region grows linearly with time T on average. The dynamics is stochastic and the shape of the interface varies from one realization to another. For instance, the interface may even return to its initial shape (the infinite corner), although the probability of this event quickly decreases with time, viz. it is a stretched exponential in the large time limit [see Eq. (18)].
At large times the fluctuations of the interface relative to its size become small and a limiting shape emerges. Limiting shapes of domain boundaries under coarsening dynamics are mostly understood in the framework of phenomenological macroscopic descriptions, like the Allen-Cahn equation or the Cahn-Hilliard equation [4,22]. The limiting shapes predicted by these macroscopic descriptions [24] differ from the limiting shapes arising in the realm of microscopic descriptions [25,26,27].
In the present work we focus on two observables: the height of the interface along the diagonal, d_T, which involves local fluctuations, and the area A_T, which characterizes global aspects of the fluctuations. Through the mapping onto the one-dimensional symmetric simple exclusion process (SSEP), the height d_T corresponds to the integrated current across one bond and A_T corresponds to the total displacement of all the particles. We extract the complete statistics of d_T from the work of Derrida and Gerschenfeld [28]. The statistical properties of the area A_T are hard to compute because spatial correlations within the entire height profile are required.

Our main result is the calculation of the first few cumulants of A_T. An exact analysis shows that the cumulants exhibit the long-time asymptotic behavior

⟨A_T^k⟩_c ≃ C_k T^{(k+1)/2}.   (1)

The computation of the amplitudes C_k becomes involved already for the second cumulant, the variance. We determined the variance of A_T using an exact microscopic analysis, namely the Bethe ansatz. To derive the finer statistics of A_T we employed a hydrodynamic approach known as the macroscopic fluctuation theory (MFT), which is a powerful general framework for analyzing large deviations in lattice gases [29,30,31,32,33]. Using the MFT we additionally calculated the third and fourth cumulants of the invaded area. The expressions for the first four cumulants, starting with the exact ⟨A_T⟩ = T of Eq. (2), are given in Eqs. (2)-(5) and derived in the body of the paper.

We shall present our analysis in the following order. In section 2, we define the dynamics in detail and discuss the mapping to the SSEP which will be used to analyze the interface fluctuations. In section 3 we employ the microscopic analysis which allows us to determine the limiting shape of the interface. Using this analysis we derive exact expressions for the average and variance of the area. In section 4 we present the formulation of the problem in the framework of the macroscopic fluctuation theory. This allows us to calculate the cumulants of A_T up to the fourth order. In section 5, we conclude with a brief summary and discuss a few open problems and extensions. Some details of the analysis are relegated to the Appendices. In particular, in Appendix D we outline an alternate derivation of the variance ⟨A_T²⟩_c using fluctuating hydrodynamics, and in Appendix E we include the analysis of a new observable H_T, the "half-area" of the invaded region, which can be defined for more general initial conditions.
The model and its relation to the exclusion process
We consider an Ising ferromagnet with nearest-neighbor interactions on an infinite square lattice at zero temperature. In the initial configuration all spins in the first quadrant are down whereas the rest of the spins are up. There is an interface separating the two oppositely magnetized domains (see Figure 1). In the starting configuration, the interface runs along the positive coordinate axes, as indicated in the figure. There are two standard spin-flip dynamics for the kinetic Ising model: the Glauber and the Metropolis algorithms. At zero temperature, the difference between these two algorithms is small. More precisely, energy-raising flips are forbidden for both algorithms; other flips occur with the same rate according to the Metropolis algorithm, while according to the Glauber algorithm the energy-lowering flips proceed twice as fast as the energy-conserving flips. Starting with our initial configuration, energy-lowering flips never occur and hence the Glauber and Metropolis algorithms are identical in our setting. In the following, we set the rate of allowed (energy-conserving) flips to unity.
For the corner initial condition, the plus phase can invade the minus phase, but not the opposite: the three quadrants which are initially occupied by the plus phase cannot be invaded. The interface separating the plus and minus phases has a staircase shape (see Figure 1) and it varies from realization to realization. At any moment there is a finite number of "flippable" spins: the total number N_− of flippable minus spins always exceeds by one the total number N_+ of flippable plus spins. The area A_T (which is nothing but the total number of plus spins in the first quadrant, see Figure 1) at time T is a random variable; A_T increases by 1 with rate N_− and decreases by 1 with rate N_+. Since N_− = 1 + N_+, we have ⟨A_T⟩ = T at all times. (In contrast, Eqs. (3)-(4) are valid asymptotically in the T → ∞ limit.) In this paper we are interested in the fluctuations of the invaded area. The investigation of the interface dynamics is greatly simplified by the representation in terms of the SSEP. This representation is well known (see [34,35,36,37]), so we shall describe it only briefly.

Figure 2. Mapping between the SSEP on a line and an interface. The solid discs on the x axis denote particles and the line denotes the interface corresponding to the particle configuration. (a) The initial configuration where the particles fill the negative half chain. The corresponding interface is a right-angled wedge. (b) A configuration at a time t where some particles have spilled over to the positive side.
Before proceeding with the mapping, we set the notation for time: we consider the evolution within the time window [0, T] and denote an intermediate time by t. We also recall that the SSEP is a lattice gas where each site is occupied by at most one particle. In one dimension, each particle hops stochastically with equal unit rate to the neighboring sites on the right and left. Each hopping attempt is successful if the destination site is empty. The state of a site x at time t is denoted by a Boolean variable n_x(t), which takes the value 0 or 1 depending on whether the site is empty or occupied. This system of interacting particles has been studied extensively [35,36,38,39,40].
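As a concrete illustration of these rules, here is a minimal continuous-time Monte Carlo sketch of the SSEP with the step initial condition used below; the finite lattice cut-off L is an assumption of the sketch (the paper works on the infinite line), harmless as long as L is much larger than √T:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ssep(L=100, T=25.0):
    """Continuous-time SSEP on the finite lattice x = -L..L with the step
    initial condition n_x(0) = 1 for x <= 0. Each particle carries a
    unit-rate clock for each direction; a hop succeeds only if the target
    site is empty (the exclusion rule)."""
    x = np.arange(-L, L + 1)
    n = (x <= 0).astype(int)
    t, total_rate = 0.0, 2.0 * n.sum()     # particle number is conserved
    while True:
        t += rng.exponential(1.0 / total_rate)
        if t > T:
            return x, n
        i = rng.choice(np.flatnonzero(n))  # pick a random particle...
        j = i + rng.choice((-1, 1))        # ...and a random direction
        if 0 <= j <= 2 * L and n[j] == 0:
            n[i], n[j] = 0, 1
```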
To make the connection to a fluctuating interface we define height variables h_x(t) which are related to the occupation variables by

h_x(t) − h_{x−1}(t) = 1 − 2 n_x(t).   (6)

The variable h_x(t) represents the height of the interface at position x: pictorially this means that, if the site x is occupied (or empty), then the interface between (x−1) and x is a straight line going along the co-diagonal (or diagonal) direction. A schematic of this mapping is shown in Figure 2. In the event of a particle hopping between two sites, the height at the associated site changes by 2. In any configuration, the heights at two neighboring sites differ by at most 1. Note that there is a unique interface associated with each particle configuration of the exclusion process.
The initial shape of the interface, the corner, corresponds to a step profile in the realm of the exclusion process, namely all sites at x ≤ 0 are occupied, whereas the sites to the right of the origin are empty:

n_x(0) = 1 for x ≤ 0,  n_x(0) = 0 for x ≥ 1.   (7)

The height profile associated with this configuration is h_x(0) = |x|. One can verify that, starting with this configuration, the height at x = 0 for any time t ≥ 0 has a simple expression in terms of the occupation variables to the right of the origin:

h_0(t) = 2 Σ_{x≥1} n_x(t).   (8)

In our original problem of the Ising quadrant, the domain boundary is related to the interface {h_x(t)} by a rotation of the coordinates. We use the transformation

u = (h_x − x)/2,  v = (h_x + x)/2.   (9)

The interface {h_x(t)} is defined on the x-y plane, and the Ising quadrant is defined on the u-v plane. This corresponds to a π/4 anti-clockwise rotation and an overall contraction of the metric by a factor √2. A schematic of this transformation is illustrated in Figure 3. The contraction in the transformation ensures that each square cell in the Ising model on the u-v plane has unit area.
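In code, the reconstructed relations (6) and (9) amount to a cumulative sum and a linear change of coordinates; a short sketch follows (the left-edge pinning h = |x| is an assumption, valid while the melted region stays far from the lattice boundary):

```python
import numpy as np

def heights_from_occupations(x, n):
    """Integrate Eq. (6), h_x - h_{x-1} = 1 - 2 n_x, pinning the left edge
    at h = |x|, its frozen value deep inside the filled region."""
    h = np.empty_like(x)
    h[0] = abs(x[0])
    h[1:] = h[0] + np.cumsum(1 - 2 * n[1:])
    return h

def to_quadrant_frame(x, h):
    """Rotation-plus-contraction of Eq. (9): map the interface from the
    (x, y = h_x) plane to the (u, v) plane of the Ising quadrant."""
    return (h - x) / 2.0, (h + x) / 2.0
```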
The fluctuations of the domain boundary can be characterized by various quantities such as the distance d_T of the domain boundary from the origin along the diagonal and the change in the area A_T of the invaded region at time T (see Figure 1). Using the transformation of coordinates in Figure 3 it is clear that d_T = h_0(T)/√2, which using Eq. (8) yields d_T = √2 Σ_{x≥1} n_x(T). In the exclusion process this sum corresponds to the total current that has passed through the site at the origin up to time T. We denote this current by Q_T. Then the diagonal height is given by

d_T = √2 Q_T.   (10)

To define the area A_T in the framework of the exclusion process we note that

A_T = Σ_j [x_j(T) − x_j(0)],   (11)

i.e., the total displacement of all the particles. Indeed, for the initial condition (7), the displacement of the first (right-most) particle is equal to the area of the lowest row of invaded sites on the u-v plane, the displacement of the 2nd particle gives the area of the next row, and so on (see Figure 1 and Figure 2). This proves that the sum on the right-hand side of Eq. (11) is really the molten area in the Ising quadrant. It is more convenient, however, to express the area in terms of the occupation variables n_x(t). The corresponding expression reads

A_T = Σ_x x [n_x(T) − n_x(0)].   (12)

Indeed, noting that A_0 = 0 and that any particle hopping to the right (left) leads to an increase (decrease) of both A_T and the sum in Eq. (12) by one, the formula is established.
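Reusing simulate_ssep from the sketch above, the observables just defined can be sampled and checked against ⟨A_T⟩ = T (established above) and ⟨d_T⟩ = √(2T/π) (derived in the next section):

```python
import numpy as np

def observables(x, n):
    """Q_T (particles to the right of the origin, equal to the integrated
    current for the step initial state), d_T = sqrt(2) Q_T of Eq. (10),
    and A_T = sum_x x [n_x(T) - n_x(0)] of Eq. (12)."""
    n0 = (x <= 0).astype(int)
    Q = int(n[x >= 1].sum())
    return Q, np.sqrt(2.0) * Q, int(np.sum(x * (n - n0)))

T = 25.0
runs = [observables(*simulate_ssep(L=100, T=T)) for _ in range(200)]
A = np.array([r[2] for r in runs]); d = np.array([r[1] for r in runs])
print(A.mean(), "vs", T)                          # <A_T> = T
print(d.mean(), "vs", np.sqrt(2 * T / np.pi))     # <d_T> = sqrt(2T/pi)
```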
Limiting shape and fluctuations
The interface generically grows and fluctuates. The relative amplitude of the fluctuations compared to the mean profile of the interface decreases with time, so the rescaled interface approaches a limiting shape. To determine the limiting shape we employ a hydrodynamic continuum description. For the SSEP, the continuum description is the diffusion equation [36],

∂_t ρ = ∂_{xx} ρ,   (13)

describing the evolution of the particle density ρ(x,t). The initial configuration corresponds to a step-like density profile, ρ(x,0) = Θ(−x), where Θ(x) is the Heaviside step function. Solving the diffusion equation with this initial condition, we obtain a solution in terms of the complementary error function,

ρ(x,t) = (1/2) Erfc(x/√(4t)).   (14)

Using Eq. (6) in the continuum limit and the transformation (9), the mean profile of the interface at time T can be expressed [26] as a curve (15) on the (u,v) plane. This interface intersects the diagonal line at u = v = √(T/π) and therefore the average value of the distance d_T along the diagonal is

⟨d_T⟩ = √(2T/π),   (16)

where the angular brackets denote the ensemble average. The average value of the area A_T can be deduced from this limiting shape of the interface: the area under the curve (15) at time T is ⟨A_T⟩ = T (17).

The calculation of the statistics of d_T and A_T requires an understanding of the fluctuations of the interface around its limiting shape. By Eq. (10), the diagonal height d_T is essentially the integrated current Q_T through the origin. The statistics of the integrated current has been extensively investigated, see e.g. Refs. [41,42,43,44,45]. For the step initial condition (7), Derrida and Gerschenfeld [28] computed the statistics of Q_T using the Bethe ansatz. Their result leads to the cumulant generating function

χ_T(λ) ≡ ln⟨e^{λ Q_T}⟩ = √(T/π) Σ_{n≥1} (−1)^{n+1} ω^n / n^{3/2},  ω = e^λ − 1.

The series expansion of χ_T(λ) in powers of λ generates all the cumulants of d_T, which all scale as √T. The average value (16) can be retrieved as well.

Compared to d_T, little is known about the statistics of the area A_T. In particular, A_T contains information about the spatial height-height correlations of the interface. In terms of the exclusion process, A_T corresponds to the total displacement of all particles. It is simple to verify that A_T is also the sum of the total currents through all the sites of the lattice. The quantities A_T and d_T are not directly related: for instance, for a fixed d_T there is a lower bound on the area, A_T ≥ d_T, but in principle the area can be arbitrarily large. The only exception is the case d_T = 0, when A_T = 0. This leads to the relation P[A_T = 0] = P[d_T = 0]. The latter probability can be extracted from [28] to give

P[A_T = 0] ≃ exp[−ζ(3/2) √(T/π)],   (18)

where ζ(s) = Σ_{n≥1} n^{−s} is the zeta function.

The cumulants of A_T are by definition the coefficients of the powers of λ in the series expansion of the cumulant generating function

μ_T(λ) = ln⟨exp(λ A_T)⟩.   (19)

In other words, we have

μ_T(λ) = Σ_{k≥1} (λ^k / k!) ⟨A_T^k⟩_c,   (20)

where, by definition, ⟨A_T^k⟩_c denotes the kth cumulant.
Using Eq. (12), all the cumulants of A_T can be expressed in terms of the equal-time correlators of the occupation variables, as

⟨A_T^k⟩_c = Σ_{x_1,…,x_k} x_1 ⋯ x_k ⟨n_{x_1}(T) ⋯ n_{x_k}(T)⟩_c   (21)

for all k ≥ 2. The k-point correlators in the SSEP can be computed using the Bethe ansatz. The resulting exact expressions are difficult to analyze, yet the asymptotic behaviors can be extracted using the scaling property [28]

⟨n_{x_1}(T) ⋯ n_{x_k}(T)⟩_c ≃ T^{(1−k)/2} G_k(x_1/√T, …, x_k/√T).   (22)

Here G_k(z_1, …, z_k) is a scaling function. Combining (21) and (22) yields the general asymptotic time dependence (1), but the determination of the amplitudes C_k in Eq. (1) requires a real computation. The cumulant generating function which is compatible with (1) must scale as

μ_T(λ) = √T g(λ√T).   (23)

The scaling function g(x) does not depend on T. Equation (23) implies the following large-deviation form,

P(A_T = aT) ∼ exp[−√T φ(a)].   (24)

The large deviation function φ(a) is the Legendre transform of μ_T(λ) [46]. It might be possible to determine g(x) or φ(a), but it is a challenging task that has not yet been accomplished. Some properties of φ(a) can be appreciated without an explicit solution. Near the mean value a = 1, the function φ(a) is quadratic, φ(a) ∼ (a−1)². At large values of a, the distribution of A_T has a non-Gaussian tail, more precisely φ(a) ∼ a^{3/2}. A similar non-Gaussian tail was also found in the distribution of the current in the SSEP, see [28,33].

To understand the φ(a) ∼ a^{3/2} asymptotic behavior, one can use a heuristic argument [22] which is easier to appreciate using a discrete-time version of the SSEP. In this discrete-time version, particles hop simultaneously at times t = 1, 2, 3, ⋯ and with equal probabilities 1/2 to the left and right (whenever the hopping is allowed by the exclusion). The quickest growth of the total area occurs when all the eligible particles always hop to the right. This maximal area is readily computed using the representation in Eq. (11) to yield A_T^max = 1 + 2 + ⋯ + T = T(T+1)/2. The probability of this event is 2^{−T²/2}, leading to −ln P[A_T = A_T^max] ≃ (ln 2 / 2) T². Comparing this with the large-deviation form (24) we see the manifestation of the non-Gaussian tail of the distribution.

We now use Eq. (21) to get ⟨A_T⟩ and to derive an exact expression for the variance ⟨A_T²⟩_c at large time T. It has been found in [28] that at large time T,

⟨n_x(T)⟩ ≃ (1/2) Erfc(x/√(4T)),   (25)

while the connected two-point correlator takes the scaling form of Eq. (26). Plugging Eq. (25) into ⟨A_T⟩ = Σ_x x [⟨n_x(T)⟩ − n_x(0)] we arrive at ⟨A_T⟩ = T, which agrees with the exact result (2). Similarly, the variance of A_T can be written as

⟨A_T²⟩_c = Σ_{x,y} x y ⟨n_x(T) n_y(T)⟩_c.   (27)

Combining this result with Eq. (25) and Eq. (26) we obtain the double integral of Eq. (28). Evaluating these integrals (with the help of Mathematica) we arrive at the announced long-time asymptotic (3).
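The leading T^{3/2} growth of the variance can also be probed directly in simulation, reusing the sketches of section 2; the amplitude near 1.06 quoted in the comment anticipates the MFT evaluation of section 4 and is not an input of the simulation:

```python
import numpy as np

for T in (10.0, 40.0):
    A = np.array([observables(*simulate_ssep(L=120, T=T))[2]
                  for _ in range(400)])
    # the ratio should be roughly T-independent and close to ~1.06
    print(T, A.var(ddof=1) / T ** 1.5)
```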
Computing higher cumulants by this technique gets very cumbersome. The scaling functions for the three- or higher-point correlators of n_x(T) are not known. In the next section, we use a macroscopic approach to characterize the large fluctuations of the interface at the hydrodynamic scale. This will also enable us to calculate the higher-order cumulants of A_T.
The macroscopic fluctuation theory approach
In this section we use the macroscopic fluctuation theory (MFT) developed by Bertini, De Sole, Gabrielli, Jona-Lasinio and Landim [29,47,48]. This theory provides a general, thermodynamic-like approach for computing fluctuations and large deviation functions of driven diffusive models (see [29,30,31,32,33] and references therein).
We briefly review the MFT formulation and explain how the MFT can be used to calculate cumulants of the area A T under the fluctuating interface. Then we shall turn to the perturbative analysis.
Application of the MFT to the melting problem
At a macroscopic scale, the time evolution of the particle density ρ(x,t) in the SSEP is described by a Langevin equation [36,38,49,29],

∂_t ρ = ∂_x [∂_x ρ + √(σ(ρ)) η(x,t)].   (29)

Here η(x,t) is a Gaussian noise with mean zero and covariance

⟨η(x,t) η(x′,t′)⟩ = δ(x − x′) δ(t − t′),   (30)

and σ(ρ) is the mobility, which in the case of the SSEP is [28,32]

σ(ρ) = 2ρ(1 − ρ).   (31)

The MFT framework [47,50,38,29] allows one to assign a probability weight to each history of the density field evolving according to the Langevin equation (29). Furthermore, the MFT provides a scheme for characterizing the statistics of any quantity (observable) which is fully determined in terms of the fluctuating density field ρ(x,t). In our case, the area A_T can be expressed in terms of ρ(x,t) by rewriting Eq. (12) in the continuum limit:

A_T[ρ] = ∫ dx x [ρ(x,T) − ρ(x,0)].   (32)

Hence, A_T[ρ] is a functional of the initial and the final density profiles; it does not depend on the profile at intermediate times. Writing the distribution of the final profile ρ(x,T) as a path integral over the density field ρ(x,t) and a conjugate field ρ̂(x,t), we can express (see Appendix A for details) the generating function of A_T as

⟨e^{λ A_T}⟩ = ∫ D[ρ, ρ̂] exp(λ A_T[ρ] − S[ρ, ρ̂]),   (33)

with action

S[ρ, ρ̂] = ∫_0^T dt ∫ dx [ρ̂ ∂_t ρ − H(ρ, ρ̂)].   (34)

Here H(ρ, ρ̂) represents the Hamiltonian density,

H(ρ, ρ̂) = (σ(ρ)/2) (∂_x ρ̂)² − ∂_x ρ ∂_x ρ̂.   (35)

At large T, the path integral is dominated by the contribution from the path (ρ, ρ̂) that minimizes the action. Let us denote this optimal path by (q, p). The associated Euler-Lagrange equations are

∂_t q = ∂_x [∂_x q − σ(q) ∂_x p],   (36)
∂_t p = −∂_{xx} p − (σ′(q)/2) (∂_x p)².   (37)

The boundary conditions come from the least-action condition and from taking into account that the initial density is a step profile. This yields

q(x,0) = Θ(−x),  p(x,T) = λ x,   (38)

where Θ(x) is the Heaviside step function. The saddle-point approximation in Eq. (33) shows that the cumulant generating function μ_T(λ) = ln⟨exp(λA_T)⟩ is given by the extremal value of λ A_T[ρ] − S[ρ, ρ̂]. Using Eq. (32) and the optimal equations (36)-(37), we obtain

μ_T(λ) = λ ∫ dx x [q(x,T) − q(x,0)] − ∫_0^T dt ∫ dx [p ∂_t q − H(q,p)].   (39)

Hence, the problem of computing the cumulant generating function and the associated large deviation function is equivalent to solving a pair of coupled partial differential equations (36)-(37) for the two conjugate fields (q,p). The same equations (36)-(37) appear in the analysis of the integrated current [45,51], in the calculation of the large deviation function of the density profile in the SSEP [50], the survival probability of a static target in a lattice gas [52], etc. The MFT equations have also led to the determination of the statistics of a tagged particle in single-file diffusion [53]. In all these cases only the boundary conditions are different. Note that scaling properties can be extracted from Eq. (39) without the need of an explicit solution. For example, one can check that μ(λ) is consistent with the scaling in Eq. (23), confirming that the kth cumulant of A_T scales as T^{(k+1)/2}, as stated in Eq. (1).
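Equations (36)-(38) form a two-point boundary-value problem in time (q is fixed at t = 0, p at t = T), which is commonly attacked by alternating forward/backward sweeps. A heavily hedged numerical sketch of such an iteration is given below; the discretization and the under-relaxation parameter `mix` are ad hoc assumptions, not taken from the paper, and convergence is not guaranteed:

```python
import numpy as np

def solve_mft(lam=0.1, L=15.0, T=1.0, nx=151, nt=2000, sweeps=30, mix=0.2):
    """Fixed-point iteration for the saddle equations (36)-(37) with
    q(x,0) = Theta(-x), p(x,T) = lam*x: forward sweep for q, backward
    sweep for p, explicit finite differences (dt < dx^2/2 for stability)."""
    x = np.linspace(-L, L, nx)
    dx, dt = x[1] - x[0], T / nt
    sigma = lambda q: 2.0 * q * (1.0 - q)          # SSEP mobility, Eq. (31)
    dsigma = lambda q: 2.0 - 4.0 * q
    grad = lambda f: np.gradient(f, dx)
    q = np.tile(np.where(x < 0, 1.0, 0.0), (nt + 1, 1))
    p = np.tile(lam * x, (nt + 1, 1))
    for _ in range(sweeps):
        for k in range(nt):                        # Eq. (36), forward in time
            flux = grad(q[k]) - sigma(q[k]) * grad(p[k])
            q[k + 1] = q[k] + dt * grad(flux)
            q[k + 1, 0], q[k + 1, -1] = 1.0, 0.0   # far-field densities
        for k in range(nt, 0, -1):                 # Eq. (37), backward in time
            dpdt = -grad(grad(p[k])) - 0.5 * dsigma(q[k]) * grad(p[k]) ** 2
            p[k - 1] = (1.0 - mix) * p[k - 1] + mix * (p[k] - dt * dpdt)
            p[k - 1, 0], p[k - 1, -1] = lam * x[0], lam * x[-1]
    return x, q, p
```

For small λ the iteration should stay close to q ≈ q_0 and p ≈ λx, consistent with the perturbative analysis of the next subsection.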
Perturbative analysis
Exact time-dependent solutions of the optimal equations (36)-(37) have not been found in general; the tractable settings known so far are those where the optimal solutions are either stationary or traveling waves (see e.g. [32,41,52]). Here, we use a perturbative analysis to solve for the first few orders in the expansion in powers of λ. A similar perturbative analysis was recently applied to the calculation of the variance of the integrated current Q_T in one-dimensional diffusive systems [51].
The series expansion is around λ = 0. Noting that for λ = 0 the optimal density profile q(x,t) is the solution of the diffusion equation, while the conjugate field p(x,t) vanishes, we seek a perturbative expansion in the form

q = q_0 + λ q_1 + λ² q_2 + ⋯,   (40)
p = λ p_1 + λ² p_2 + ⋯.   (41)

The hydrodynamic solution corresponding to the step initial density profile is

q_0(x,t) = (1/2) Erfc(x/√(4t)).   (42)

The fact that the lowest non-vanishing term in the expansion of p(x,t) is of order λ makes it possible to iteratively solve the optimal equations (36)-(37). The corresponding equations at each order have the following general form. The field p_k(x,t), at any order k, obeys a time-reversed diffusion equation with a source,

(∂_t + ∂_{xx}) p_k = Γ_k(x,t),   (43)

where the source term Γ_k(x,t) depends only on fields of order strictly lower than k. Similarly, the field q_k(x,t) satisfies a diffusion equation,

(∂_t − ∂_{xx}) q_k = ∆_k(x,t),   (44)

where the source term ∆_k(x,t) involves p_k(x,t) and fields of order strictly lower than k. We seek q_k and p_k in the time window [0,T]. The boundary conditions for q_k(x,0) and p_k(x,T) are determined from Eq. (38). A formal solution can be written in terms of the diffusion propagator

g(x,t|y,τ) = (4π(t−τ))^{−1/2} exp[−(x−y)²/(4(t−τ))]   (45)

for all τ ≤ t. We obtain

q_k(x,t) = ∫ dy g(x,t|y,0) q_k(y,0) + ∫_0^t dτ ∫ dy g(x,t|y,τ) ∆_k(y,τ),   (46)
p_k(x,t) = ∫ dy g(y,T|x,t) p_k(y,T) − ∫_t^T dτ ∫ dy g(y,τ|x,t) Γ_k(y,τ).   (47)

Our goal is to use this perturbative solution to compute the series expansion of μ(λ) and therefore the cumulants of A_T, see Eq. (20). The kth cumulant ⟨A_T^k⟩_c depends only on the solutions for p(x,t) and q(x,t) up to the (k−1)th order at most; e.g., the average ⟨A_T⟩ is solely determined in terms of q_0(x,t).
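The propagator (45) and the Duhamel form (46) translate directly into a simple quadrature scheme; a sketch, with arbitrary grid sizes chosen only for illustration:

```python
import numpy as np

def g(x, t, y, tau):
    """Diffusion propagator of Eq. (45), defined for tau < t."""
    return (np.exp(-(x - y) ** 2 / (4.0 * (t - tau)))
            / np.sqrt(4.0 * np.pi * (t - tau)))

def duhamel_q(x, t, source, ngrid=400, nsteps=200, L=20.0):
    """Duhamel form of Eq. (46) for a correction q_k with q_k(x,0) = 0:
    q_k(x,t) = int_0^t dtau int dy g(x,t|y,tau) Delta_k(y,tau),
    evaluated on a midpoint grid (midpoints avoid the tau = 0, t
    endpoints of the integrable singularities)."""
    ys = np.linspace(-L, L, ngrid)
    taus = (np.arange(nsteps) + 0.5) * t / nsteps
    dy, dtau = ys[1] - ys[0], t / nsteps
    total = 0.0
    for tau in taus:
        total += np.sum(g(x, t, ys, tau) * source(ys, tau)) * dy * dtau
    return total
```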
Using Eq. (36) we replace ∂_t q by ∂_x[∂_x q − σ(q) ∂_x p] in Eq. (39). Integrating by parts, we recast Eq. (39) into

μ_T(λ) = λ T + ∫_0^T dt ∫ dx [λ σ(q) ∂_x p − (σ(q)/2) (∂_x p)²].   (48)

Because the lowest non-vanishing term in the expansion of p(x,t) is of order λ, the second term is of order λ² or higher. Thus ⟨A_T⟩ = T and, substituting the perturbative expansion (40)-(41) into (48), we deduce the following cumulants:

⟨A_T²⟩_c = ∫_0^T dt ∫ dx σ_0(x,t),   (49)
⟨A_T³⟩_c = 3 ∫_0^T dt ∫ dx σ_1(x,t),   (50)
⟨A_T⁴⟩_c = 12 ∫_0^T dt ∫ dx [σ_2(x,t) − σ_0(x,t) (∂_x p_2(x,t))²].   (51)

Here we used that ∂_x p_1 = 1, which is proved later in Eq. (55), and σ_k ≡ σ_k(x,t) denotes the kth-order term in the expansion of σ[q(x,t)] in powers of λ. From σ(q) = 2q(1−q) one finds

σ_0 = 2q_0(1−q_0),  σ_1 = 2q_1(1−2q_0),  σ_2 = 2q_2(1−2q_0) − 2q_1².   (52)

We now outline the computation of the second, third, and fourth cumulants.
Variance of the Area:
Since σ_0 = 2q_0(1−q_0), see Eq. (52), it is clear that the second cumulant ⟨A_T²⟩_c in Eq. (49) involves only the zeroth-order solution q_0(x,t), which is given by Eq. (42). Using the rescaled variables ξ = x/(2√T) and s = t/T, the double integral in Eq. (49) collapses to a pure number times T^{3/2},

⟨A_T²⟩_c = ∫_0^T dt ∫ dx 2 q_0 (1 − q_0) = (4/3) √(2/π) T^{3/2}.   (53)

Numerically evaluating the expression yields ⟨A_T²⟩_c ≈ 1.06 T^{3/2}, in agreement with the exact asymptotic (3).
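The number on the right-hand side of Eq. (53) follows from a one-dimensional quadrature; the short check below uses only q_0:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# With q0 = erfc(xi)/2, the x-integral of sigma_0 = 2 q0 (1 - q0) at time t
# equals 2*sqrt(t)*(2I); integrating over t in [0, T] gives (8I/3) T^(3/2).
I, _ = quad(lambda xi: erfc(xi) - 0.5 * erfc(xi) ** 2, 0.0, np.inf)
print(8 * I / 3, (4.0 / 3.0) * np.sqrt(2.0 / np.pi))   # both ~ 1.0638
```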
Skewness of the Area (Third cumulant):
The expression for the third cumulant requires knowledge of q_1(x,t) and p_1(x,t). Substituting the perturbative expansion into Eqs. (36)-(37) yields, at first order in λ,

∂_t q_1 = ∂_x[∂_x q_1 − σ_0 ∂_x p_1],  (∂_t + ∂_{xx}) p_1 = 0.

The boundary conditions (38) become

p_1(x,T) = x,  q_1(x,0) = 0.   (54)

Solving (∂_t + ∂_{xx}) p_1 = 0 subject to the first boundary condition in Eq. (54) we get

p_1(x,t) = x,   (55)

so that ∂_x p_1 = 1. Using this result we simplify the governing equation for q_1(x,t) to

∂_t q_1 = ∂_{xx} q_1 − ∂_x σ_0(x,t),   (56)

whose solution reads

q_1(x,t) = 2 ∫_0^t dτ ∫ dy g(x,t|y,τ) Erf(y/(2√τ)) g(y,τ|0,0),   (57)

using ∂_y σ_0(y,τ) = −2 Erf(y/(2√τ)) g(y,τ|0,0).
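With the propagator sketched after Eq. (47), the first-order correction (57) can be evaluated pointwise; `g` and `duhamel_q` are the helpers defined above, and the grid accuracy is only illustrative:

```python
import numpy as np
from scipy.special import erf

def source_q1(y, tau):
    """Source of Eq. (56): -d_y sigma_0 = 2 Erf(y/2 sqrt(tau)) g(y,tau|0,0)."""
    return 2.0 * erf(y / (2.0 * np.sqrt(tau))) * g(y, tau, 0.0, 0.0)

# First-order correction q1(x,t) of Eq. (57) at a sample point:
print(duhamel_q(1.0, 4.0, source_q1))
```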
Using Eq. (57) one can determine the third cumulant (50). The computations are a bit lengthy (see Appendix B), but the final result, Eq. (58), is neat.
The non-vanishing of the skewness indicates that A_T is a non-Gaussian random variable, whereas the Edwards-Wilkinson equation predicts Gaussian behavior (see Appendix D for more details).
Flatness of the Area (Fourth cumulant):
The fourth cumulant requires the solutions for p(x,t) and q(x,t) up to second order in λ. Recalling that ∂_x p_1 = 1 [see Eq. (55)], we write these equations as

∂_t q_2 = ∂_x[∂_x q_2 − σ_0 ∂_x p_2 − σ_1],   (59)
(∂_t + ∂_{xx}) p_2 = −(1 − 2 q_0).   (60)

The boundary conditions read p_2(x,T) = 0 and q_2(x,0) = 0.
Combining the formal solution (47) and the expression (42) for q_0(x,t) we get

p_2(x,t) = ∫_t^T dτ ∫ dy g(y,τ|x,t) Erf(y/(2√τ)).   (62)

Similarly, the solution of Eq. (59) is given by

q_2(x,t) = −∫_0^t dτ ∫ dy g(x,t|y,τ) ∂_y[σ_0(y,τ) ∂_y p_2(y,τ) + σ_1(y,τ)].   (63)

In deriving Eq. (63) we have used the expressions for σ_1 from Eq. (52) and the identity ∂_x g(x,t|y,τ) = −∂_y g(x,t|y,τ). Using p_2(x,t) and q_2(x,t), Eqs. (62)-(63), and the previously derived q_0, q_1, p_1 we can determine the fourth cumulant. The computation of the integrals is quite involved, so the details are deferred to Appendix C. Here we just state the final result: the fourth cumulant has a closed-form expression, Eq. (64); numerically evaluating it yields ⟨A_T⁴⟩_c ≈ 1.497 T^{5/2}.

The perturbative expansion could be pushed forward to calculate higher-order cumulants. The analysis gets more and more cumbersome, and a systematic scheme is required. The fourth cumulant ⟨A_T⁴⟩_c involves six layers of complicated integrals (see Appendix C), so completing the task and establishing (64) was rather unexpected. This suggests that the problem may have some integrable structure that would make it fully solvable. Besides, an intricate recursive structure in the solutions for p_k(x,t) and q_k(x,t) in terms of graphs emerges [54]. This deserves to be explored further; the hope is to find a pattern in the expressions for the cumulants which may help in estimating the entire cumulant generating function. Nevertheless, the results of this section have shown that the MFT is a powerful computational tool for exploring the statistical properties of an observable that can be written as a functional of the solution of a non-linear, fluctuating, hydrodynamic equation.
Discussion
We considered an Ising ferromagnet endowed with zero-temperature spin-flip dynamics. The Ising quadrant melts, and we studied the statistics of the total melted area A T . The total area is a global observable of the melted region that involves the multiple-point correlations of the interface height. We focused on a symmetric dynamics in which deposition and evaporation events occur with the same rates. The local behavior of the height of the interface can be described by the Edwards-Wilkinson growth model, a linear and tractable stochastic equation. However, the statistics of A T requires the knowledge of the spatial fluctuations and correlations of the interface. We calculated the average, the variance, the skewness and the flatness of A T by solving perturbatively the optimal equations of the MFT. We also used exact microscopic calculations based on the Bethe Ansatz to determine the average and the variance and found the same results. The MFT provides a systematic computational scheme that can be carried over to higher orders. Besides, the calculations based on MFT are already simpler at the second order (i.e., for the variance), compared to the Bethe Ansatz.
Our initial goal was to establish a closed expression for the cumulant generating function µ T (λ) of A T . We have only derived the Maclaurin expansion of that function up to the fourth order; by a scaling argument, we also know its leading behavior in the λ → ∞ limit. The expressions (2)-(5) for the cumulants up to the fourth order do not seem to suggest a conjectural form of the higher cumulants. We leave this problem for future investigations. In particular, the presence of a recursive structure in the perturbative analysis of the MFT equations [54] hints at some integrability property that leaves the hope that these equations could be solvable.
The total melted area is the most basic global observable characterizing the melted region. There are other global observables, e.g., the total number of flippable plus spins N_+ and the total number of flippable minus spins N_−. It suffices to consider N_+, as N_− = 1 + N_+. The statistics of N_+(T) has not been probed. The average growth is not difficult to deduce [27], while the variance is unknown, although one anticipates that ⟨N_+²⟩_c = B_2 √T and, more generally, ⟨N_+^k⟩_c = B_k √T. One can also modify the underlying Ising model. For instance, instead of the Ising ferromagnet with nearest-neighbor interactions, one can consider more general ferromagnets, e.g., with next-nearest-neighbor (still ferromagnetic) interactions. The mapping of the interface problem onto a one-dimensional diffusive lattice gas still holds [27], but the corresponding lattice gas has a density-dependent diffusion coefficient and a rather complicated mobility [55]. The limiting shape and hence the average area are known [27], while even the computation of the variance appears very difficult.
Perhaps a more fundamental change is to study the same Ising ferromagnet with nearest-neighbor interactions, but in the presence of a magnetic field favoring the majority phase. The corresponding particle system is the totally asymmetric simple exclusion process (TASEP). A huge corpus of theoretical results has been derived for the TASEP (equivalently, for the KPZ interface in 1+1 dimensions), and there are also experimental realizations (see e.g. [2,5,12,21] and references therein). To the best of our knowledge, the observable corresponding to the total area has not been studied. The average area ⟨A_T⟩ = T²/6 is well known [34], and the growth of the variance ⟨A_T²⟩_c ∼ T^{7/3} can be estimated using a scaling argument [54,56]. The precise calculation of the variance (and of higher moments) is an open problem. We recall that the MFT scheme cannot be applied per se to the TASEP, which is a non-diffusive system.
The extension to higher dimensions is an outstanding challenge. Scaling laws for the average volume and its variance can be expressed in terms of scaling exponents of the continuous growth models (see Appendix D and [56]). Little is known about the scaling exponents above 1 + 1 dimensions [57,58], however, especially in the situation with a magnetic field (when the growth process is in the KPZ universality class). The absence of a mapping of the interface dynamics onto a simple lattice gas is another barrier which currently prevents us from applying the MFT to the computation of the statistics of the growing volume.
Acknowledgments
We are grateful to S. Prolhac for discussions. We thank S. Mallick for a critical reading of the manuscript. The research of PLK was supported by a grant from BSF.
Appendix A. Derivation of the MFT action and the associated Euler-Lagrange equation
At a macroscopic scale the time evolution of the coarse-grained density profile is governed by the fluctuating hydrodynamic equation (29). Considering all possible evolutions of ρ(x,t) in the time interval [0,T], the moment generating function of the area A_T can be written as a path integral over ρ(x,t), Eq. (A.1). The Dirac delta function δ(z) appearing there ensures that contributions only come from the paths that follow Eq. (29). The average is over the history of the noise η(x,t).
The delta function can be replaced by a path integral over a conjugate field ρ̂(x,t), which leads to Eq. (A.2). Using integration by parts and assuming that ∂_x ρ and σ(ρ) vanish at x → ±∞, the expression simplifies to Eq. (A.3). Since η(x,t) is a Gaussian noise with the covariance (30), averaging over it produces the quadratic term (1/2) ∫∫ σ(ρ) (∂_x ρ̂)². Substituting this in Eq. (A.3) we obtain the announced result, Eq. (34), for the action. At large T, this effective action grows as √T [45] and the path integral is dominated by its saddle point. We now minimize the action. Denote by (q,p) ≡ (ρ,ρ̂) the path that minimizes the action and take a small variation (δρ, δρ̂) around this path. The change in the action S[ρ,ρ̂] of Eq. (34) corresponding to this variation is Eq. (A.4), where the functional derivatives are taken at the optimal path (q,p).
For the action S[q,p] to be a minimum, the variation must vanish. Since δρ(x,t) and δρ̂(x,t) are arbitrary, the terms inside the curly brackets in the last two integrals in Eq. (A.4) must vanish. This leads to the governing equations (36)-(37).
The first integral in Eq. (A.4) vanishes due to the fixed initial profile ρ(x,0) = Θ(−x), which implies that the variation δρ(x,0) = 0. This also provides the first boundary condition in Eq. (38). The vanishing of the second integral leads to the second boundary condition. As the final profile ρ(x,T) fluctuates, the variation δρ(x,T) is arbitrary. Thus the term inside the curly brackets in the second integral in Eq. (A.4) must vanish, leading to the condition p(x,T) = λ δA_T[q]/δq(x,T).
Using the expression for A T [q] from Eq.(32) we obtain the boundary conditions (38).
Appendix B. Derivation of the third cumulant of the area A T
To compute the third cumulant (50) we need to know the expansion of q(x,t) up to first order. Substituting q_0 from Eq. (42) and q_1 from Eq. (57) into Eq. (50) we obtain Eq. (B.1). To simplify the integral on the right-hand side we use the new variables z = y/(2√τ), ω = (x−y)/(2√(t−τ)), and α = τ/t; after straightforward manipulations we arrive at Eq. (B.2), written with a shorthand notation defined there. It turns out that the two sides of the identity (B.3) coincide. To establish this identity we first note that for α = 0 both sides of Eq. (B.3) vanish. Next, we differentiate both sides with respect to α and show that the outcomes are identical. The derivative of the left-hand side of Eq. (B.3) gives a Gaussian integral convoluted with an error function. The integral over ω is a Gaussian integral and we compute it first. The integrals over z are then computed through integration by parts. Taking the derivative of the expression on the right-hand side of Eq. (B.3) yields the same result, which establishes the identity.

Appendix C. Derivation of the fourth cumulant of the area A_T

The expression for the fourth cumulant (51) involves ∂_x p_2 and q_0, q_1, q_2. First we use Eq. (62) to calculate ∂_x p_2(x,t). Utilizing the identity ∂_x g(y,τ|x,t) = −∂_y g(y,τ|x,t) and integration by parts to transfer the partial derivative onto Erf(y/(2√τ)), we arrive at Eq. (C.1), where we additionally used the identity ∂_y Erf(y/(2√τ)) = 2 g(y,τ|0,0). The integral over y is a Gaussian integral which we compute to get Eq. (C.2). This last integral can be evaluated, but it proves more convenient to keep the integral form.
We will also need an alternative formula for q_1(x,t). In Eq. (57) the integration over y can be performed using a general identity from [59], which leads to Eq. (C.5). We now use σ_2 from Eq. (52) and rewrite the expression (51) for the fourth cumulant as a sum of three terms, I_1, I_2 and I_3.

Computation of I_1. Using Eq. (63) we rewrite I_1 as Eq. (C.10). The term inside the curly brackets is equal to ∂_y p_2(y,τ), see Eq. (C.1). This shows that the second integral in (C.10) is indeed equal to 2I_3, and thus we can rearrange Eq. (C.10). We perform the integral over x (using ∂_y g(x,t|y,τ) = −∂_x g(x,t|y,τ) and integration by parts) and get

∫_{−∞}^{∞} dx [1 − 2q_0(x,t)] ∂_y g(x,t|y,τ) = 2 g(y, 2t−τ|0,0),   (C.11)

and we substitute q_1(y,τ) from Eq. (C.5). We integrate over y by using the identity (C.12)‡ and obtain Eq. (C.13). The T dependence can be extracted by defining a rescaled variable s = τ/t and integrating over t. The last two integrals can be computed with Mathematica to give Eq. (C.15).
Computation of I_2
Substituting q_1(x,t) from Eq. (57) into I_2 we obtain Eq. (C.16). Computing a Gaussian integral over x we simplify Eq. (C.16) to Eq. (C.17). We extract the T dependence by defining the new variables ξ = y_1/(2√τ_1), η = y_2/(2√τ_2), m = τ_1/t and n = τ_2/t, which leads to Eq. (C.18).

‡ Identity (C.12) appears in Ref. [59]. One can also establish the validity of (C.12) using the same method as in the derivation of (B.3). First, one notices that (C.12) is valid for a = 0 or b = 0. For general values of the parameters, it can be proved by showing that the derivatives of both sides with respect to a are equal.
To integrate over ξ and η we use an integral representation of the error function, Eq. (C.19).
This allows us to simplify Eq. (C.18) to Eq. (C.20).
Appendix D. Alternative derivation of the variance via fluctuating hydrodynamics

A scaling argument based on the Edwards-Wilkinson description (D.3) fixes the growth of the cumulants in terms of the roughness and dynamic exponents ζ and z. Substituting the values of ζ and z yields the correct T dependence of ⟨A_T²⟩_c, as in Eq. (3). This simple argument can be generalized to higher dimensions as well, as done for a crystal growth problem in [56].
One can go further and compute the exact expressions of the average and the variance of d_T and A_T from this Edwards-Wilkinson description, Eq. (D.3). A simple analysis [51] yields the correct expressions for ⟨d_T⟩, ⟨d_T²⟩_c and ⟨A_T⟩. However, the variance ⟨A_T²⟩_c calculated using Eq. (D.3) diverges at any non-zero time T. This is related to the fact that the noise amplitude Γ is non-vanishing even far from the origin. Hence, a blindfolded application of the Edwards-Wilkinson equation is inadequate for the integral properties of the interface and we must use the correct noise amplitude. This is achieved by taking the noise amplitude to be σ(ρ) in Eq. (31), which vanishes when the density ρ is zero or one (far from the origin in our setting). This qualitatively explains the virtues of the Langevin equation discussed in section 4. We have shown that the MFT allows us to calculate the cumulants perturbatively. However, the analysis is tedious and involved. If we are interested only in calculating the variance of A_T, it can be obtained in a rather simpler setting by assuming small fluctuations and linearizing Eq. (29) around the hydrodynamic solution. We now explain briefly how this can be done.
Let us write ρ(x,t) = ρ_0(x,t) + u(x,t), where the deterministic part ρ_0(x,t) is the solution of the diffusion equation ∂_t ρ_0 = ∂_{xx} ρ_0, i.e., ρ_0(x,t) = (1/2) Erfc(x/√(4t)) in our case of the step initial condition. The fluctuating field u(x,t) satisfies the linearized equation ∂_t u = ∂_{xx} u + ∂_x[√(σ(ρ_0)) η], whose solution is expressed through the diffusion propagator g(x,t|y,τ) of Eq. (45). Note that ⟨u(x,t)⟩ = 0 because ⟨η(x,t)⟩ = 0. Using this together with Eq. (32) one can check that ⟨A_T⟩ = T. Similarly, the variance is given by

⟨A_T²⟩_c = ∫_0^T dt ∫ dx σ(ρ_0(x,t)),

which is identical to Eq. (49) found previously by using the MFT. We emphasize that the assumption of small fluctuations around the hydrodynamic profile does not give the correct results for the higher cumulants. Thus one has to resort to the more detailed perturbative analysis described in section 4.

Appendix E. Statistics of the half-area H_T

We now consider non-interacting particles, each hopping with symmetric jump rates 1, independently of the others. An advantage of the non-interacting particles is that the analysis is simpler and the cumulant generating function can be determined for both annealed and quenched initial conditions.
To illustrate the derivation, we consider the simplest case of a step initial profile: all sites on the negative half-line, including the zeroth site, are occupied, i.e., ρ = 1. Let y_j(T) be the position at time T of the particle which started at site −j at time t = 0. By definition, the half-area is H_T = Σ_{j≥0} y_j(T) Θ[y_j(T)], where Θ(x) is the Heaviside step function. As the particles are independent of each other, the generating function of H_T factorizes into a product over single-particle averages, where the angular brackets denote an average over the history. Each y_j(T) is a random variable depending on the history of the jth particle. It is easy to show that

⟨e^{λ y_j(T) Θ[y_j(T)]}⟩ = Σ_{y≥1} e^{λ y} P_T(j|y) + 1 − Σ_{y≥1} P_T(j|y),

where P_T(j|y) denotes the probability for the jth particle to be at site y at time T. One can extend the analysis to an initial state with any general density ρ > 0. The result depends on the type of initial state considered: quenched or annealed. In the continuum limit, the cumulant generating function κ_T(λ) takes a closed form. In the annealed case we considered an equilibrium initial state where each site on the negative half-line is populated following a Poisson distribution with average density ρ.
The result for κ_T(λ) is different for other fluctuating initial states.
The Role of the Patient Information Leaflet in Patients' Medication Therapy: A Case Study within the Kumasi Metropolis of Ghana
One of the tools used in providing comprehensible medication information to patients on their medication use, for improved adherence and subsequent optimal therapeutic effect, is the Patient Information (PI) leaflet. In Ghana, the patient information leaflet is available through various sources, including health-care professionals (HCPs) and electronic forms. The World Health Organization (WHO) estimates that more than 70% of patients, especially in developing countries, who receive medications do not read the accompanying leaflet. This study assessed the role of the patient information leaflet in patients' medication therapy in the Kumasi metropolis of Ghana. A random cross-sectional survey was conducted in various hospitals and pharmacies within selected districts in the Kumasi metropolis. The survey revealed that 96.9% of the sampled respondents (n = 300) were provided with PI leaflets on their medicines, while only 3.1% of them indicated otherwise. Among the proportion of respondents who were provided with PI leaflets, 66.7% of them read the information on the drug leaflets whilst the remaining 33.3% did not. Ultimately, 62.4% of those who read the PI leaflets were influenced to discontinue their medication. In conclusion, reading of the drug information leaflet was higher than that found in previous studies in Ghana. Reading the leaflet did not increase adherence but aroused anxiety and decreased adherence in some patients. A large number of the patients who were given the PI leaflets indicated that they did not provide them with the needed information.
Introduction
The need for high-quality written information for patients about their medications is well established in the literature [1,2]. The Patient Information (PI) leaflet is a piece of standardized written information about the safe and effective use of a prescription or specified over-the-counter medicine, prepared by pharmaceutical manufacturers in one or more of three formats: package inserts, loose leaflets, and electronic. Many countries require Patient Information (PI) leaflets about drug therapy to be included in the medication package, and their content is mandated by regulatory guidelines, with Ghana being no exception [3]. The issue of PI leaflets has drawn much criticism over the years, from consumer representative groups, health-care professionals (HCPs), pharmaceutical manufacturers, and government bodies. Based on a patient-centered approach, the aim of a written patient information leaflet is to support patients in tasks such as decision making and/or taking medication correctly [4]. More than ever, consumers now want to know about their medicines and their impact in order to make informed choices. Providing patients with informative, well-set-out leaflets which are easy to navigate can lead to improved quality of life, reduced anxiety, early recognition of adverse side effects, and a clearer understanding of the treatment regimen. The World Health Organization defines medication adherence as the degree to which a person's behavior corresponds with the agreed recommendations from a health-care provider [5,6]. One major factor that influences adherence is the patient's ability to read and understand medication instructions and the accompanying leaflet. Adherence to treatment is a key determinant of effective therapy. Failure to conform is a major issue that affects not only the patient but also the health-care system. Nonadherence to medicines leads to severe deterioration of illness, mortality, and increased cost of health care. Despite PI leaflets being available in Ghana for over 15 years, research undertaken in 2004 suggested that less than 30% of consumers received a PI leaflet. The formation of committees such as the Patient Engagement Advisory Committee (PEAC), along with the development and availability of usability guidelines and core PI leaflet templates for PI leaflet writers, has improved and standardized the patient information leaflet document [7,8]. However, these have not resulted in a significant increase in utilization and provision by HCPs or consumers. The introduction of remuneration for pharmacists, repeated reports, and exhortations by pharmacy bodies have had little impact on increasing PI leaflet provision. Consumers are still largely unaware of their existence despite consumer health representative bodies lobbying heavily for action to increase consumer awareness. Written medicine information in conjunction with verbal counselling has proven to have a positive impact on consumers and may aid in increasing knowledge, satisfaction, and adherence to medicine therapy [9]. This study, therefore, sought to assess the role of the PI leaflet in patients' medication therapy in the Kumasi metropolis of Ghana and to provide current information on the prevalence of PI leaflet use amongst patients.
Study Area.
A random cross-sectional survey was conducted in various hospitals and pharmacies within four selected districts in the Kumasi metropolis.
Period of Study.
The study was conducted between November 2018 and May 2019.
Questionnaire Pretesting.
A pilot survey was carried out at Atwima Manhyia in the Atwima Nwabiagya Municipality. To avoid bias, random respondents (males and females; literates and illiterates) were selected within the municipality. Fifty (50) questionnaires were issued to these respondents and were read out to those who could not read or write. The consent of participants was sought prior to the administration of questionnaires. The duration, competency, and suitability of the questionnaires (time frame for answering the questions, ease of understanding the questions and answers, appropriateness of questions to the survey objectives) were pretested, and these produced a Cronbach's alpha of 0.916, an indication that the questionnaires were well structured.
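For readers unfamiliar with the reliability figure quoted above, a short sketch of how Cronbach's alpha is computed is given below; the score matrix is hypothetical (the study's pilot data are not published), and only the formula itself is standard:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_var / total_var)

# Hypothetical 5-point Likert responses (50 respondents x 10 items)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))
items = np.clip(base + rng.integers(-1, 2, size=(50, 10)), 1, 5)
print(cronbach_alpha(items))
```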
Sample Size Selection.
The sample size was determined based on the population size of the four selected districts [10]. A total of 300 participants were involved.
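The paper states only that the sample size followed the district population figures [10]; for illustration, one common census-based calculation is Yamane's formula, sketched below with assumed values for the population N and the margin of error e (neither is taken from the paper):

```python
def yamane_sample_size(N, e):
    """Yamane's formula: n = N / (1 + N e^2)."""
    return round(N / (1.0 + N * e ** 2))

# Both figures below are illustrative assumptions, not values from the paper:
print(yamane_sample_size(N=500_000, e=0.0577))   # ~ 300
```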
Sampling.
Participants were randomly selected to avoid bias and included both men and women who were at least sixteen years old at the time of sampling. The consent of participants was sought prior to the administration of questionnaires.
Data Collection Method.
The questionnaires were given to both male and female participants and were read out in the local Akan dialect to patients who could not read or write. Questionnaires were retrieved the same day from participants to avoid losses. The questionnaires were collated and analyzed using the Statistical Package for the Social Sciences (SPSS).
Sociodemographics of Respondents.
We investigated the sociodemographics of the respondents to determine their ages and sex. A majority of the sampled respondents (39%) were between the ages of 21 and 30, giving a fair idea of the age demographics of those within the metropolis who patronized health-care facilities. Notably, 52% of these respondents were females while 48% were males. Almost all the age groups had a majority of female respondents, with the exception of the 31-40 years group (Table 1).
Respondents' Sources of Information on the Prescribed Medicines.
Patients' information on their prescribed medication can be obtained from varied sources. Taking this into account, the sampled respondents were grouped according to four information sources: pharmacy staff, doctor, friend, and relative. The study uncovered that the majority (37.6%) of the participants obtained information about medicines from pharmacy staff, 32.3% from friends, and 23.6% from relatives, with doctors being the least common source of information at a frequency of 6.3% (Table 2).
Provision and Purpose of the PI Leaflet.
The study also sought information on whether respondents were given drug leaflets with their current/last medication, as well as their understanding of the purpose of such leaflets (Figures 1(a) and 1(b)). The results (Figure 1(a)) show that 96.9% of the sampled respondents were provided with drug leaflets on the medicines they bought, while only 3.1% of them indicated otherwise. A majority (49.1%) of the respondents indicated that PI leaflets provide information on how their medicines should be taken, while only a few (5.4%) indicated that the PI leaflet was for a decorative effect (Figure 1(b)).
Reading of the Drug Leaflet and Its Effects on Respondents' Adherence to Medication.
The survey further revealed that the majority (66.7%) of the respondents who were provided with PI leaflets read the information on the drug leaflets, whilst 33.3% of them indicated otherwise. The participants who read the PI leaflets were asked if there was any instance where they would stop taking their medication because of some information from the leaflet. A majority (62.4%) responded yes, with 37.6% of them responding no (Table 3).
Respondents were also asked about two further points: whether every patient should be given a PI leaflet, and whether the leaflet should be written in their local dialect. A majority of the respondents (78.4%) who read the PI leaflet indicated that they would recommend PI leaflets be given to other patients to enhance their decisions on whether to take their medications. A minority of the respondents (21.6%) indicated otherwise (Table 4). In terms of recommending information in the local dialect, the results showed that a greater proportion of the respondents (80.4%) recommended that the information on the drug leaflet be written in the local dialect, compared to 19.6% who did not.
Discussion
This study investigated the role of the patient information leaflet in patients' medication therapy. The patient information leaflet serves as a useful document that informs and guides medicine users and/or their caretakers on their medication. It was observed from the study that the majority (52%) of the sampled respondents were females. This was because the females availed themselves to provide the needed responses compared to the males, who indicated that they did not have time to answer the questionnaires. Furthermore, females are known to frequent health-care facilities more often than their male counterparts [11].
Sources of medicine information include medical professionals, the Internet, and relatives, among others. The quality of the received information may vary from source to source. It is, therefore, very important to encourage patients to obtain information on their medicines from the right sources. This study revealed that, despite individuals having multiple sources of information about a particular drug, the most dominant source of information was the pharmacy staff. This might be due to the fact that pharmacy staff, including pharmacists, are well-trained medical professionals who have adequate knowledge about the drugs prescribed to individuals and, hence, can provide the requisite information for patients [12]. In hospitals, pharmacists are usually the last health-care professionals to interact with the patient before discharge and, hence, are required to provide the patient with all the needed information on their prescribed drugs. Again, at the community pharmacy, it is expected that the pharmacist remains the sole source of interaction with the patient and the primary source of information on the prescribed drugs [13,14]. Thus, the pharmacist, being the lead pharmacy staff, has an unparalleled role to play in ensuring that PI leaflets are appreciated and well utilized by patients. This will reduce the proportion of the populace who rely on friends and relatives for information concerning their medications. Patients may not obtain accurate and needed information from friends and relatives, and this will have a negative impact on patients' adherence to their medication therapy [15].
There is an increase in the percentage of patients who are provided with PI leaflets compared to previous studies, in which only 30% of patients who visited health-care facilities in Ghana were provided with a PI leaflet [3]. The rapid increase in the provision of PI leaflets to patients can be attributed to the strict enforcement of PI leaflet inclusion in drug packaging by the FDA, requests for PI leaflets by patients owing to increased awareness created by health-care practitioners and regulatory agencies, and a general increase in the literacy rate among the general populace. Again, advertisement and increased awareness of the presence and significance of PI leaflets by the FDA and health-care professionals are responsible for this observed percentage distribution [16]. Although the majority of respondents know that the PI leaflet provides information on how to use their medications, a few think it is for decorative purposes. There is, therefore, the need for regulatory bodies such as the FDA to continue to educate the general population about the function and importance of these leaflets.
An increase in the literacy rate among the general populace is a major reason why a majority of the populace read the PI leaflet. According to the most recent census (2010) conducted by the Ghana Statistical Service, the literacy rate for the population 15 years and older stands at 71.5%, with 45.1% being literate in both English and a Ghanaian language.
The Ashanti region recorded a 15.7% literacy rate for the population 15 years and older, with 11.0% being literate in both English and a Ghanaian language [10]. However, illiteracy (5%), lack of time (2%), the bulky nature (16.3%), and the complexity and unattractiveness (10%) of the PI leaflet were the major reasons given by respondents for not reading it. An important factor to consider in designing leaflets is readability, as well as ease of understanding. To ensure this, easily comprehensible, legible text should be used in the creation of the patient information leaflet [17].
It was evident that the majority of the respondents were likely to stop taking a particular medicine because of the information on the drug leaflet. Reasons provided by the respondents for not taking their medications were that the information on the leaflet showed the drug had more side effects than benefits (15%), that some of the information on the PI leaflet was scary (40%), and the complex nature of the information on the PI leaflet (7.4%).
This might be a result of the difficulty encountered in reading or understanding the language in which the information was written. According to [17], among the 15% out of 219 specialists who expressed their opinions in the free-text field, the most frequently criticized problems with patient information leaflets were lack of comprehensibility and clarity. Nearly one-third of these specialists expressed further criticisms of the package inserts. More guidelines on good patient information leaflet design must be put in place to help make these leaflets patient-centred [18]. Again, an increase in the font size of the content on patient information leaflets can have a positive effect on the clarity and legibility of information. This will reduce nonadherence to medication therapy by patients attributable to the PI leaflet. Furthermore, the majority of the respondents who received patient information leaflets recommended that they should be provided to all patients. A majority of respondents also indicated that the information on drug leaflets should be written in the local dialect to ensure ease of understanding.
This further corroborates the importance of PI leaflets in drug therapy.
Conclusions
The study revealed that the majority (96.9%) of the respondents were given drug leaflets, and that the proportion of patients reading the patient information leaflet is 66.7%, which is higher than in previous studies. The study also showed that the majority of respondents who received PI leaflets recommend that these leaflets be made available to all patients. Attention should be drawn to the fact that having PI leaflets written in the local dialect to enhance patients' understanding is desired and recommended.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
AUDACITY: A comprehensive approach for the detection and classification of Runs of Homozygosity in medical and population genomics
Runs of Homozygosity (RoHs) are popular among geneticists as the footprint of demographic processes, evolutionary forces, and inbreeding in shaping our genome, and are known to confer risk of Mendelian and complex diseases. Notwithstanding growing interest in their study, there is an unmet need for reliable and rapid methods for genomic analyses in large data sets. AUDACITY is a tool integrating a novel RoH detection algorithm and an autozygosity prediction score for the prioritization of mutation-surrounding regions. It processes data in VCF file format and outperforms existing methods in identifying RoHs of any size. Simulations and analysis of real exomes/genomes show its potential to foster future RoH studies in medical and population genomics.
Introduction
Runs of Homozygosity (RoH) are sizeable stretches of consecutive homozygous genotypes that arise in the genome of an individual who receives two copies of an identical ancestral haplotype, a situation known as autozygosity [6]. RoHs are present in any human genome, but their size generally reflects the number of generations over which recombination had the chance to break up haplotypes descending from a common parental ancestor.
As a consequence, the RoH burden increases in the offspring of consanguineous matings, as well as within isolated populations as a result of elevated levels of population background relatedness [25]. Within such populations, apparently unrelated parents often turn out to be connected as closely as third cousins or more when analyzed at the genome-wide level [12]. Conversely, RoH number shows distinctive population patterns that seem to follow the "out of Africa" serial-migration model, being less present in Africans while spreading in the other continental groups because of successive migrations that decreased the effective population size, reducing haplotype diversity and thus favoring the occurrence of RoHs [25].
Because RoHs are enriched for rare deleterious variants [34], autozygosity is associated with an increased risk of autosomal recessive diseases [4]. In patients born to consanguineous parents, the homozygous disease-causing variant usually resides within long (several to tens of megabases) tracts of autozygosity.
Exploiting this, homozygosity mapping has, over the last decades, successfully identified genes underlying many hundreds of rare recessive diseases [2]. While a few, randomly distributed long RoHs stand out in the genome of inbred individuals, shorter RoHs are frequent also in outbred populations and tend to be relatively concentrated in genomic regions mainly governed by linkage disequilibrium (LD). Nonetheless, short RoHs may represent true autozygosity [25] and may surround autosomal recessive genes, likely as a result of founder effects [14,24]. Beyond Mendelian genetics, RoHs have recently been investigated in complex conditions and quantitative traits and have been shown to be indicative of selection signals [8].
The gold-standard technology for RoH detection is still considered to be SNP arrays (aSNP), although, following the advent of Next Generation Sequencing (NGS), a number of methods have been either adapted from aSNP or originally tailored to Whole Exome Sequencing (WES) data [26]. Arrays have lower genotyping error rates than NGS [36] but give access only to a fixed set of about 1 million common SNPs. As WES and now Whole Genome Sequencing (WGS) are becoming accessible to research and diagnostic laboratories worldwide, RoH studies are making ever more extensive use of NGS data in medical as well as population genomics [2,3,5,29,8].
However, an approach is lacking that comprehensively addresses the problem of reliably identifying autozygosity by detecting RoHs of any size, one that is sensitive to the different characteristics of the data underlying WES and WGS and robust enough to handle computationally intensive tasks in ever larger data sets.
We thus aimed to develop a rapid and accurate approach for RoH detection and characterization that exploits genotypes in Variant Call Format (VCF) originating from either WES or WGS. To this end, we modeled NGS genotype calls by means of DIDOH3M2, a discrete-input, discrete-output Hidden Markov Model (HMM) obtained as a modification of our previous algorithm H3M2 [19], and calculated for each of the identified RoHs a logarithm of the odds (RLOD) score reflecting its probability of being autozygous (Fig. 1).
We packaged DIDOH3M2 and the RLOD score in the AUDACITY (AUtozygosity iDentification And ClassIfication Tool) software tool, and we show how this approach outperforms current strategies to characterize RoHs that, irrespective of their size, are relevant for population studies exploiting WES/WGS data as well as for the identification of genes underlying recessive diseases.
Ethical considerations
Written informed consents were obtained from all patients or their parents/legal guardians who underwent WES for diagnostic or research purposes at the Medical Genetics Unit, Sant'Orsola-Malpighi University Hospital, and analysis of their WES was approved by the local Medical Ethics Committees.
DIDOH3M2 algorithm
The HMM underlying DIDOH3M2 (discrete-input, discrete-output homozygosity heterogeneous hidden Markov model) is a two-state HMM where the hidden states are the non-homozygous state ($S_1$) and the homozygous state ($S_2$), and the observations are the genotypes $G_i$ assigned to each interrogated SNV $i$ ($G_i \in \{G_{Homr}, G_{Het}, G_{Homa}\}$, where $G_{Homr}$ is homozygous reference, $G_{Het}$ is heterozygous, and $G_{Homa}$ is homozygous alternative) along the length of the genome.
The emission matrix, $B$, has the following form, with rows corresponding to states $S_1$ and $S_2$ and columns to genotypes $G_{Homr}$, $G_{Het}$, $G_{Homa}$:

$$B = \begin{pmatrix} (1-R_1)/2 & R_1 & (1-R_1)/2 \\ (1-R_2)/2 & R_2 & (1-R_2)/2 \end{pmatrix}$$

where $R_1$ and $R_2$ are the probabilities of finding a heterozygous SNV in non-homozygous and homozygous genomic regions, respectively. In practice, $R_1$ models the proportion of heterozygous SNVs in non-homozygous regions, while $R_2$ models the presence of heterozygous SNVs in homozygous regions, which results mainly from sequencing and alignment errors. We incorporated the distance between adjacent SNVs ($d_i$) and the likelihood of each observed genotype ($P_i$) into the transition probabilities of the HMM by considering a modified transition matrix defined for $1 \le i \le n-1$, where $n$ is the number of genomic markers:

$$A_i = \begin{pmatrix} 1 - p_1 f_i & p_1 f_i \\ p_2 f_i & 1 - p_2 f_i \end{pmatrix}, \qquad f_i = \frac{d_i}{d_{Norm}} + \frac{1 - P_i}{P_{Norm}}$$

where $p_1$ ($p_2$) is the probability of shifting from $S_1$ to $S_2$ ($S_2$ to $S_1$) in a homogeneous HMM, $d_{Norm}$ is the distance normalization parameter, and $P_{Norm}$ is the genotype likelihood normalization parameter.
As a result, we obtained a heterogeneous HMM in which the larger $d_i$ (the smaller $P_i$), the greater the probability of shifting from one state to the other. $d_{Norm}$ (the distance normalization parameter) and $P_{Norm}$ (the genotype likelihood normalization parameter) modulate the impact of $d_i$ and $P_i$, respectively, on the transition probability between the two hidden states ($S_1$-$S_2$). $p_1$, $p_2$, $R_1$, and $R_2$, together with $d_{Norm}$ and $P_{Norm}$, are all set as parameters of DIDOH3M2 instead of being estimated by an expectation-maximization algorithm, since they are useful for setting the resolution of RoH detection in terms of region size and SNV number. Finally, we use the Viterbi algorithm to estimate the best sequence of $S_1$ and $S_2$ states and to consequently associate each $G_i$ with one of the two states, allowing discrimination between homozygous and non-homozygous genomic regions and thus identifying RoHs.
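As an illustration of this decoding step, the following is a minimal Python sketch of a two-state heterogeneous HMM in the spirit of DIDOH3M2. The equal split of the emission mass between the two homozygous genotypes and the exact scaling of the transition probabilities by $f_i$ are assumptions of this sketch, not the published implementation (which is written in Perl, R, and Fortran).

```python
import numpy as np

def viterbi_roh(genotypes, gaps, likelihoods,
                R1=0.02, R2=0.001, p1=0.1, p2=0.1,
                d_norm=1e5, p_norm=1.0):
    """Decode homozygous (1) vs non-homozygous (0) states per SNV.

    genotypes: 0 = hom-ref, 1 = het, 2 = hom-alt (length n)
    gaps: distances between adjacent SNVs (length n - 1)
    likelihoods: genotype likelihoods P_i in (0, 1] (length n)
    """
    n = len(genotypes)
    # Emission probabilities: rows = states (S1, S2), cols = genotypes;
    # the equal split between the two homozygous genotypes is assumed.
    B = np.array([[(1 - R1) / 2, R1, (1 - R1) / 2],
                  [(1 - R2) / 2, R2, (1 - R2) / 2]])
    logB = np.log(B)
    # One scaling factor per transition (assumed form of f_i).
    f = np.clip(gaps / d_norm + (1 - likelihoods[1:]) / p_norm, 1e-6, 1.0)
    V = np.zeros((n, 2))                  # best-path log-probabilities
    ptr = np.zeros((n, 2), dtype=int)     # backpointers
    V[0] = np.log([0.5, 0.5]) + logB[:, genotypes[0]]
    for i in range(1, n):
        a12, a21 = p1 * f[i - 1], p2 * f[i - 1]
        logA = np.log(np.array([[1 - a12, a12],
                                [a21, 1 - a21]]))
        for s in range(2):
            cand = V[i - 1] + logA[:, s]
            ptr[i, s] = int(np.argmax(cand))
            V[i, s] = cand[ptr[i, s]] + logB[s, genotypes[i]]
    path = np.zeros(n, dtype=int)
    path[-1] = int(np.argmax(V[-1]))
    for i in range(n - 2, -1, -1):        # backtrace
        path[i] = ptr[i + 1, path[i + 1]]
    return path

# Toy example: 10 SNVs, equal spacing, high genotype likelihoods.
g = np.array([0, 2, 0, 0, 2, 0, 1, 0, 2, 2])
states = viterbi_roh(g, gaps=np.full(9, 1000.0),
                     likelihoods=np.full(10, 0.99))
```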
Evaluation dataset and performance comparison
In its Phase 1, the 1000 Genomes Project (1KGP) consortium, by combining low-coverage whole-genome sequencing (WGS) and high-coverage whole-exome sequencing (WES) of 1092 individuals from 14 populations of Europe, East Asia, sub-Saharan Africa, and the Americas, identified around 38 million single-nucleotide polymorphic positions and 1.4 million short insertions and deletions [1].
In order to test the performance of our algorithm and of the other three state-of-the-art methods on real data, we used them to analyze the WGS and WES genotype data of 200 individuals (50 of European ancestry, 50 of African ancestry, 50 of American ancestry, and 50 of Asian ancestry) sequenced by the 1KGP consortium (see Supplemental materials). For WGS data analyses we considered the complete set of biallelic SNVs (around 38 million), while for WES analyses we included all the SNVs falling within the 1KGP exomic target regions.
To evaluate the ability of DIDOH3M2 to identify RoHs from WES and WGS data and to compare its performance with that of three other state-of-the-art methods (PLINK, BCFtools, VCFtools; see Supplemental materials), we generated a gold-standard RoH dataset using the 1KGP SNV genotype calls of the aforementioned 200 individuals. To this end, we considered as gold-standard RoHs all the regions ≥ 100 Kb containing at least 200 consecutive homozygous SNVs.
To compare the performance of DIDOH3M2 and the existing tools, we calculated precision and recall as follows: for precision, we considered all the SNVs called within RoHs by each of the four approaches and calculated the fraction of these SNVs also called as homozygous in the gold-standard dataset; for recall, we considered all the SNVs called in RoHs in the gold-standard dataset and calculated the fraction of these SNVs called as homozygous by each of the four approaches.
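A minimal sketch of this SNV-level precision/recall computation, treating call sets as sets of SNV coordinates (toy coordinates, not 1KGP data):

```python
def precision_recall(called_snvs, gold_snvs):
    """SNV-level precision and recall between two RoH call sets."""
    called, gold = set(called_snvs), set(gold_snvs)
    tp = len(called & gold)
    precision = tp / len(called) if called else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Toy coordinates: 3 of 4 called SNVs are in the gold standard, and
# 3 of 4 gold-standard SNVs were called, so both measures are 0.75.
print(precision_recall([101, 102, 103, 250], [101, 102, 103, 104]))
```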
Generation of WGS/WES synthetic variant call sets in offspring to consanguineous unions
To simulate realistic WGS/WES data of offspring to consanguineous unions, we created synthetic variant call sets using a gene-dropping strategy (Supplementary Fig. 4). To speed up the process of dropping dense genetic maps made of hundreds of thousands or millions of SNVs, such as in WES or WGS, we followed the simulation framework of [11]. We generated a genetic map for each of the 22 autosomes by picking from Rutgers Map v.3a (http://compgen.rutgers.edu/maps) one biallelic SNP with minor allele frequency (MAF) in the range 0.3-0.7 approximately every 0.05 cM. The resulting SNP backbone (64 K autosomal SNPs) was used to simulate recombination patterns conditional on a disease-linked locus with the Markerdrop utility of the MORGAN v.3.1.1 suite (https://www.stat.washington.edu/thompson/Genepi/MORGAN/Morgan.shtml).
We constructed genealogies with consanguinity loops formed by unions between 1st, 2nd, or 3rd cousins (1C/2C/3C), each with a single offspring to the consanguineous parents (index offspring). To condition recombination patterns on the presence of a recessive disease-linked locus, we forced a specific locus (chr1:212677319) to be inherited by the index offspring as two copies of an ancestral allele dropping from one of the two common ancestors of the parents in the pedigree.
Since Markerdrop associates a founder-tracking label with each dropping haplotype, we were able to trace the pairs of adjacent SNPs between which the simulated recombinations took place. To locate each recombination spot at an exact genomic position, we randomly drew a single base-pair coordinate, according to the hg19 reference genome, between every two adjacent SNPs with different founder-tracking labels. We assigned to each of the two SNP-backbone haplotypes in the family founders one among 160 phased 1KGP Phase 1 SNV haplotypes of European unrelated individuals [1]. For WGS, we used all SNVs called in the European 1KGP samples (about 16 M SNVs). For WES, to obtain a set of SNV sites representative of the most widely adopted exome target enrichment kits, we used the subset of SNV sites (about 470 K) that had a median depth of at least 20X calculated across 5 of our WES performed in-house with each of the following kits: Agilent SureSelect Human All Exon v6 (Agilent Technologies Inc., La Jolla, CA, USA), SeqCap EZ v2.0/v3.0 (Roche NimbleGen, Basel, Switzerland), BGI (BGI, Shenzhen, China), and Nextera Rapid Capture Exome (Illumina Inc., San Diego, CA, USA).
WGS/WES variant call sets were eventually created by superimposing, on the SNP-backbone haplotypes of the index offspring, the corresponding 1KGP haplotypes according to the recombination patterns traced by the founder-tracking labels.
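The breakpoint-placement step lends itself to a compact sketch; the function below is a hypothetical re-implementation for illustration, not part of the MORGAN suite:

```python
import random

# Wherever the founder-tracking label changes between adjacent backbone
# SNPs, one base-pair coordinate is drawn uniformly between the two SNPs.
def place_breakpoints(snp_positions, founder_labels, rng=None):
    rng = rng or random.Random(0)
    breakpoints = []
    pairs = zip(zip(snp_positions, founder_labels),
                zip(snp_positions[1:], founder_labels[1:]))
    for (pos_a, lab_a), (pos_b, lab_b) in pairs:
        if lab_a != lab_b:
            breakpoints.append(rng.randint(pos_a + 1, pos_b))
    return breakpoints

# Example: the label switches between the 2nd and 3rd SNPs.
print(place_breakpoints([100, 5000, 9000], ["F1", "F1", "F2"]))
```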
To generate sets of recombination patterns large enough to reproduce a representative spectrum of the inbreeding coefficient (F) in the index offspring, we ran 100 Markerdrop simulations for each pedigree with a 1C, 2C, or 3C genealogy loop. To account for the variability ascribable to different SNV sets among the selected 1KGP subjects, we assigned 10 and 5 different combinations of 1KGP founders' haplotype pairs for WGS and WES, respectively.
Definition of true autozygous/non-autozygous RoH
To define true RoHs, we sought the minimum number of consecutive SNVs that were not homozygous by chance. To this end, we randomly picked 100 K stretches of $n$ consecutive SNVs from our WGS/WES call sets and used Hardy-Weinberg's law to calculate the probability that all $n$ SNVs were homozygous. We considered increasing values of $n$ and chose the minimum $n$ for which, on average over the 100 K stretches, the probability of finding $n$ homozygous SNVs was ≤ 0.01.
We found that the minimum $n$ was 50 and 60 for WGS and WES, respectively, and defined each such stretch spanning ≥ 100 Kb as a true RoH. We made use of the founder-tracking labels associated with each of the two founders' haplotypes to discriminate between autozygosity (same label) and non-autozygosity (different labels).
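The Hardy-Weinberg criterion can be sketched as follows; the allele-frequency track here is a random placeholder, so the resulting minimum $n$ comes out smaller than the 50-60 obtained on real call sets, whose frequency spectra are skewed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder allele-frequency track standing in for a real call set.
freqs = rng.uniform(0.05, 0.95, size=1_000_000)

def mean_hom_prob(freqs, n, n_stretches=1000, rng=rng):
    """Average Hardy-Weinberg probability that n consecutive SNVs
    are all homozygous, over randomly placed stretches."""
    starts = rng.integers(0, len(freqs) - n, size=n_stretches)
    probs = np.empty(n_stretches)
    for k, s in enumerate(starts):
        p = freqs[s:s + n]
        probs[k] = np.prod(p ** 2 + (1 - p) ** 2)
    return probs.mean()

n = 10
while mean_hom_prob(freqs, n) > 0.01:
    n += 10
print("minimum n of consecutive homozygous SNVs:", n)
```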
Estimation of inbreeding coefficient F
We estimated the inbreeding coefficient (F) of the simulated 1C, 2C, and 3C index offspring using FSuite [11] with default options, creating 100 random submaps with one marker every 0.5 cM, using SNVs shared between WES and WGS with minor allele frequency ≥ 0.05.
Linkage disequilibrium based SNV pruning
The pruned subset of SNVs was generated using PLINK [28] with the --indep-pairwise option and the following parameter settings: window size in SNPs (50), number of SNPs to shift the window at each step (5), and r² threshold (0.5).
Incorporation of different allele frequency sets into the calculation of RLOD
To calculate RLOD, DIDOH3M2 allows either deriving allele frequencies directly from the batch of samples under analysis or using global allele frequencies pre-calculated by the 1KGP. If a batch is relatively small, frequencies based on few subjects may affect the RLOD calculation. However, users may be interested in using allele frequencies from the sample under study if they believe that global allele frequencies do not properly reflect the samples' population. We therefore calculated RLOD using sets of allele frequencies derived from subsets of 10, 50, and 100 individuals of the 1KGP Phase 1 European population and from the global 1KGP Phase 1 population, and evaluated the performance of RLOD in predicting autozygosity under the different sets.
RoH clustering
RoH clustering was performed with a three-component Gaussian mixture model using the Mclust function from the mclust package (v.3) in R, allowing component magnitudes, means, and variances to be free parameters. RoHs were then partitioned into the three classes, and the boundary sizes between classes A and B and between classes B and C were estimated as the midpoints between the class-specific extremes, where $A^i_{max}$, $B^i_{min}$, $B^i_{max}$, and $C^i_{min}$ denote the minimum and maximum RoH sizes of the three classes for population $i$.
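A Python analogue of this clustering step (the paper used R's mclust; sklearn's GaussianMixture is a stand-in here, and the log-scaling of sizes and the placeholder size distributions are assumptions of the sketch):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder RoH sizes (bp) standing in for one population's call set.
rng = np.random.default_rng(1)
sizes = np.concatenate([rng.lognormal(11.0, 0.4, 500),   # ~ tens of Kb
                        rng.lognormal(13.0, 0.5, 300),   # ~ hundreds of Kb
                        rng.lognormal(15.5, 0.6, 50)])   # ~ Mb scale

# Fit a three-component mixture on log-scaled sizes.
X = np.log(sizes).reshape(-1, 1)
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(X)
labels = gmm.predict(X)

# Map component indices to classes A/B/C in order of increasing mean size.
rank = np.argsort(np.argsort(gmm.means_.ravel()))
classes = np.array(["A", "B", "C"])[rank][labels]
```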
RoH identification
To evaluate the performance of DIDOH3M2 under different parameter settings, we performed several analyses based on synthetic data (see Methods and Supplemental materials). We found that, when only large homozygous segments are of interest, large values of $d_{Norm}$ ($10^5$, $10^6$) and small values of $p_1$ (0.1; $p_2$ must be set to 0.1) should be used. On the other hand, to increase the resolution of the algorithm and detect small RoHs, small $d_{Norm}$ ($10^3$, $10^4$) and larger $p_1$ values (0.2, 0.3) are recommended.
As a further step, to test DIDOH3M2 on real genotype data under different parameter settings, we leveraged the WES and WGS genotype calls of 200 subjects from the 1000 Genomes Project (see Methods and Supplemental materials) and studied the RoHs identified by our method in terms of their cumulative global size and number. We found that, while using higher values of $R_1$ increases both the size and the number of homozygous segments, using smaller values of $R_2$ increases the number but decreases the cumulative size of RoHs (Supplemental Fig. 4).
These results are a direct consequence of the role of the $R_1$ and $R_2$ parameters in our heterogeneous HMM. $R_1$ represents the proportion of heterozygous markers that defines non-homozygous segments, and all segments with a heterozygous proportion smaller than $R_1$ are identified as homozygous. For this reason, the larger $R_1$, the larger the total size and number of homozygous segments identified by our model. On the other hand, $R_2$ represents the proportion of heterozygous markers that our HMM tolerates in a homozygous region. Larger values of $R_2$ allow regions with a higher number of heterozygous markers to be identified as homozygous, while for small values of $R_2$ homozygous regions are called only if they contain a smaller fraction of heterozygous markers.
Hence, decreasing the value of $R_2$ forces the algorithm to split large homozygous regions (with a fraction of heterozygous markers larger than $R_2$) into smaller segments (each with a fraction of heterozygous markers smaller than $R_2$), thus increasing the total number of detected RoHs and decreasing their cumulative size.
By setting the most conservative set of parameters ($R_1 = 1/100$ ...). To make a comparison with existing tools that identify RoHs from VCF data, we applied PLINK [28], BCFtools [23], and VCFtools [10] to the WES and WGS genotype calls of the 200 aforementioned subjects from the 1000 Genomes Project. To allow a comprehensive evaluation of performance, we tested different combinations of parameters for each of the tools (DIDOH3M2, PLINK, BCFtools) that allow the user to tune parameter settings (see Supplemental materials).
To estimate the true cumulative individual RoH length and size, we created a gold-standard dataset from the genotype calls released by the 1KGP consortium for the aforementioned 200 subjects. We took as true RoH every region larger than 100 Kb made up of at least 200 consecutive homozygous single-nucleotide variants (SNVs), across the roughly 38 million and 1.5 million SNV genotypes called by the 1KGP consortium in WGS and WES, respectively (Supplemental materials).
VCFtools was always characterized by substantial over-calling, while the average RoH length/number identified by DIDOH3M2 and PLINK across different parameter settings varied below and above the true value. BCFtools had a contrasting behavior, displaying an excess or a lack of true RoHs in its results for both WGS and WES analyses (Supplemental Fig. 4). RoHs identified by DIDOH3M2 had the lowest fractions of heterozygous SNVs (Fig. 2, Panels b, d and Supplemental Fig. 3), suggesting fewer spurious calls. In particular, by setting $R_1 = 2/100$ (for WGS data) or $R_1 = 4/100$ (for WES data), DIDOH3M2 outperformed the three existing tools, achieving the best trade-off between precision and recall over true RoHs in both WGS and WES (Fig. 2, Panels a, c and Supplemental Fig. 3).
Prediction of autozygosity by RLOD
To estimate RoH probability, for each homozygous segment identified by DIDOH3M2 we then computed a RoH LOD score (RLOD) comparing the probability of the most likely genotype with that of the observed genotype at each of the $N$ homozygous SNVs within the RoH:

$$RLOD = \sum_{i=1}^{N} \log_{10}\frac{\Pr(\hat{g}_i)}{\Pr(g_i)}$$

where $g_i$ is the observed homozygous genotype and $\hat{g}_i$ is the most likely genotype at SNV $i$. $\Pr(g_i)$ and $\Pr(\hat{g}_i)$, the probabilities of the observed and most likely genotypes, respectively, are calculated following Hardy-Weinberg's law. If $A$ and $a$ are the two possible alleles of any SNV $i$ with frequencies $f(A) = p$ and $f(a) = q$, and $p \ge q$, then $\Pr(g_i) = p^2$ when $g_i$ is $AA$ and $\Pr(g_i) = q^2$ when $g_i$ is $aa$. On the other hand, $\Pr(\hat{g}_i) = p^2$ when $p > 0.66$ and $\Pr(\hat{g}_i) = 2pq$ when $p < 0.66$. When $\Pr(g_i) = \Pr(\hat{g}_i)$ at all the $N$ homozygous SNVs within the RoH, $RLOD = 0$. Conversely, when $\Pr(g_i) < \Pr(\hat{g}_i)$ at any of the $N$ SNVs, $RLOD > 0$, and both the $g_i$ likelihoods at each SNV and the number of SNVs with $\Pr(g_i) < \Pr(\hat{g}_i)$ affect RLOD. RLOD therefore inversely reflects the cumulative frequency of the SNV alleles found homozygous within the RoH.
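Under this definition, RLOD can be sketched in a few lines; the genotypes and major-allele frequencies below are illustrative inputs, not data from the study:

```python
import math

# For a homozygous SNV with major-allele frequency p (p >= q = 1 - p):
# Pr(observed) is p^2 for AA or q^2 for aa; the most likely genotype
# has probability p^2 if p > 2/3, else 2pq.
def rlod(genotypes, major_freqs):
    score = 0.0
    for g, p in zip(genotypes, major_freqs):
        q = 1.0 - p
        pr_obs = p * p if g == "AA" else q * q
        pr_best = p * p if p > 2 / 3 else 2 * p * q
        score += math.log10(pr_best / pr_obs)
    return score

# Common homozygous alleles score near 0; rare ones push RLOD up.
print(rlod(["AA", "AA"], [0.95, 0.90]))  # ~0
print(rlod(["aa", "aa"], [0.95, 0.90]))  # >> 0
```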
To evaluate how well RLOD predicts autozygosity, we created synthetic call sets of autosomal WES biallelic SNVs in simulated offspring to consanguineous unions. We constructed 100 pedigrees with genealogy loops formed by one of the unions between 1st, 2nd, and 3rd cousins (1C, 2C, and 3C), with a single offspring ("index offspring") to the consanguineous parents (Supplemental Fig. 5).
During the simulations, a pair of 1KGP phased haplotypes for WES SNVs was assigned to each of the founder subjects of the synthetic pedigrees and allowed to drop along the genealogy. A founder-tracking label was linked to the flowing alleles, so that each allele inherited by the offspring could be traced back to its founder of origin. By checking the pairs of founder-tracking labels linked to the RoHs detected in the index offspring (Supplementary materials), we were able to unambiguously split the RoHs of the subject into autozygous (same label for both alleles) and non-autozygous (different label for each allele) segments.
We calculated the genomic inbreeding coefficient (gF) of all 100 simulated offspring for each 1C, 2C, and 3C genealogy loop with FSuite [11], setting the parameters as specified in the Supplementary materials.
As recognized in the literature [37,20], gF values are dispersed around the mean across pedigrees with identical genealogy loops. According to our simulations, for the 1C, 2C, and 3C genealogy loops the gF values were distributed as follows: mean = 0.0675 and sd = 0.0231 (1C), mean = 0.0245 and sd = 0.0126 (2C), mean = 0.0104 and sd = 0.00749 (3C), with substantial overlap between pedigrees with different genealogy loops. As a result, knowledge of the pedigree is not particularly useful for predicting the inbreeding level of the offspring. To group together simulations with similar inbreeding levels, we therefore calculated the median gF value for each genealogy (0.066, 0.023, and 0.0105 for 1C, 2C, and 3C, respectively) and created four gF ranges (from high to low inbreeding levels: F1: 0.066-1; F2: 0.023-0.066; F3: 0.0105-0.023; F4: 0-0.0105) into which we classified the simulated offspring based on their gF value rather than their pedigree.

(Fig. 2 caption) Panels a and c report the results of the precision-recall analysis for WGS and WES data, respectively. The bar plots of panels b and d report the fraction of heterozygous single-nucleotide variants belonging to all RoHs detected by the four algorithms. The performance of the DIDOH3M2 and PLINK algorithms is reported for the parameter settings that gave the best results in terms of the trade-off between precision and recall. For WGS data (panels a and b), DIDOH3M2 obtained the best results with $R_2 = 1/1000$, $R_1 = 2/100$, $p_1 = 0.1$, $p_2 = 0.1$, $P_{Norm} = 1$, $d_{Norm} = 100000$, while PLINK did so with heterozygote allowance 1, kb threshold 200, and SNP threshold 1/1000. For WES data (panels c and d), DIDOH3M2 obtained the best results with $R_2 = 1/10000$, $R_1 = 4/100$, $p_1 = 0.1$, $p_2 = 0.1$, $P_{Norm} = 1$, $d_{Norm} = 100000$, while PLINK did so with heterozygote allowance 1, kb threshold 100, and SNP threshold 1/1000.
In published population as well as case-control RoH studies [9,13,16,17,20-22], either a minimum size threshold or linkage-disequilibrium (LD)-based SNP pruning is applied to avoid calling short RoHs that are very common or homozygous by chance. We took this into account when performing the DIDOH3M2 analysis. First, in addition to the initial 100 Kb threshold, we introduced two more stringent thresholds representing the rough size limits below which RoHs are likely under LD (500 Kb) and above which they are likely autozygous (1500 Kb) [25]. Second, we performed another DIDOH3M2 analysis on a subset of the original marker map after removing SNVs with LD ≥ 0.5 (Supplementary materials). This resulted in 6 RoH call sets generated by applying any of the 3 size thresholds, alone or in combination with the LD cut-off (100 Kb, 500 Kb, 1500 Kb, 100 Kb + LD, 500 Kb + LD, 1500 Kb + LD). On these we then computed RLOD using the SNV allele frequencies of the 1KGP Phase 1 European population. Subsequently, we simulated WGS data following the same steps as for the WES simulations but using a marker map extended to SNVs outside the coding or near-coding sequences. Finally, to measure the accuracy with which RLOD predicts autozygosity, we calculated precision and recall for each RoH call set as follows: as precision, the fraction of RoH calls by DIDOH3M2 that overlap a true autozygous RoH; as recall, the fraction of true autozygous RoHs called by DIDOH3M2.
We then compared the capability of RLOD to identify autozygosity with that of RoH size, because long RoHs are commonly considered to be truly autozygous regions. We were able to demonstrate that RLOD largely outperforms size in discriminating between autozygosity and non-autozygosity when applied to both WES and WGS data, with a more evident gain in performance as the gF range decreases (Fig. 3).
For any gF range, the best trade-off between precision and recall is obtained by combining RLOD with the most stringent size threshold (1500 Kb) and the LD ≥ 0.5 pruning. In WES data, a progressive regression of the trade-off point is observed from higher to lower gF ranges (Fig. 3a-d), while in WGS RLOD improved performance even more markedly than in WES, without apparent loss of accuracy in the lower gF ranges (Fig. 3e-h).
DIDOH3M2 allows users to use allele frequencies (AFs) retrieved from the 1KGP or custom AFs calculated directly from the genotypes in the VCF file under analysis. Importantly, we did not notice any major change in RLOD performance when using increasing numbers of samples (from 50 upwards) to calculate AFs. As shown in Supplemental Fig. 6, RLOD provided comparable accuracy in identifying autozygosity from both WES and WGS simulated data using 1KGP Phase 1 global or European AFs, as it did using custom AFs calculated from 50 or more samples.
Prioritization of mutation-surrounding RoH by RLOD.
Most approaches for the prioritization of candidate variants for autosomal recessive diseases rely on the size of the surrounding RoH [37,7,30,2]. However, size is not always optimal for predicting which, among the many long RoHs in a patient's genome, is the one containing the causative variant [38], because short RoHs can also happen to be autozygous and surround autosomal recessive genes [25,15,24].
Of the few available alternative strategies that use statistical methods and exploit haplotype frequencies [38,15], none is tailored to NGS data. To compare the capability of RLOD to prioritize the mutation-surrounding RoH (msRoH) with that of RoH size, while performing the simulations described above we forced a specific disease-linked locus to be inherited by the offspring as two copies of an ancestral allele (Supplementary materials). We then ranked the RoHs of the index offspring's WES simulations, per genealogy loop, by both RLOD and size, and evaluated which of the two measures was the most efficient in prioritizing the msRoH as follows: we counted how many times the disease-linked RoH was ranked 1st by either measure, and how many times its rank by one measure was higher than or equal to its rank by the other. We found that the msRoH ranks 1st significantly more often by RLOD than by size, and ranks higher or equal by RLOD than by size, in both WES and WGS (Tables 1 and 2, Fig. 4, and Supplemental Fig. 7).
Overall, these results show that RLOD outperforms size in prioritizing the msRoH in a patient's genome, proving useful as part of the toolkit for the prioritization of candidate variants with recessive effect. The lower the gF range (Fig. 4), as well as the degree of parental consanguinity (Supplemental Fig. 7), the more significant these differences (Tables 1 and 2).
To replicate the conclusions of the simulation analysis in the WES of 15 real patients, we used data in which the homozygous disease-causing variant was found to be surrounded by an RoH identified by DIDOH3M2. This data set included 13 unrelated patients whose parents were closely inbred (1st/2nd cousins), and 2 for whom parental consanguinity was not reported, all having undergone WES for research [27,14,24] or diagnostics.
WES data were processed as in the simulations to identify the patients' RoHs, and the ranking positions of the mutation-surrounding RoH by RLOD and by size were retrieved from the DIDOH3M2 results. RLOD conferred on the msRoH a rank higher than or equal to that by size in 9 out of the 15 WES (60%). The overall distance between the ranking positions by size and by RLOD across the 15 WES, calculated as the sum of the distances between the ranking positions (totaling 54), is in favor of the latter.
As shown in Fig. 5, when the ranking position by RLOD is higher than that by size, the distance between them is larger (mean = 8.25 ranking positions) than in the opposite situation (mean = 2 ranking positions), demonstrating that RLOD outdistances size in the majority of cases, while in instances where size confers on the msRoH a higher rank than RLOD, the two measures achieve comparable ranking positions. Notably, the higher the distance between the ranking positions by size and RLOD, the shorter the size of the mutation-surrounding RoH (r = −0.69) (Fig. 5).
This underscores the capability of RLOD to pick out small msRoHs among the many regions of similar or larger size found throughout the genome, reflecting the simulation analysis showing that RLOD outperformed size especially for low gF ranges. RLOD was able to outdistance size in prioritizing small msRoHs in the WES of patients with autosomal recessive disorders, as for the RoHs surrounding the disease-causing MYO15A and ATAD3A variants, both smaller than 2 Mb [14,24].
Characterization of RoHs across worldwide populations by DIDOH3M2
To show the potential of our computational approach to explore genomic patterns of homozygosity in human populations, we used DIDOH3M2 to analyze the genotypes of 600 individuals from six populations sequenced by the 1000 Genomes Project (100 YRI, Yoruba from Ibadan, Nigeria; 100 BEB, Bengali from Bangladesh; 100 CEU, Utah residents with Northern and Western European ancestry from the CEPH collection; 100 JPT, Japanese in Tokyo, Japan; 100 CLM, Colombians from Medellin, Colombia; 100 FIN, Finnish in Finland).
We performed the DIDOH3M2 analysis with $p_2 = 0.1$, $p_1 = 0.1$, $d_{Norm} = 10^5$, $R_1 = 4/100$, and $R_2 = 1/1000$ and, following the model proposed by [25], analyzed the RoH sizes of each population separately as a mixture of three normal distributions representing three distinct RoH classes: short RoHs of tens of Kb (class A), medium RoHs of hundreds of Kb (class B), and large RoHs of up to tens of Mb (class C) (Fig. 6, Panel a). Class A reflects homozygosity for ancient haplotypes that contribute to local LD patterns, class B results from background relatedness owing to limited population size, while class C results from recent parental relatedness.
The mean size of each class and the boundaries between classes vary across the six populations, in particular for class B and C RoHs (Fig. 6, Panel b). For each class, we observed the smallest mean size in YRI and the largest in the JPT and CLM populations. As a further step, we calculated the overall RoH length per individual across the three classes, studied its distribution within each population, and compared the populations with each other.
As shown in Fig. 6, Panels c-f, the total lengths of RoHs (Fig. 6, Panel c) generally increase with the increasing distance from Africa of the geographical location of the population; an isolated population such as the Finnish is comparable with the European CEU population, and an admixed population such as the Colombian shows greater variability. Class A (Fig. 6, Panel f) and class B (Fig. 6, Panel e) RoHs generally follow this pattern, with decreasing variability in the admixed Colombian population from class B to class A RoHs. The total lengths of class C RoHs (Fig. 6, Panel d) are not characterized by the same stepwise increase; instead, they are higher in the Finnish and more variable in the Colombian population.
Finally, for each individual, we performed pairwise comparisons between the total lengths of class A, class B, and class C RoHs. In agreement with the results reported by [25] (Fig. 6, Panels g-i), the total lengths of class A and B RoHs are highly correlated (R = 0.91), while the correlation with class C is much smaller (C-A R = 0.34, C-B R = 0.37), suggesting that class A and B RoHs, as expected, may have arisen via a different process than class C.

(Table 1 caption) Statistical tests for assessing the significance of the disease-linked RoH ranking position in simulations for different consanguinity loops. The McNemar test is used to assess significance for how many times the disease-linked RoH ranked 1st by RLOD rather than by size. The Wilcoxon test is used to assess significance for how many times the disease-linked RoH ranks higher or equal by RLOD than by size.

(Table 2 caption) Statistical tests for assessing the significance of the disease-linked RoH ranking position in simulations for different gF ranges. The McNemar test is used to assess significance for how many times the disease-linked RoH ranked 1st by RLOD rather than by size. The Wilcoxon test is used to assess significance for how many times the disease-linked RoH ranks higher or equal by RLOD than by size.
AUDACITY tool
The DIDOH3M2 algorithm and the RLOD score calculation described and tested in the previous sections have been packaged in the AUDACITY software tool, a collection of Perl, R, and Fortran code. A schematic representation of its workflow is reported in Fig. 1. AUDACITY takes as input the genotype data of multiple samples in VCF file format, selects all the biallelic SNVs, applies the DIDOH3M2 algorithm, calculates the RLOD, and outputs a .bed file containing the coordinates (Chr, Start, End), the RoH length, the number of SNVs, and the RLOD score for each detected RoH.
In its default setting, the AUDACITY tool calculates the RLOD of each RoH by exploiting the allele frequencies of all the biallelic SNVs discovered by the 1000 Genomes Project Phase 3 dataset. Alternatively, the tool allows calculating custom allele frequencies from the individuals in the input VCF file.
On a desktop computer with a 2.5 GHz CPU and 8 GB of RAM, it takes two hours to analyze a VCF file with the genotype calls of ten WGS experiments. AUDACITY is publicly available at https://sourceforge.net/projects/audacity-tool/.
Discussion
In this study, we created and tested a novel method, AUDACITY, for the simultaneous identification and prioritization of RoHs from WES and WGS data. Prioritization was done by computing RLOD, a LOD score that inversely reflects the cumulative frequency of the homozygous alleles forming the RoH diplotype, and then by ranking the RoHs according to their RLOD.
The idea of using allele frequencies to assess the probability of an RoH being autozygous was first postulated by Broman and Weber [6], who proposed a sliding-window method identifying RoHs while coupling detection and inference about autozygosity. When this method was proposed, genome-wide scans were performed with hundreds of microsatellite markers, far fewer than the millions of SNVs that can be interrogated in WES and WGS data.
We decided to uncouple the processes of RoH detection and autozygosity prediction, first running the new HMM algorithm to single out the regions of homozygosity, and only afterwards applying the RLOD calculation to the identified regions. In this way, we avoided the computationally intensive task of iterating RLOD calculations over millions of markers in overlapping windows along the genome, resulting in faster computation at no expense of performance.
By taking a multi-sample VCF as its input file format, DIDOH3M2 overcomes the major limitation of our former tool H3M2 [19], which was able to deal only with the huge, and thus less manageable for most end-users, single-sample BAM files. Indeed, VCF is by far the most accessible NGS data format for laboratories worldwide, and since it can be populated with the genotypes of many samples, it is suitable for DIDOH3M2 analysis of diagnostic series and case-control cohorts as well as for population studies.
Through extensive performance comparison, we demonstrated how DIDOH3M2 outperforms popular existing tools in the accuracy of detecting RoHs from both WES and WGS. In particular, like the previous H3M2, our algorithm proves increasingly more accurate than the other tools as RoH size decreases. While research has long focused on long RoHs unveiling the presence of homozygous recessive alleles in patients from consanguineous families, the increasing availability of WGS will make it possible to assess the effect of short RoHs on complex disease risk and on the demographic history of human populations [8]. In this perspective, it is important to classify RoHs based on the allelic composition of their diplotype rather than on their size, because the former has the potential to shed light on the RoH origin and relevance with respect to genomic variables such as recombination rate, positive selection, and the recessive effect of alleles modulating human traits and diseases. RLOD proved able to reliably predict autozygosity from WES and WGS data, and we demonstrated by simulation analysis and application to real WES data that this property can be used to prioritize msRoHs implicated in recessive disorders more efficiently than size.
As already noted by other authors [25], using fixed cut-offs for RoH length such as 500 Kb, 1 Mb, or 2 Mb [18,17,32], based on the assumption that RoHs below these thresholds are chance homozygosity mainly governed by LD patterns and therefore not biologically relevant, risks overlooking true autozygosity. In view of an increasing adoption of WGS by clinical and research laboratories, RLOD will be useful in prioritizing RoHs whose disease-causing variants may not be easily tackled, e.g., because they are noncoding and therefore difficult to single out from the wealth of homozygous candidate variants dispersed throughout the genome. As an anticipation of this, RLOD was helpful in identifying a disease-causing synonymous variant in NARS2 inside the msRoH. As synonymous changes are usually assigned low priority, since they are not predicted to alter the protein product, this variant was initially discarded by our workflow for candidate variant filtering. However, it later emerged as of interest when we incorporated RLOD into our variant classification algorithm, since the NARS2-surrounding RoH was ranked 7th by RLOD instead of 12th by size, eventually leading to a diagnosis when the patient's phenotype was found to match literature reports of other patients with mutations in this gene [31,33,35].
As shown in the precision-recall plots of Fig. 3, RLOD is the major factor improving autozygosity prediction from both WES and WGS data. Size and LD cut-offs play a role especially in WES and gain relevance for lower gF ranges. Using these cut-offs is safe in population studies where the end point is an estimate of genomic autozygosity in terms of RoH number or length. We would, however, recommend caution especially for gene-mapping and mutation-detection purposes, because their use may lead to losing the msRoH for the sake of autozygosity prediction accuracy.
Since users may perform DIDOH3M2 analysis on samples of different sizes, we evaluated the extent to which specifying allele frequencies calculated from smaller or larger sample sizes affects RLOD. The use of allele frequencies derived from increasing sample sizes indicates that DIDOH3M2 analysis is reliable also when carried out on small cohorts or populations. This is particularly important for analyses involving samples from populations that are not referenced in large variant databases, for which retrieving allele frequencies from such resources may dangerously alter the RLOD calculation.
To evaluate the capability of DIDOH3M2 to prioritize the msRoH in patients affected by autosomal recessive disorders, we simulated patients' genomes as offspring to 1C, 2C, and 3C consanguineous parents. Given the well-known dispersion around the mean of gF values across pedigrees with identical genealogy loops, we introduced the median gF value of each genealogy as a threshold to separate groups of pedigrees based on actual genomic inbreeding rather than on the pedigrees themselves. This expedient was instrumental in extracting deeper information on how RLOD and size perform when analyzing data from patients with different inbreeding levels. As shown in our simulations, RLOD outperforms size in prioritizing the msRoH for any genealogy loop, whether at the 1st or at a higher ranking position. Looking at gF ranges, conversely, the performance of the two approaches is substantially indistinguishable for the highest interval (0.066-1), suggesting that RLOD does not provide a valuable advantage over size in patients with very high inbreeding levels, irrespective of their reported consanguinity. Indeed, these individuals bear multiple RoHs extending up to tens of Mb, which usually receive the highest RLOD scores of the genome. In such a situation, which is often the rule in highly consanguineous communities, RLOD and size are therefore equivalent. Otherwise, our findings clearly show that RLOD improves msRoH prioritization for any other gF range and overall; therefore, it could successfully substitute for size in the gene-mapping process.
Medium to small RoHs are thought to contribute to complex diseases and quantitative traits, and their role is increasingly investigated. As RLOD was capable of prioritizing msRoHs belonging to these classes, it may prove useful also in prioritizing the multiple loci that, in a patient's genome, accumulate detrimental homozygous alleles contributing to disease additively.
As for the RoH analysis in the six 1KGP populations, we obtained results consistent with previous studies based on SNP arrays [25], demonstrating the suitability of AUDACITY for reliable analysis of RoH distributions across human populations. Our previous tool H3M2 [19] has been used to profile RoHs in large collections of samples from populations known for their high inbreeding degrees [29]. We believe that AUDACITY, with its improved workflow for straightforward processing of multi-sample VCF data, will be of greater help in carrying out such large-scale projects.
Conclusion
In conclusion, AUDACITY is a comprehensive approach for the analysis of RoHs from NGS data, either WES or WGS, tailored for applications in medical as well as population genomics. It proved to outperform existing tools in the accuracy of detecting RoHs, and RLOD, the autozygosity prediction score it incorporates, is suitable for prioritizing regions relevant to traits and diseases. Its ability to handle data in VCF format responds to the emerging need for reliable and rapid RoH characterization in ever larger WGS data sets, which are becoming increasingly available to researchers aiming to elucidate the effect of RoHs in conferring risk for complex diseases and in shaping the genomes of human populations.
Data sharing statement
All WES and WGS data used for algorithm validation, simulation and real-data analyses, and the population study are publicly available as part of the 1000 Genomes Project (ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/). The WES of the 13 patients were not provided with consent for data sharing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Empirical Investigation on Compressive Strength of Geopolymer and Conventional Concretes by Nondestructive Method
Department of Civil Engineering, Vignan's Foundation for Science, Technology and Research, Vadlamudi, Guntur 522212, Andhra Pradesh, India; Department of Civil Engineering, Koneru Lakshmaiah Educational Foundation, Vaddeswaram, Guntur 522212, Andhra Pradesh, India; Department of Chemical Engineering, College of Biological and Chemical Engineering, Addis Ababa Science and Technology University, Addis Ababa, Ethiopia
Introduction
Nondestructive testing (NDT) is an approach for reviewing, testing, or analyzing elements or components of concrete and concrete members. The major purpose of NDT is to evaluate the integrity and quality of concrete members without causing any damage to their functionality and integrity [1]. Acoustic tap testing was one of the NDT methods used as early as the nineteenth century to detect cracks in railroad wheels [2]. NDT is mainly used to test the structural components of a structure to ensure safety and serviceability. Certain factors, like resolution in both vertical and lateral directions and the signal-to-noise ratio, affect NDT [3].
Distinct NDT methods are used in civil engineering. NDT surface hardness methods are used to identify a material's strength characteristics. The indentation method and the rebound hammer method are the two groups used to assess concrete surface hardness [4]. The rebound or Schmidt hammer is another piece of nondestructive testing equipment, used to determine the strength and elastic properties of concrete or rock. The rebound number is measured with a spring-loaded mass: impacting the hammer against a smooth concrete or rock surface at right angles yields the rebound number.
In the recent past, the use of fly ash as a cement-replacing material has gained significant importance for reducing pollution [5], and it has become one of the ingredients of concrete. Measurement of concrete strength through UPV was introduced in the USA in the mid-1940s. UPV is one of the NDT methods used to test the quality, homogeneity, and compressive strength of concrete through a regression equation.
The UPV test consists of the transmission of mechanically generated pulses through electro-acoustic transducers. The applied pulse generates longitudinal waves, whose velocity can be determined by the transducers. The wave velocity determined by UPV is correlated to the elastic modulus, strength, and so on.
Rebound or Schmidt Hammer. The rebound hammer (RH) is an NDT instrument for determining the strength and elastic properties of concrete or rock: a spring-loaded mass is impacted against a smooth concrete or rock surface at right angles, and the resulting rebound number is recorded.
UPV values are influenced by the age of the concrete, the water-cement ratio, and the properties and type of aggregate and cement [6]. In addition to these factors, reinforcement embedded in the path of the pulse also has a significant effect on UPV values [7]. As various NDT methods are used by civil and structural engineers in industry, an ample body of literature related to NDT exists. The major intent of this paper is to obtain the UPV and rebound values of conventional and geopolymer concretes and to develop the relation between compressive strength and UPV values.
Materials and Methods
2.1. Cement. 53 Grade OPC (specific gravity = 3.10) is utilized in this experimental study. Per IS 8112:1989 [8], the chemical composition of the cement is represented in Table 1.

Fly Ash (FA). FA is a coal combustion product consisting of fine particles collected from boilers with flue gases. The FA was collected from the thermal power plant at Kondapalli, Krishna district, Andhra Pradesh, India. The composition of the fly ash is presented in Table 1.
Fine Aggregate. River Sand. River sand is a naturally obtained material from river banks and is widely used in normal construction works. The fineness modulus of the river sand is 2.75, conforming to Zone III according to IS 383:1970 [9].
Robo Sand. Robo sand is a by-product of crushed aggregates, also known as artificial sand. The fineness modulus of the robo sand is 3.62, and according to IS 383:1970 [9] it conforms to Zone III. Robo sand properties are represented in Table 2.
Coarse Aggregate. Coarse aggregate was collected from a quarry site. 20 mm and 10 mm aggregates conforming to Zone III as per IS 10262:2009 [10] are used in this experiment, in proportions of 60% of 20 mm and 40% of 10 mm. Table 3 represents the properties of the coarse aggregates.
Metakaolin. The dehydroxylated form of the clay mineral kaolinite is termed metakaolin; it imparts high strength to concrete [11]. Disordered and ordered kaolinite are converted into the dehydroxylated form at temperatures of 530-570 °C and 570-630 °C, respectively. A light pinkish metakaolin with a specific gravity of 2.45 was employed here.
Alkaline Activators.
For the preparation of the geopolymer concrete, the chemicals sodium hydroxide and sodium silicate were used. Sodium Hydroxide. Sodium hydroxide (NaOH) is generally available as flakes and pellets; flakes were used in this experiment.
Sodium Silicate. It is generally available in gel form and is known as water glass or liquid glass.
Solution Preparation.
The NaOH solution was prepared 24-48 hours in advance: the NaOH flakes were dissolved thoroughly in water. Experimental property studies of solutions can provide significant thermodynamic information under various temperature and pressure circumstances. Oxygenated compounds like alkalis and alcohols have become very important additives in mixed binders for liquids and solids [12-16]. In order to prepare one litre of 12 M NaOH, 480 grams of NaOH was dissolved in water at room temperature, approximately 28 ± 2 °C. The molarity relation can be written as

$$M = \frac{m}{M_w \times V}$$

where $M$ is the molarity (mol/L), $m$ is the mass of NaOH (g), $M_w$ is its molar mass (40 g/mol), and $V$ is the solution volume (L); 480 g in 1 L thus gives 12 M.
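As a quick check of the preparation above, a short Python sketch of the mass-molarity relation (assuming a molar mass of 40 g/mol for NaOH):

```python
def naoh_mass(molarity, volume_l, molar_mass=40.0):
    """Grams of NaOH needed for the given molarity and volume in litres."""
    return molarity * molar_mass * volume_l

print(naoh_mass(12, 1.0))  # 480.0 g, matching the 12 M preparation above
```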
2.10. Curing. Ambient curing is the method adopted for the geopolymer concrete. For the ordinary concrete, curing is done by placing the cubes in a water bath for 7, 14, and 28 days. After completion of the curing period, the specimens are tested.
Testing
Ultrasonic Pulse Velocity Test Procedure. The basic principle of the UPV test is measuring the pulse of longitudinal vibrations passing through the concrete. The travel time of the UPV wave through the concrete is measured; the wave velocity depends on the geometry and elastic properties of the material. BS 4408 Part 5, ASTM C 597-71, and BIS 13311 (Part 1):1992 [18-20] provide recommendations for the use of this method. The compressive wave velocity for homogeneous concrete is evaluated using

$$V = \sqrt{\frac{E_d (1 - \mu)}{\rho (1 + \mu)(1 - 2\mu)}}$$

where $V$ is the pulse velocity, $E_d$ is the dynamic modulus of elasticity, $\rho$ is the density, and $\mu$ is the dynamic Poisson's ratio. Elastic stiffness and mechanical strength are the two factors influencing UPV, and variations in mix proportions influence the pulse velocity. To assess compressive strength, the quality of the concrete and calibration charts are to be established.
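A small Python sketch evaluating this relation; the input values are illustrative of ordinary concrete, not measurements from this study:

```python
import math

def pulse_velocity(E_d, rho, mu):
    """Compressive (P-wave) velocity in m/s.

    E_d: dynamic modulus of elasticity in Pa,
    rho: density in kg/m^3, mu: dynamic Poisson's ratio.
    """
    return math.sqrt(E_d * (1 - mu) / (rho * (1 + mu) * (1 - 2 * mu)))

# Illustrative values (not from this study):
print(pulse_velocity(E_d=30e9, rho=2400, mu=0.2))  # ~3727 m/s
```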
According to BIS 13311 (Part 1):1992 [15,18,19], the quality of concrete can be determined from the velocity of the ultrasonic pulse waves. The wave velocity is determined first; based on the velocity of the wave travelling through the concrete specimen, the quality of the concrete can be identified, as represented in Table 6.
Rebound Hammer or Schmidt Hammer Test Procedure [9]. The rebound hammer with its plunger is impacted against the concrete surface. Different kinds of rebound hammers are available depending on the application; the impact energy may vary from 0.07 to 3 kg-m. The number obtained from the rebound index is calibrated to compute the compressive strength.
The concrete surface on which the rebound test is conducted should be smooth, clean, and dry; sandpaper or a stone can be used to rub down rough surfaces. The hammer should be impacted at least 20 mm away from edges and shape discontinuities, and the concrete surface should be perpendicular to the rebound hammer. For each concrete surface, a number of observations are taken, and their average gives the strength of the concrete. The test procedure for determining rebound values follows ASTM C 805-85 [21] and BIS 13311 Part 2 [22].
According to BIS 13311 (Part 2): 1992 [22], the quality of concrete can be determined from the rebound number. The rebound number is first determined by impacting the rebound hammer. Based on this number, the quality of the concrete can be identified, as represented in Table 7.
Results and Discussion
Evaluation tests for determining the concrete strength were conducted with various cement supplements after completion of the curing periods. The UPV testing machine and the rebound hammer were the equipment used for compressive strength evaluation [15,20,22]. Table 8 contains the UPV values of OPC after curing (7, 14, and 28 days). A graph was plotted with the concrete mix on the abscissa and the UPV values as ordinates; the resulting graph is shown as Figure 1. Figure 1 plots the UPV values of OPC for the different concrete mixes. It was noticed that the UPV values increase with increasing curing period, while the UPV values of Mix 3 decrease at all ages (7, 14, and 28 days). Table 9 presents the UPV values for GPC for all three mix proportions after 7, 14, and 28 days of curing.
For Ultrasonic Pulse Velocity Test.
A graph was plotted with the GPC mix on the abscissa and the UPV values as ordinates; the resulting graph is shown as Figure 2.
Figure 2 plots the UPV values of GPC for the different mix proportions. From this figure, it was observed that the UPV values increase with increasing curing period, while the UPV values of Mix 3 decrease at all ages (7, 14, and 28 days). These values were taken after the curing periods (7, 14, and 28 days). Figure 3 plots the rebound numbers of OPC for the different mix proportions. It was noticed that the rebound values decrease with increasing curing period. The rebound value of Mix 3 is greater when compared with the other mix proportions.
Table 6: Grading of concrete using pulse velocity [8].
Table 11 presents the rebound numbers for the different mix proportions of GPC. These values were taken after the curing periods (7, 14, and 28 days). Figure 4 plots the rebound numbers of GPC for the different mix proportions. It was noticed that, as the curing period increases, the rebound values also increase. The rebound value of Mix 3 is greater when compared with the other mix proportions at various ages (7, 14, and 28 days). Along with the above results, relations between the compressive strength and the ultrasonic pulse velocity values were developed. There is no single specific relation between concrete compressive strength and UPV. From the above relations, equations were determined with respect to the mix proportions [16,20,23,24], where y is the concrete compressive strength and x is the velocity value of the concrete.
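Since the fitted equations themselves were lost from this copy of the text, the sketch below (Python/NumPy) only illustrates how such strength-velocity relations are typically derived; the data points are hypothetical, not the study's measurements.

import numpy as np

upv = np.array([3.2, 3.6, 3.9, 4.2, 4.5])       # velocity x, km/s (hypothetical)
fck = np.array([18.0, 24.0, 29.0, 36.0, 44.0])  # strength y, MPa (hypothetical)

# Exponential model y = a * exp(b * x), fitted on log-transformed strength;
# a linear model y = m*x + c would use np.polyfit(upv, fck, 1) instead.
b, log_a = np.polyfit(upv, np.log(fck), 1)
a = np.exp(log_a)
print(f"y = {a:.2f} * exp({b:.2f} * x)")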
Conclusion
In this experimental investigation, equations were determined for comparing the compressive strength and the UPV values obtained. The conclusions are as follows:
(i) The UPV and rebound values increase with an increase in the curing period.
(ii) For Mix 2 of OPC concrete, the UPV values increase by 3.8% and 6.42% at 7 to 14 days and 14 to 28 days of curing, respectively. For the same mix proportion, the rebound value increases by 9.1% and 5.5% at 7-14 and 14-28 days of curing, respectively.
(iii) For Mix 2 of GPC, the UPV values increase by 42.46% and 32.31% at 7-14 days and 14-28 days of curing, respectively. For the same mix proportion, the rebound value increases by 3.57% and 6.89% at 7-14 and 14-28 days of curing, respectively.
(iv) With a reduction of the fly ash content in GPC, the passing time of the longitudinal waves is shorter.
(v) As outlined under Future Scope, further work will develop equations for additional mix proportions of both conventional and geopolymer concrete, which will help in finding the compressive strength of the respective mixes.
Future Scope
Further investigation of this study will develop equations for various mix proportions of both conventional and geopolymer concrete; these will be helpful for finding the compressive strength of the respective mix proportions. Many industrial by-product combinations can be used for the production of geopolymers. Structural parameters can be investigated using geopolymers. For utilizing geopolymer concrete in large- and small-scale constructions, experimental investigations can be conducted on structural elements. Life cycle analysis (LCA) of concrete can also be used to assess the durability of geopolymer concrete.
Data Availability
The data used to support the findings of this study are included in the article and are available from the corresponding author upon request.
Cortical Oscillatory Signatures Reveal the Prerequisites for Tinnitus Perception: A Comparison of Subjects With Sudden Sensorineural Hearing Loss With and Without Tinnitus
Just as the human brain works in a Bayesian manner to minimize uncertainty regarding external stimuli, a deafferented brain due to hearing loss attempts to obtain or “fill in” the missing auditory information, resulting in auditory phantom percepts (i.e., tinnitus). Among various types of hearing loss, sudden sensorineural hearing loss (SSNHL) has been extensively reported to be associated with tinnitus. However, the reason that tinnitus develops selectively in some patients with SSNHL remains elusive, which led us to hypothesize that patients with SSNHL with tinnitus (SSNHL-T) and those without tinnitus (SSNHL-NT) may exhibit different cortical activity patterns. In the current study, we compared resting-state quantitative electroencephalography findings between 13 SSNHL-T and 13 SSNHL-NT subjects strictly matched for demographic characteristics and hearing thresholds. By performing whole-brain source localization analysis complemented by functional connectivity analysis, we aimed to determine the as-yet-unidentified cortical oscillatory signatures that may reveal potential prerequisites for the perception of tinnitus in patients with SSNHL. Compared with the SSNHL-NT group, the SSNHL-T group showed significantly higher cortical activity in Bayesian inferential network areas such as the frontopolar cortex, orbitofrontal cortex (OFC), and pregenual anterior cingulate cortex (pgACC) for the beta 3 and gamma frequency bands. This suggests that tinnitus develops in a brain with sudden auditory deafferentation only if the Bayesian inferential network updates the missing auditory information and the pgACC-based top-down gatekeeper system is actively involved. Additionally, significantly increased connectivity between the OFC and precuneus for the gamma frequency band was observed in the SSNHL-T group, further suggesting that tinnitus derived from Bayesian inference may be linked to the default mode network so that tinnitus is regarded as normal. Taken together, our preliminary results suggest a possible mechanism for the selective development of tinnitus in patients with SSNHL. Also, these areas could serve as the potential targets of neuromodulatory approaches to preventing the development or prolonged perception of tinnitus in subjects with SSNHL.
INTRODUCTION
Non-pulsatile tinnitus is a common otologic symptom characterized by conscious auditory perception in the absence of an external stimulus. This is often called a "phantom sound" because there is no corresponding genuine physical source of the sound (Vanneste et al., 2018b; Lee et al., 2019; Han et al., 2020). Although the exact mechanism of tinnitus has yet to be elucidated, peripheral auditory deafferentation has been suggested as the most important factor in increased spontaneous neuronal firing in the central auditory system and cortical maladaptive plasticity between auditory and non-auditory brain regions, leading to the development of tinnitus (Eggermont and Roberts, 2012; Elgoyhen et al., 2015). Hearing loss has been strongly implicated in tinnitus, as demonstrated by a relationship between tinnitus pitch and maximum hearing loss frequency, which suggests that tinnitus is a fill-in phenomenon (Schecklmann et al., 2012). Recently, growing evidence has shown that the brain works in a Bayesian manner to minimize perceptual uncertainty regarding external stimuli. If the brain is deprived of auditory input, it attempts to "fill in" the missing auditory information from auditory memory, leading to the perception of auditory phantoms (i.e., tinnitus) (Friston et al., 2014; Eggermont and Kral, 2016; Lee et al., 2017, 2020a). Specifically, according to the theoretical multiphase compensation model, the brain attempts to overcome missing auditory information input, generating predictions via increasing topographically restricted tones, widening receptive fields, rewiring dendrites and axons, and retrieving auditory memories, resulting in brain reorganization (De Ridder et al., 2014b).
Sudden sensorineural hearing loss (SSNHL), a complex and challenging emergency in the otology field, is typically defined as a sensorineural hearing loss of more than 30 dB across three consecutive frequencies in a pure-tone audiogram occurring within a 72-h period. Importantly, tinnitus was reportedly accompanied by SSNHL in 66-93% of cases (Ding et al., 2018). Similar to ordinary progressive sensorineural hearing loss, the Bayesian brain model may explain how sudden auditory deprivation (i.e., SSNHL) elicits auditory phantom percepts, namely by increasing the need to compensate for prediction errors by upregulating neural firing in specific tonotopic regions and retrieving extant memories from the parahippocampal gyrus (Lee et al., 2017), depending on the amount of hearing loss (Vanneste and De Ridder, 2016). However, why not all patients with SSNHL experience tinnitus remains unexplained. That is, although tinnitus persists in some patients with SSNHL even after treatment, other patients do not experience tinnitus, or tinnitus is perceived temporarily but resolves spontaneously afterward. This, in turn, led us to hypothesize that tinnitus may develop in subjects with SSNHL only if the requisite cortical changes occur secondary to SSNHL.
Zhang et al. demonstrated altered white matter integrity in the auditory neural pathway of patients with SSNHL, which may be associated with the severity of tinnitus (Zhang et al., 2020). Furthermore, a recent study by Cai et al. showed more specific inhibition of neural activity and functional connectivity in patients with SSNHL and tinnitus compared with healthy controls (Cai et al., 2019), shedding further light on the putative association between SSNHL and tinnitus from the perspective of brain activity. However, neural substrates for selective development of tinnitus have thus far not been investigated among patients with SSNHL.
To test this hypothesis, we investigated neural substrates accounting for the development of tinnitus exclusively in patients with SSNHL by comparing resting-state quantitative electroencephalography (rs-qEEG) findings between SSNHL patients with and others without tinnitus (SSNHL-T and SSNHL-NT). Using whole-brain source localization analysis complemented by functional connectivity analysis, we aimed to determine the as-yet-unidentified cortical oscillatory signatures that could reveal the prerequisites for tinnitus development and to discuss the possible mechanism of the selective development of tinnitus in patients with SSNHL. Although this study includes a relatively small number of patients, which may have weakened the clinical implications of the results and statistical power, the results presented herein seem a more significant undertaking than we initially envisioned. Indeed, there is currently no consensus on the neurobiological markers for selective development of tinnitus in patients with SSNHL. Overall, our study stands out in this precision medicine era for incorporating neuroimaging in a tinnitus study to establish a future guide for the treatment of tinnitus in patients with SSNHL that incorporates neuroimaging as the "new normal."
Participants
We performed a retrospective review of the medical records of patients with unilateral idiopathic SSNHL who visited the outpatient clinic at Seoul National University Bundang Hospital (SNUBH) between January 2014 and March 2020. For the SSNHL-NT group, we were able to identify only 18 patients who met the criteria for unilateral SSNHL with no complaint of tinnitus. Two of the 18 were excluded due to an insufficient follow-up period (i.e., <2 months). Of the remaining 16 patients, 3 were disqualified due to delayed emergence of tinnitus or significant hearing improvement during the follow-up period of at least 2 months after the onset of SSNHL. Ultimately, 13 patients were enrolled in the SSNHL-NT group. No patients in this group were diagnosed with Meniere's disease, vestibular schwannoma, or psychiatric/neurological disorders.
As outlined in Table 1, 51 SSNHL-T patients whose Tinnitus Handicap Inventory (THI) score was ≤36 (grade 1 or 2) were initially selected from the SNUBH database (1,196 rs-qEEG-available subjects); the THI criterion was applied to minimize potential bias caused by distress-induced changes in cortical activity. Subsequently, 13 subjects matched for sex, laterality, and audiogram (i.e., >70 dB HL in the affected ear and <40 dB HL in the unaffected ear) with the SSNHL-NT subjects, but blinded to rs-qEEG findings, were finally enrolled in the SSNHL-T group. None of the subjects in the SSNHL-T group had a history of objective tinnitus or etiologies such as Meniere's disease, head injuries, brain surgery, or neurological disorders.
Audiological and Psychoacoustic Evaluation
The hearing thresholds for seven different octave frequencies (0.25, 0.5, 1, 2, 3, 4, and 8 kHz) were evaluated using pure-tone audiometry in a soundproof booth. The mean hearing threshold was calculated using the average of the hearing thresholds at 0.5, 1, 2, and 4 kHz (Han et al., 2019;Shim et al., 2019;Bae et al., 2020;Huh et al., 2020;Lee et al., 2020b;Song et al., 2020). At each subject's initial visit, we obtained a structured history of the characteristics of tinnitus including its presence, laterality, and psychoacoustic nature (pure-tone or narrow-band noise).
EEG Recording
We performed qEEG data acquisition and pre-processing procedures according to a previously reported protocol (Kim et al., 2016; Song et al., 2017; Han et al., 2018; Vanneste et al., 2018b; Lee et al., 2019). Prior to EEG recording, we instructed the enrolled patients not to drink alcohol for 24 h and to avoid caffeine on the day of recording to exclude alcohol-induced changes in the EEG signal (Korucuoglu et al., 2016) and caffeine-induced reductions in alpha and beta power (Siepmann and Kirch, 2002). EEGs were recorded with the patient seated upright with the eyes closed for 5 min using a tin-electrode cap (ElectroCap, Eaton, OH, USA), a Mitsar amplifier (EEG-201; Mitsar, St. Petersburg, Russia), and WinEEG software, version 2.84.44 (Mitsar), in a fully lit room insulated from sound and stray electric fields. The EEG data were obtained using WinEEG software (ver. 2.84.44; Mitsar) (available at http://www.mitsarmedical.com). The impedances of all electrodes were maintained below 5 kΩ. Data were obtained at a sampling rate of 1,024 Hz and filtered using a high-pass filter with a cutoff of 0.15 Hz and a low-pass filter with a cutoff of 200 Hz. After initial data acquisition, the raw data were resampled at 128 Hz and band-pass filtered using a fast Fourier transform filter with a Hanning window at 2-44 Hz. After importing the data into Eureka! software (Sherlin and Congedo, 2005), all episodic artifacts were evaluated manually and removed from the EEG stream. We eliminated additional artifacts using independent component analysis with ICoN software (http://sites.google.com/site/marcocongedo/software/nica) (Koprivova et al., 2011; White et al., 2012). All subjects' vigilance levels, including slowing of the alpha rhythm or emergence of sleep spindles, were meticulously monitored. No patients included in this study exhibited any abnormal EEG patterns during the measurements.
Source Localization Analysis
Standardized low-resolution brain electromagnetic tomography (sLORETA) was employed to estimate the scalp-recorded electrical activity in each of the eight frequency bands (i.e., intracerebral sources). The sLORETA software includes a toolbox for the functional localization of standardized current densities based on electrophysiological and neuroanatomical constraints (Pascual-Marqui, 2002). We identified the cortical sources that generated the activities recorded by the scalp electrodes in each of the following eight frequency bands: delta (2-3.5 Hz), theta (4-7.5 Hz), alpha 1 (8-10 Hz), alpha 2 (10-12 Hz), beta 1 (13-18 Hz), beta 2 (18.5-21 Hz), beta 3 (21.5-30 Hz), and gamma (30.5-44 Hz). sLORETA computes neuronal electrical activity as current density (A/m²) without assuming a predefined number of active sources. The sLORETA solution space consists of 6,239 voxels (voxel size: 5 × 5 × 5 mm) and is restricted to the cortical gray matter and hippocampus, as defined by the digitized Montreal Neurological Institute (MNI) 152 template (Fuchs et al., 2002). Scalp electrode coordinates on the MNI brain are derived from the International 5% System (Jurcak et al., 2007). A total of 5,000 random permutations, with correction for multiple testing (i.e., for tests performed for all electrodes and/or voxels and for all time samples and/or different frequencies), were carried out; thus, further correction for multiple comparisons was unnecessary. The locations of significant clusters were confirmed using a LORETA-KEY toolbox, such as the Anatomy toolbox, and the Talairach and Tournoux atlas (Talairach and Tournoux, 1988).
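For orientation, band-limited power at the sensor level can be computed as in the sketch below (Python/SciPy); this is not the sLORETA source pipeline used in the study, only an illustration of the eight band definitions.

import numpy as np
from scipy.signal import welch

BANDS = {"delta": (2, 3.5), "theta": (4, 7.5), "alpha1": (8, 10),
         "alpha2": (10, 12), "beta1": (13, 18), "beta2": (18.5, 21),
         "beta3": (21.5, 30), "gamma": (30.5, 44)}

def band_powers(signal, fs=128.0):
    # Absolute power per band, integrated from Welch's PSD estimate.
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        out[name] = np.trapz(psd[mask], freqs[mask])
    return out

print(band_powers(np.random.randn(128 * 60)))  # one minute of toy data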
Functional Connectivity Analysis
As for the functional connectivity analysis, a total of 16 regions of interest, defined by their respective Brodmann areas (BAs) and known to relate to tinnitus according to previously published literature (Vanneste et al., 2018b), were selected as possible nodes. These included the bilateral superior parietal lobule (BA7), the bilateral frontopolar cortices (BA10), the bilateral orbitofrontal cortices (BA11), the bilateral posterior cingulate cortices (BA27), the bilateral pregenual cortices (BA32), the bilateral parahippocampi (BA36), and the bilateral primary auditory cortices (BA41 and BA42).
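Enumerating the node pairs evaluated in the connectivity analysis is straightforward, as the sketch below shows; the labels are taken from the ROI list above, and the pairing is exhaustive.

from itertools import combinations

ROIS = [f"{side} BA{ba}" for ba in (7, 10, 11, 27, 32, 36, 41, 42)
        for side in ("L", "R")]
pairs = list(combinations(ROIS, 2))
print(len(ROIS), len(pairs))  # 16 ROIs -> 120 pairs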
Statistical Analyses
Statistical non-parametric mapping (SnPM) was adopted for permutation tests for source localization and functional connectivity. To identify between-group differences in resting-state cortical oscillatory activities, sLORETA built-in voxel-wise randomization tests (5,000 permutations) were used to perform nonparametric statistical analyses of functional images with a threshold of P < 0.05. We also employed a between-groups t-statistic with a threshold of P < 0.05. Correction for multiple comparisons in SnPM using random permutations has been shown to yield similar results to those obtained from a statistical parametric mapping approach using a general linear model with multiple-comparison corrections (Nichols and Holmes, 2002). For lagged linear connectivity differences, we assessed between-group differences for each contrast using a paired t-test with a threshold of P < 0.05. We also corrected for multiple comparisons using sLORETA's built-in voxel-wise randomization tests for all of the voxels included in the 16 regions of interest for the connectivity analysis (5,000 permutations). Although the between-groups t-statistic was used for source localization and the paired t-test was used for connectivity analysis, these are nonparametric analyses based on 5,000 permutations. All analyses were done and illustrated using the R statistical package (version 3.3.2, R Foundation for Statistical Computing, Vienna, Austria). All statistical tests were two-tailed, and P < 0.05 was considered significant.
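The permutation logic can be illustrated for a single measure as in the sketch below (Python/NumPy); it mirrors the randomization idea of SnPM but omits the voxel-wise max-statistic correction, so it is a toy version, not the analysis actually run.

import numpy as np

def permutation_test(a, b, n_perm=5000, seed=0):
    # Two-sided nonparametric p-value for a difference in group means.
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# e.g., a source-localized gamma measure for 13 vs. 13 subjects (toy data):
print(permutation_test(np.random.normal(1.2, 0.3, 13),
                       np.random.normal(1.0, 0.3, 13)))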
Demographics and Clinical Characteristics: SSNHL-T vs. SSNHL-NT
The demographic and clinical characteristics of the two groups are summarized in Figure 1. The laterality of the hearing loss and sex distribution were matched between the SSNHL-NT and SSNHL-T groups. No significant differences in age at onset of SSNHL or duration of hearing loss (from the onset of SSNHL to the timepoint at rs-qEEG measurement) were observed between the two groups. Furthermore, hearing thresholds across all frequencies for the affected-and non-affected ears did not differ between the two groups. The median THI score of the SSNHL-T group was 12 (range, 4-36).
Connectivity Analyses: SSNHL-T vs. SSNHL-NT
Compared with the SSNHL-NT group, the SSNHL-T group showed significantly increased functional connectivity between the left OFC and the right precuneus (BA7) for the gamma frequency band (P < 0.05) (Figure 3). For the other seven frequency bands, there were no significant between-group differences in functional connectivity among ROIs.
DISCUSSION
This is the first study to explore cortical activity and connectivity differences between SSNHL subjects with and those without tinnitus and to attempt to reveal the cortical oscillatory signatures for selective development of tinnitus among patients with SSNHL. In this study, the SSNHL-T group had abnormally increased activity in the FPC, OFC, and pgACC for the beta 3 and gamma frequency bands compared with the SSNHL-NT group. These findings suggest that auditory phantom percepts may develop when the brain experiences sudden decreased peripheral auditory input as the Bayesian inferential network updates the missing auditory information with the involvement of the pgACC-based top-down gatekeeper system. Furthermore, the lagged linear connectivity between the left OFC and the right precuneus was significantly increased for the gamma frequency band in the SSNHL-T group compared with the SSNHL-NT group, indicating that tinnitus deriving from Bayesian updating seems to involve the default mode network (DMN); thus, tinnitus seemed to be perceived as normal by the SSNHL-T group.
FIGURE 1 | Comparison of hearing thresholds across all frequencies between patients with sudden sensorineural hearing loss with and without tinnitus (SSNHL-T and SSNHL-NT, respectively). Air conduction pure-tone audiometry (PTA) revealed nearly matched hearing thresholds across all frequencies between the two groups in both the affected and the non-affected ear.
FIGURE 2 | Source-localized cortical power comparison in sudden sensorineural hearing loss with and without tinnitus (SSNHL-T and SSNHL-NT, respectively) groups using resting-state quantitative electroencephalography data. The SSNHL-T group showed increased activity in the frontopolar cortex, orbitofrontal cortex, and pregenual anterior cingulate cortex for the gamma and beta 3 frequency bands compared with the SSNHL-NT group.
The Bayesian Inferential Network Updates Missing Auditory Information via Bottom-Up Deafferentation
The Bayesian brain model, an extension of a predictive brain model, has been suggested as an explanation for the development of tinnitus. According to this model, tinnitus is a response to peripheral auditory deafferentation that aims to reduce perceptual uncertainty (Morcom and Friston, 2012; De Ridder et al., 2014b). In other words, deafferentation-induced auditory phantom percepts, namely tinnitus, are preceded by peripheral auditory input-based memory, and tinnitus develops when prediction error occurs due to peripheral hearing loss (De Ridder et al., 2014a; Lee et al., 2017). In the same context, we have recently reported that approximately 70% of patients with unilateral SSNHL experience ipsilesional tinnitus (Lee et al., 2017), indicating that missing auditory information (i.e., prediction error) may stimulate neural circuit interactions between lower-order (peripheral auditory input) and higher-order (prediction-driving process of auditory perception) auditory systems to reduce uncertainty in a bottom-up fashion due to sudden hearing deterioration. In this regard, significantly increased source-localized activity in the OFC and FPC in the SSNHL-T group in the current study may reflect the role of active Bayesian inferential prefrontal cortical processes (Donoso et al., 2014) in tinnitus generation in the context of a sudden decrease in peripheral auditory input. The prefrontal cortices are considered to employ probabilistic inferential processes (i.e., Bayesian inferences), enabling the optimization of behavioral adaptations in uncertain situations based on available information (Koechlin, 2016; Parr et al., 2018). In particular, polar to lateral prefrontal cortices such as the OFC and FPC are involved in making probabilistic inferences and exploring new strategies formed from long-term memory in uncertain environments (Donoso et al., 2014). Therefore, increased source-localized activity in the prefrontal cortices (i.e., OFC and FPC) in the SSNHL-T group may reflect the Bayesian inferential processes of updating sensory prediction and thereby adopting new strategies (phantom auditory perception) based on stored auditory memory in the context of suddenly decreased peripheral auditory input. Of note, we have recently revealed significantly increased information inflow in cortical areas associated with Bayesian inference in progressive sensorineural hearing loss patients with tinnitus as compared to those without tinnitus (unpublished data), in accordance with the current findings. The OFC has also been suggested to be responsible for the emotional processing of sounds (Blood et al., 1999) and is connected to other limbic areas involved in emotion processing (Beauregard, 2007). In an integrative model of tinnitus (De Ridder et al., 2014c), once the aberrant activity that causes tinnitus percepts is deemed salient, the autonomic nervous system, the limbic system, and their interaction could be further involved in distributing tinnitus-related distress signals across the brain. Indeed, the OFC has been reported to play a pivotal role in the top-down modulation of autonomic and peripheral physiological responses accompanying emotional experiences (Ohira et al., 2006), supporting the idea that neural activity in the OFC might be linked to biopsychosocial processes of disease (Hänsel and von Känel, 2008). Furthermore, neural activity in the OFC extending to the FPC in the beta 1 and beta 2 bands has been shown to differ between sexes during emotional processing and emotional regulation.
Additionally, tinnitus perception and tinnitus-related distress are closely associated with these brain areas (Schlee et al., 2009).
Moreover, tinnitus loudness and distress are correlated with the audiological handicap associated with unilateral SSNHL (Chiossoine-Kerdel et al., 2000). Although we attempted to minimize distress-related cortical changes by recruiting SSNHL-T subjects with only mild distress, distress cannot be completely eliminated in tinnitus. In this regard, the activity changes in the FPC and OFC may also reflect the emotional weight attached to aberrant auditory perception (i.e., tinnitus) in patients with SSNHL.
A Top-Down Gatekeeper System Is Activated to Cancel Internally Generated Auditory Phantoms
Recent studies have suggested that auditory phantom percepts can be associated with bottom-up (ascending) deafferentation as well as with a dysfunctional top-down (descending) noise-canceling mechanism (De Ridder et al., 2014b; Song et al., 2015; Vanneste et al., 2019). This top-down mechanism is a putative central gatekeeper that functions as an "auditory gate," evaluating the relevance and affective meaning of sensory stimuli and modulating information transmission via descending inhibitory pathways to the thalamic reticular nucleus (Hullfish et al., 2019; Vanneste et al., 2019). In previous pain studies, the degree of improvement after spinal cord stimulation depended on activation of the pgACC (Moens et al., 2012), which is a part of the descending pain inhibitory pathway (Fields, 2004; Kong et al., 2010), the somatosensory analog of the noise-canceling system. Additionally, Vanneste et al. demonstrated that altered neural activity of the pgACC likely increases tinnitus loudness in patients who are Met carriers (i.e., the COMT Val158Met polymorphism), probably due to reduced canceling-out of irrelevant auditory input. Furthermore, increased activity in the parahippocampus and the pgACC for the theta and gamma frequency bands, as well as decreased activity in the auditory cortex, is found exclusively in tinnitus patients with hearing loss compared with those who have hearing loss but without tinnitus (Vanneste et al., 2018a). While the activation of a top-down noise-canceling mechanism works predominantly in the alpha frequency band during the resting state, dysfunctional noise canceling resulting in tinnitus is hypothesized to be linked to the theta and gamma frequency bands. These findings are consistent with our data showing increased source-localized activity in the pgACC for the gamma frequency band in the SSNHL-T group. That is, the pgACC, which normally functions as a central gatekeeper, is activated to abate behaviorally irrelevant phantom auditory signals that stem from Bayesian updating via bottom-up deafferentation.
Overall, our data suggest that auditory phantom percepts may develop in a brain with suddenly decreased peripheral auditory input when the Bayesian inferential network actively updates the missing auditory information. Furthermore, as an attempt to minimize this auditory phantom, the pgACC-based top-down gatekeeper system may be activated in brains with sudden auditory deafferentation.
Tinnitus Percepts May Be Considered the Norm When Bayesian Updating-Based Tinnitus Is Actively Linked to the Default Mode Network
As shown in Figure 3, a significant increase in connectivity between the OFC (BA11) and the precuneus (BA7) was observed in the SSNHL-T group compared with the SSNHL-NT group. The posterior cingulate cortex and precuneus are considered critical nodes of the brain's DMN, a specific group of brain regions activated when people are occupied with an internally focused task (i.e., the task-negative mode). Therefore, our data may indicate that patients with SSNHL perceive tinnitus when Bayesian updating-based tinnitus is actively linked to the DMN. The DMN may regard the salient but irrelevant auditory information (i.e., tinnitus) arising from the Bayesian updating as normal, ultimately leading to continuous tinnitus perception. We have recently shown that localized activation of brain areas involved in the DMN may act as a negative predictor of improvement in tinnitus after partial auditory reafferentation by the use of hearing aids or cochlear implants, as tinnitus perception may already seem normal due to activation of DMN-related brain areas (Song et al., 2013; Han et al., 2020). Collectively, these findings reinforce the existing notion that the brain regions involved in generating tinnitus may become integrated into the DMN in patients with tinnitus (De Ridder et al., 2011). Based on the literature as well as the current findings, our results justify the evaluation of localized activity and functional connectivity using functional neuroimaging in patients with SSNHL. The rationale behind such an effort lies in the expectation that altered brain activity and connectivity, including that of the DMN, may predict the prognosis with regard to the chronification of tinnitus or treatment responses in subjects with tinnitus.
Limitations and Future Perspectives
Taken together, the results of the present study merit special attention in that they are grossly in line with the recently proposed Bayesian brain model for the generation of tinnitus and offer a key to unraveling the conundrum of the selective development of tinnitus in patients with SSNHL. Our study also raises an important issue that may stimulate further research incorporating customized neuromodulation approaches based on the status of neural substrates responsible for the perception of tinnitus in patients with SSNHL.
Nevertheless, there are some limitations that should be addressed in future studies. First, the results presented here are limited by the relatively small number of subjects in both groups, mainly due to the difficulty of recruiting SSNHL patients without tinnitus. Future follow-up studies in a larger number of subjects should be performed to replicate the current results. Additionally, the current study was designed as a cross-sectional evaluation, which, along with the retrospective study design, may weaken the clinical implications of our results. These limitations call for future prospective and longitudinal follow-up studies to determine the origin of these differences in cortical activity and connectivity between SSNHL subjects with and those without tinnitus. In particular, recruiting patients who show immediate tinnitus following sudden auditory deprivation that improves thereafter, as negative plasticity compensates for itself, would be important for eliciting more significant findings. Second, confounding related to distress-induced cortical activity changes was minimized by including SSNHL subjects with tinnitus who had low THI scores; however, such confounding was not completely eliminated because tinnitus with no distress is almost nonexistent. A future prospective study including SSNHL with "very minimally" distressing tinnitus should be conducted to confirm the reproducibility of the current findings. Third, this study did not consider the possibility of combined hyperacusis in the SSNHL-T group. A recent study using rs-qEEG showed that increased "circuit-breaker" activity was associated with hyperacusis-related neural substrates (Han et al., 2018), which suggests that cortical activity may be biased if tinnitus subjects with combined hyperacusis are included. Future studies recruiting an SSNHL-T group without combined hyperacusis should be performed to address this limitation.
CONCLUSION
Our preliminary study explored cortical activity and connectivity differences between SSNHL subjects with and without tinnitus, shedding light on the cortical oscillatory signatures for selective development of tinnitus among patients with SSNHL. These areas could serve as potential targets of neuromodulatory approaches to prevent the development or prolonged perception of tinnitus in subjects with SSNHL.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary materials, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The study was approved by the Institutional Review Board of the Clinical Research Institute at Seoul National University Bundang Hospital, and was conducted in accordance with the Declaration of Helsinki (IRB-B-2006-621-105). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
S-YL and J-JS led the analysis and interpretation of the results and drafted the first manuscript. BC, J-WK, and DD conceived the investigation and revised the manuscript for important intellectual content. All authors contributed to all aspects of the investigation, including methodological design, data collection and analysis, interpretation of the results, revision of the manuscript for important intellectual content, and approved the final version of the manuscript and agree to be accountable for all aspects of the work.
ACKNOWLEDGMENTS
The English in this document has been checked by at least two professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/8QiQBI.
Regulatory mechanisms in cell-mediated immune responses. IV. Expression of a receptor for mixed lymphocyte reaction suppressor factor on activated T lymphocytes.
Suppression of the mixed lymphocyte reaction (MLR) by a soluble factor produced by alloantigen-activated spleen cells requires genetic homology between the factor-producing cells and responder cells in MLR. The ability of lymphocytes used as MLR responder cells to adsorb MLR suppressor factor was tested to investigate the expression of a receptor structure for suppressor molecules. Normal spleen or thymus cells had no effect on suppressor activity. Concanavalin A (Con A)-activated thymocytes, however, effectively removed suppressor activity, suggesting that the receptor is expressed only after activation and is not present or not functional on resting cells. Significantly neither phytohemagglutinin- nor lipopolysaccharide-activated lymphoid cells absorbed the factor. Furthermore, only Con A-activated thymocytes demonstrating genetic homology with the cell producing suppressor factor for H-2 regions to the right of I-E were effective absorbants. Alloantigen-stimulated spleen cells syngeneic to the suppressor cell also removed suppressor activity. These data support an hypothesis that subsequent to stimulation in MLR, T lymphocytes express a receptor, either through synthesis or alteration of an existing molecular structure, which then provides the appropriate site for interaction with suppressor molecules.
Proliferative responses in mixed lymphocyte reactions (MLR) are suppressed by a soluble factor released by alloantigen-activated splenic T cells (1, 2). We have established that such suppressive T-cell factors derived from one strain only suppress responses of strains histocompatible for regions of the H-2 complex between I-E and D (1, 2). H-2-dissimilar MLR responder cells are unaffected by active suppressor factors. The data suggest that a receptor specific for that factor and required for an active suppressive interaction is expressed by genetically homologous cells. H-2-dissimilar responder cells would lack the appropriate genetically determined receptor and upon exposure to suppressor factors would be unaffected. A similar "acceptor" site for primed helper T-cell factor has been postulated on B cells (3). Genes mapped in the I region control functional expression of the acceptor, as well as interacting factor molecules. Thus it may be through association of such molecules, either released or on cell membranes, with B-cell surface structures that appropriate cell interactions in antibody synthesis are achieved.
Accordingly we have tested the ability of cells, used as MLR responder cells, to adsorb suppressor factor activity as an indication of a receptor for MLR suppressor molecules. The results suggest that such a receptor structure is dynamically expressed by a subpopulation of T lymphocytes only after a triggering antigenic or mitogenic signal. The receptor is not present or perhaps not functional on resting cells. H-2 control of the receptor molecule is suggested by the inability of activated H-2-dissimilar T cells to interact with and remove suppressor activity.
Materials and Methods
Mice. BALB/c and DBA/2 mice were obtained from the Department of Cell Biology, Baylor College of Medicine, Houston, Texas. C57BL/6 mice were obtained from TIMCO Breeding Laboratory Inc., Houston, Texas. C3H/He and A/J mice were purchased from The Jackson Laboratory, Bar Harbor, Maine. Experiments were performed with 6- to 14-wk-old male animals.
Concanavalin A. Twice-recrystallized concanavalin A (Con A) (ICN Pharmaceuticals Inc., Life Sciences Group, Cleveland, Ohio) was dissolved in Hanks' balanced salt solution (HBSS) (Microbiological Associates, Bethesda, Md.) at 1 mg/ml, stored at 4°C for not more than 1 wk, and diluted to the desired concentration immediately before use.
MLR. MLR were prepared as previously described (4), with the exception of the culture medium employed. Briefly, responder and stimulator cell populations were cultured in equal numbers, 1 × 10⁶ cells of each, in 0.2-ml cultures in supplemented Eagle's minimal essential medium (MEM) (5, 6) with 10% fetal calf serum (FCS) (Reheis Chemical Co., Kankakee, Ill.) and 50 µg/ml gentamicin (Schering Corp., Kenilworth, N. J.). Stimulator cells (designated throughout by subscript m) were treated before addition to MLR with mitomycin C (Sigma Chemical Co., St. Louis, Mo.). MLR cultures were incubated in an atmosphere of 10% CO2, 7% O2, and 83% N2 at 37°C. DNA synthesis in MLR was assayed by adding 1.0 µCi of tritiated thymidine (sp act 2.0 Ci/mmol; New England Nuclear, Boston, Mass.) to cultures for the final 18 h of a 72-h incubation period. Exceptions to this protocol are subsequently detailed.
Data from separate experiments are expressed as mean counts per minute of four to six replicate cultures with the standard error of the mean. Net counts per minute (E-C) were calculated by subtracting counts per minute of cultures with syngeneic stimulating cells (C) from counts per minute of cultures with allogeneic stimulating cells (E). E-C from grouped replicate experiments represent mean E-C from three to five experiments. Percent MLR response was calculated according to the following formula: percent MLR response = [(E-C) in the presence of test supernate / (E-C) in the presence of control supernate] × 100 (see the short calculation sketch below).
Preparation of Suppressor and Control Supernates. Suppressor supernates were produced as previously described (1). Briefly, normal mice were injected into hind foot pads with 2 × 10⁷ allogeneic spleen cells. 4 days later, alloantigen-activated spleen cells were co-cultured in supplemented Eagle's MEM with 2% FCS with equal numbers of mitomycin C-treated allogeneic spleen cells of the strain used for in vivo sensitization. Supernates were harvested 24 h later. Control supernates were similarly prepared from co-cultures of normal spleen cells with equal numbers of mitomycin C-treated syngeneic cells.
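A short calculation sketch of these quantities (Python) follows; since the printed formula was lost from this copy of the text, the percent-response expression here is a reconstruction from the E-C definitions above and should be read as an assumption.

def net_cpm(e_cpm, c_cpm):
    # Net counts per minute: allogeneic (E) minus syngeneic (C) cultures.
    return e_cpm - c_cpm

def percent_mlr_response(e_test, c_test, e_control, c_control):
    # Net cpm with test supernate over net cpm with control supernate, x100.
    return 100.0 * net_cpm(e_test, c_test) / net_cpm(e_control, c_control)

print(percent_mlr_response(42000, 2000, 80000, 2000))  # -> about 51.3%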
Preparation of Adsorbing Cells. Cellular adsorbants were prepared from fresh normal thymocytes and spleen cells, and from thymocytes or spleen cells activated in vitro with mitogen or allogeneic cells. Purified phytohemagglutinin (PHA) was obtained from Burroughs Wellcome & Co., Triangle Park, N. C.; lipopolysaccharide W (LPS) from Escherichia coli 0127:B8 was purchased from Difco Laboratories, Detroit, Mich. Mitogen-stimulated cells were prepared by incubating spleen or thymus cells with Con A (3 µg/ml), PHA (1 µg/ml), or LPS (100 µg/ml) at 10⁷ cells/ml in supplemented MEM containing 5% FCS under Mishell-Dutton conditions (5) for 48 h.
Alloantigen-stimulated adsorbing cells were prepared by co-culture of normal spleen cells and allogeneic or syngeneic mitomycin C-treated spleen cells at 10⁷ cells/ml final concentration of each population under the conditions described above. Normal unstimulated thymocytes or spleen cells were prepared as single-cell suspensions and washed extensively. At the time of adsorption, cultured cells were harvested and washed four times in HBSS or, in the case of Con A-activated cells, with 0.15 M methyl-α-D-mannoside in HBSS.
Adsorption of Supernates. Suppressor and control supernates were incubated with 2.5-3.0 × 10⁸ packed prepared adsorbing cells/ml fluid at 4°C for 30 min with frequent mixing. Thereafter the cells were removed by centrifugation.
Results
Removal of Suppressor Factor Activity by Activated Thymocytes. Since normal splenic lymphocytes are used as responder cells for MLR, these were used in initial attempts to adsorb suppressor activity (Fig. 1). As previously described (1), the suppressor factor strongly suppressed MLR responses in a dose-dependent fashion. Incubation of suppressor factor with normal BALB/c spleen cells or thymocytes before addition to MLR had no effect on its suppressive activity. Since indirect evidence exists that T-cell-mediated suppression affects primarily the proliferative phase of the response rather than blocking initial antigen recognition (1, 7), the possibility was then investigated that the receptor for suppressor factor is expressed only after antigenic triggering. As a model, thymocytes activated by Con A were tested for their ability to remove or inactivate suppressor activity.
In contrast to the lack of effect of normal thymocytes, suppressor activity was significantly reduced by exposure to Con A-activated thymocytes. Inhibition of MLR occurred only at the highest concentration of activated thymocyte-adsorbed suppressor factor. It is important to note that control factor, similarly incubated with Con A-activated thymocytes, had no effect, enhancing or otherwise, on MLR responses.
Adsorbing Capacity of Various Activated Lymphocyte Populations. The subpopulation of cells which is able to interact with suppressor factor was characterized by studying the adsorbing capacity of various lymphocyte populations activated by several mitogens (Fig. 2). While thymocytes stimulated with Con A effectively removed suppressor activity, thymocytes stimulated by another T-cell mitogen, PHA, showed no adsorbing capacity. Surprisingly, Con A-activated spleen cells were also ineffective adsorbants. In addition, an activated B-cell population derived from LPS-stimulated spleen cells did not affect MLR suppressor activity. All of the described adsorbing cell preparations possessed 77-94% blast cell forms. Also tested for suppressor factor adsorption was a neoplastic cell line of the same H-2 haplotype as both the cells producing suppressor factor and the Con A-activated thymocytes which adsorbed factor activity. Incubation of factor with P815 mastocytoma cells resulted in no reduction of suppressor activity; in fact, slight enhancement of suppression was observed.
Genetic Restriction of Suppressor Factor Adsorption. As described previously, factor which suppressed syngeneic MLR responder cells had no effect on cells which did not possess the appropriate H-2-region homology (1, 2). Consequently, it was of interest to test H-2-incompatible Con A-activated thymocytes with regard to their ability to interact with suppressor molecules. Suppressor factor produced by alloantigen-activated BALB/c (H-2ᵈ) spleen cells was incubated with normal or Con A-activated thymocytes of various H-2 haplotypes and tested in MLR with BALB/c responder cells (Fig. 3).
Normal thymocytes of all strains utilized showed little ability to remove suppressor activity. Again, activated BALB/c thymocytes effectively removed suppressor activity, as did H-2-identical DBA/2 (H-2ᵈ) Con A-activated thymocytes. Adsorbing cell mixtures were prepared and combined only at the time of adsorption. BALB/c spleen cells cultured with syngeneic cells showed no adsorbing effect. In contrast, BALB/c cells stimulated by culture with C57BL/6 cells significantly removed suppressor activity. The same cell pair combined only at the time of factor adsorption was inactive. Finally, C3H/He (H-2ᵏ) spleen cells also stimulated by C57BL/6 cells did not reduce BALB/c (H-2ᵈ) suppressor activity. The C57BL/6 target cell in the various adsorbing mixtures was not a primary participant, as demonstrated by the absence of suppressor activity adsorption in groups 6 and 7.
Effect of Suppressor Factor Adsorption on Subsequently Cultured MLR Responder Cells. Although suppressor activity was clearly unaffected by exposure to normal thymocytes or spleen cells, it nevertheless was possible that normal cells functionally bind suppressor molecules and that the strength of the remaining suppressor activity masks a relatively slight reduction of suppressor molecule concentration. Thus MLR responder cells, either alone or in combination with syngeneic or allogeneic stimulator cells, were preincubated with control or suppressor factors under adsorbing conditions, washed, and cultured in MLR (Fig. 6). MLR cultures were assayed at 72, 96, and 120 h after culture initiation. One set of cultures was prepared in the usual fashion with control and suppressor factors present throughout the entire culture period to serve as an assay of suppressor activity in the supernates utilized. Preincubation (0 h adsorption) of responder cells alone (not shown) or in combination with stimulator cells had no significant effect on proliferative responses of these cells. Incubation of responder cells at 0 h with suppressor or control factors at 37°C rather than 4°C similarly had no effect (data not shown). Since it appeared that normal cells did not functionally bind suppressor factor, it was then of interest to determine the point in MLR culture at which such binding might be identified. At various times after culture initiation, MLR were harvested, exposed to suppressor and control factors under adsorbing conditions, washed, and returned to culture. MLR cultures exposed to suppressor factor for 40 min 4 h after culture initiation showed significantly (P < 0.005) reduced proliferation at all assay periods, in contrast to cultures incubated with either control factor or medium. Cells similarly treated after 2 h in MLR showed equivocal results (not shown). Exposure of MLR cultures to suppressor factor for 40 min at 24 h produced inhibition which was quantitatively similar but delayed in comparison to that expressed after treatment at 4 h.
Discussion
Suppression of MLR requires genetic homology between alloantigen-activated suppressor T cells, from which a suppressive factor is derived, and the MLR responder cell (1, 2). Therefore we have postulated that a receptor specific for suppressor molecules is expressed by appropriately homologous responder cells and is lacking on H-2-dissimilar cells. We have attempted to identify such a receptor through adsorption or inactivation of suppressor factor activity by exposure to various normal and activated lymphocyte populations. The present results suggest that only after activation by mitogens or alloantigen are requisite receptors expressed by a subpopulation(s) of T lymphocytes which allow interaction with MLR suppressor molecules. Restriction of factor-target interaction only under conditions of genetic identities in the H-2 complex to the right of I-E is consistent with H-2 control of receptor display. The structure allowing interaction of responding cells with suppressor molecules is not present or perhaps not functional on resting, potentially alloantigen-reactive lymphocytes. Repeated experiments confirmed the observation of minimal or no effect on suppressor activity after exposure to normal freshly prepared or cultured, unstimulated thymocytes or spleen cells. Furthermore, under modified conditions of adsorption with unstimulated cells, using three- to fourfold greater concentrations of adsorbing cells or various temperatures of incubation, suppressor adsorption did not occur (unpublished observations). Interaction with suppressor molecules, identified by loss of activity after exposure to target populations, occurred only when target cells had been first stimulated by alloantigen or mitogen.
Although a variety of mitogens were tested as stimulating agents with different lymphoid populations, only Con A-activated thymocytes were active adsorbants. It was important to determine if residual membrane-bound Con A was directly involved in suppressor factor inactivation, since this and other lectins are known to bind major histocompatibility complex gene products (8). This possibility was considered unlikely, however, since other cellular adsorbants prepared with Con A, either spleen cells or thymocytes histoincompatible to the factor-producing strain, did not affect suppressor activity. In addition, histocompatible lymphocytes activated by allogeneic cells showed analogous suppressor factor inactivation.
It was surprising that Con A-activated spleen cells did not adsorb factor activity, since MLR responder cells are prepared from spleen cells and are affected by suppressor factors. However, Con A-activated peripheral T cells have been reported to have a significantly lower density of certain cell surface antigens per cell (9) than Con A-activated thymocytes (10). A similar quantitative difference in display of the structure relevant to interaction with suppressor factor may also be identified in this system.
The mitogen studies suggest the subpopulation of immunocompetent cells which serves as the suppressor factor target. In MLR between heterogeneous populations of responder and stimulator cells, B cells may participate in a secondary fashion in response to signals generated by T-cell factors (11, 12). In previous studies (1, 2) using unfractionated responder and stimulator cell preparations, suppression of responding B cells, as well as T cells, was possible. However, the LPS-stimulated, B-cell-enriched adsorbing population was completely ineffective in suppressor factor adsorption, thus suggesting that B cells are not the primary target of suppressor factor activity. In contrast, suppressor factor does interact with T cells, more specifically a particular subset of T cells. Thymocytes stimulated by the T-cell mitogen PHA, perhaps a more mature subset of thymocytes (13), showed no adsorptive capacity. Thus the ability to express structures capable of interacting with suppressor molecules appears to reside in that subpopulation of T lymphocytes which is characterized by Con A responsiveness. In the context of this demonstration of T-cell heterogeneity, it is interesting that PHA responsiveness and proliferation in MLR are also properties of distinct T-cell subsets (14).
Studies with alloantigen-stimulated adsorbing cells are pertinent to investigation of suppressor interaction with responding cells in MLR. Again, stimulation of histocompatible cells, in this instance through alloantigen presentation, was required before inhibition of suppressor activity could be observed. Furthermore, these studies demonstrated that loss of suppressor activity subsequent to exposure to certain activated cell preparations reflected functional binding of T-cell suppressor molecules to receptor structures on responding cells. Thus, MLR responder cells incubated with suppressor factor before culture initiation were not suppressed. However, after a short period of culture with allogeneic stimulating cells in MLR, responding cells become susceptible to the inhibitory effects of MLR suppressor factor. Nonsynchronous events of responder cell sensitization and perhaps of periodic display of the suppressor receptor during the cell cycle may contribute to the moderate degree of suppression observed after a single brief exposure to suppressor factor.
In addition to an activating event, genetic compatibility between the suppressor factor-producing cell and the target cell was required for successful interaction. Thus, suppressor factor produced by alloantigen-activated BALB/c spleen cells (H-2d) was adsorbed not only by Con A-activated BALB/c thymocytes but also by H-2-identical Con A-activated DBA/2 thymocytes. In contrast, the same factor retained its full suppressive activity after exposure to Con A-activated C3H/He thymocytes (H-2k). Similar results were obtained when adsorbing cells were activated with alloantigen rather than mitogen.
Suppressor activity was also adsorbed from BALB/c factor exposed to activated A strain thymocytes (H-2a), which share the I-C, S, and D regions with BALB/c. Failure of C3H/He cells to remove suppressor activity did not reflect inability to interact with suppressor molecules, since those target cells could adsorb suppressor activity of a factor produced by genetically identical H-2k spleen cells. Thus identity for the regions to the right of I-E in the H-2 complex was sufficient for interaction with and removal of suppressor molecules. Consequently, these data indicate that the receptor structure for MLR suppressor molecules is controlled by a gene(s) in the right-hand regions of the H-2 complex, probably within the I region. These observations are consistent with our previous demonstration of I-C-subregion control of suppressor cell-responder cell interaction in MLR (2).
The studies presented here suggest that after alloantigenic or mitogenic stimulus, T lymphocytes express a receptor, either through de novo synthesis or alteration of an existing structure, which then provides the appropriate site for interaction with suppressor molecules. An alternative but less likely mechanism of receptor display might be through passive acquisition by the activated T cell of molecules liberated by other cells. T cells activated in MLR bind both K- and I-region alloantigenic products of stimulator cells (15), as well as immunoglobulin (15, 16). However, unless it is postulated that the adsorbing capacities of mitogen- and alloantigen-activated cells derive coincidentally from two entirely different mechanisms, the results of adsorption by Con A-activated thymocytes would be difficult to reconcile with the notion of receptor molecules passively acquired from a stimulator population. Similarly, genetic restrictions on factor adsorption by different alloantigen-activated adsorbing cells using the same stimulator cell strain are inconsistent with this postulate.
Both alloantigen and Con A activation of T lymphocytes induce or enhance the expression by these cells of Fc receptors (17-20) and Ia alloantigens (10, 20). Lack of B-cell adsorption, as well as genetic restriction of suppressor-target interaction, suggest that Fc receptors are not a primary component of the suppressor target structure. Present observations are, however, consistent with the possibility that Ia molecules may be part of the receptor. Ia antigens are not expressed on PHA-stimulated lymphocytes (21) nor on P815 mastocytoma cells (22), and these populations do not adsorb suppressor molecules. Since Ia specificities are identified on normal (23) and mitogen-stimulated (9, 10, 21) B cells, it would be necessary to suggest a T-cell-restricted Ia expression, such as has been demonstrated for certain stimulator T cells in MLR (24). I-region gene control of other acceptor structures critical to regulatory T-cell interactions has been demonstrated (3, 25). Although MLR responders have been described as functionally Ia negative by the criterion of anti-Ia serum-mediated cytotoxicity (26), it is possible that Ia is present but not defined under these conditions. Alternatively, MLR responder T cells may require a triggering signal before they exhibit full expression of Ia specificities.
Determination of the functional character of the subset of T cells which bear the MLR suppressor receptor will be of great interest. Although the issue of the suppressor target is controversial, the work of Taniguchi and co-workers (27) indicates that it is the helper T cell which is directly affected by suppressor molecules, with consequent inhibition of antibody synthesis. Gershon similarly suggests the requirement for some helper activity to be present in order for suppression to be manifest (28). Since it has been demonstrated that Con A-
% control MLR response = [(E − C) of MLR with supernate / (E − C) of MLR without supernate] × 100.

Data were analyzed statistically by the Student's t test.
FIG. 3. Activity of suppressor factor after adsorption with normal or Con A-activated thymocytes from various mouse strains. Supernates from C57BL/6-activated BALB/c spleen cells (suppressor) were adsorbed with lymphocyte preparations as indicated and tested in MLR of BALB/c responder and BALB/c or C57BL/6 stimulator cells (final concentration in MLR, 20%). Data represent mean responses ± SEM of 3-12 experiments.

…showed similar adsorbing capacity. In contrast, suppressor activity remained intact after exposure to H-2-dissimilar activated C3H/He (H-2k) thymocytes. Finally, BALB/c suppressor factor was adsorbed by activated thymocytes of strain A, which share H-2 regions to the right of I-E with the H-2d haplotype of BALB/c. Supernate-target cell interactions showed similar genetic restrictions when the suppressor factor tested was produced by cells of another H-2 haplotype, C3H/He (H-2k) (Fig. 4). Again, Con A-activated C3H/He thymocytes, but not normal or Con A-activated C3H/He spleen cells, adsorbed suppressor factor. Normal syngeneic thymocytes also partially abrogated suppressor activity. Neither normal nor Con A-activated H-2-dissimilar BALB/c (H-2d) thymocytes were effective adsorbants.

Suppressor Factor Adsorption by Alloantigen-Activated Lymphocytes. Since these adsorption studies were initially predicated on the effects of suppressor factor on cells which were responding to alloantigens in the context of the MLR, it was pertinent to use, as adsorbing cell populations, spleen cells which were stimulated by alloantigen rather than by mitogens. BALB/c or C3H/He spleen cells were incubated for 48 h with syngeneic or allogeneic mitomycin C-treated cells; they were harvested, washed extensively, and prepared as adsorbing cells for BALB/c factors (Fig. 5). As a control for the effects of alloantigen-induced activation, BALB/c and mitomycin C-treated C57BL/6 (H-2b) cells were pre-

FIG. 5. Suppressor factor activity after adsorption with alloantigen-activated lymphocytes. Supernates from C57BL/6-activated BALB/c spleen cells (suppressor) were adsorbed with syngeneic or allogeneic cell mixtures which had been cultured for 48 h or prepared and combined at the time of adsorption (0 h). The supernates were tested in MLR of BALB/c responder and BALB/c or C57BL/6 stimulator cells. Data represent mean E-C counts per minute of three experiments. Figure in each bar represents percent control MLR response.

Effect of suppressor factor adsorption on subsequently cultured MLR responder cells. Supernates were present for the entire MLR culture period (final concentration in MLR, 20% [left panel]) or incubated for 40 min at 4°C with MLR cells harvested at 0, 4, or 24 h after culture initiation. Thereafter the MLR cells were washed and returned to culture. MLR responses were assayed at 72, 96, or 120 h of incubation. Supernates from C57BL/6-activated (suppressor) or normal (control) BALB/c spleen cells were tested in MLR of BALB/c responder and BALB/c or C57BL/6 stimulator cells. Data represent mean responses of three experiments.
Religion and Higher Education Achievement in Europe
Although religion has historically been a structuring dimension of higher education systems in Europe, very little research interrogates the contemporary link between religion and higher education. But why should that be done? Drawing on European Social Survey data, we show that it helps in understanding the roles played by higher education in a given society. Furthermore, it treats religious belonging as a potential indicator of inequalities, alongside ethnic and socio-economic background. It thus underlines the cognitive dimension of inequalities and calls for a broader consideration of individual belongings in their analysis.
…between religion and higher education. Such an analysis would be of interest at two levels. First, it is about understanding the role played by higher education in a given society. Are there specific religious contexts in which higher education appears more or less developed, and what do we learn from comparing these contexts? Second, it is about taking religious backgrounds or belongings into account in the reading of inequalities of access to higher education. Historically, some groups have been barred from accessing higher education, and European societies are, today, still more or less organized along religious lines. This calls for the consideration of religion as a potential indicator of inequalities, along with ethnic and socio-economic background.
We then built an original research design to compare tertiary-degree holders to the rest of the population, looking at their religious background.
RELIGION, EDUCATION, AND SOCIETY
The first striking result is a global trend: in Europe, the most secular societies tend to be those with a higher level of education. Comparing the two groups of societies (the most secular ones, with a higher level of tertiary education, and the most religious ones, with a lower level of tertiary education), another trend appears: countries of Protestant tradition are more likely to have a high level of tertiary education than countries of Catholic tradition.
How can one explain these trends? Some research shows that Protestantism has generated not only a high level of economic prosperity, as Max Weber identified, but also a high level of literacy and the additional education necessary for reading the Bible. Indeed, based on the history of Protestantism and Catholicism, one finds a major difference regarding these religions' role in society: in Protestantism, the individual relationship to knowledge is direct, the Bible was translated early into German (in opposition to the long-lasting domination of Latin in Catholicism), and the development of schooling was supported during the Reformation. So today's differences in the development of higher education systems can be interpreted, at least partly, as the consequence of historical choices; in this case, the choice of a common language of religious instruction, which came with the less hierarchical structure of Protestantism compared with Catholicism. This is coherent with the fact that, in 1900, countries with a majority of Protestants had nearly reached a universal level of literacy, which was not the case in any Catholic country. This shows how a choice made by a religious institution at some point in history can have long-lasting effects on the development of education. It also calls for the development of a societal and historical approach to explore the complex link between higher education and religion.
RELIGION, EDUCATION, AND INDIVIDUALS
The second set of important results concerns the weight of religious background on the individual probability of accessing tertiary degrees, all else being equal. To address this issue, the impact of religious background on access to higher education has been investigated, for each country, controlling for age, gender, the parental level of education, parental profession, parental and respondent's country of birth, citizenship, sense of belonging to an ethnic minority or a discriminated group, as well as language spoken at home. Is there a residual impact of religion once these variables are controlled for?
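A worked sketch of the kind of specification described here (not the authors' exact model; the variable names are illustrative): the probability of holding a tertiary degree is modeled as a logistic function of religious belonging plus the listed controls,

\[
\Pr(\text{tertiary}_i = 1) = \Lambda\!\left( \beta_0 + \beta_1\,\text{religion}_i + \boldsymbol{\gamma}^{\top} \mathbf{x}_i \right),
\qquad
\Lambda(z) = \frac{1}{1 + e^{-z}},
\]

where x_i collects age, gender, parental education and profession, countries of birth, citizenship, minority and discrimination status, and home language; a "residual impact of religion" then corresponds to β_1 differing from zero after these controls.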
First, it appears that individuals without any religious belonging are often more likely to hold a tertiary degree in countries where a majority of respondents declare a religious belonging. For example, in Portugal, Spain, Poland, Austria, and Slovakia (countries where the majority of the population is Catholic), respondents who declare themselves "without religion" are more likely to hold a tertiary degree than those who declare a religion. It is also the case in Greece and Russia, two countries where the majority of the population is of Orthodox faith.
Second, in countries where most respondents declare no religious belonging, respondents who affirm a religious belonging tend to have a higher probability of holding a tertiary degree. This is, for example, the case for Catholics in the United Kingdom, Sweden, and Belgium, and for Protestants in the United Kingdom, Sweden, and Latvia.
Third, comparing the access to tertiary education of different religious minority groups with that of the largest groups, Muslims appear less likely to hold a tertiary degree in at least five countries (Austria, Belgium, Germany, Greece, and Switzerland), and Orthodox Christians in one (Switzerland).
Furthermore, across different age groups of national populations, changes are observed in the representation of the various religious communities holding tertiary degrees. This means that the impact of religious belonging changes over time.
RELIGION AS AN INDICATOR
So why dig into the burning societal issue of religion when questioning access to higher education? The trends previously underlined are obviously hard to explain, as they are the product of complex and obscure processes. Still, digging further seems worthwhile for at least three reasons. At a theoretical level, interrogating the multicausality of the relation between religion and higher education should help us understand the dynamics at play between higher education and society. At a more pragmatic level, this examination offers an opportunity to analyze how societal dynamics are intertwined with individual ones in educational trajectories. What is the role of higher education in the building up of nation states integrating diverse religious communities? Finally, it also underlines the interest of not limiting an analysis of inequalities in education to the classical socio-economic and ethnic background, but of enlarging it to the different belongings individuals express as part of their world.
The role of metrology in mediating and mobilizing the language and culture of scientific facts
The self-conscious awareness of language and its use is arguably nowhere more intense than in metrology. The careful and deliberate coordination and alignment of shared metrological frames of reference for theory, experiment, and practical application have been characteristics of scientific culture at least since the origins of the SI units in revolutionary France. Though close attention has been focused on the logical and analytical aspects of language use in science, little concern has been shown for understanding how the social and historical aspects of everyday language may have foreshadowed and influenced the development and character of metrological language, especially relative to the inevitably partial knowledge possessed by any given stakeholder participating in the scientific enterprise. Insight in this regard may be helpful in discerning how and if an analogous role for metrology might be created in psychology and the social sciences. It may be that the success of psychology as a science will depend less on taking physics as the relevant model than on attending to the interplay of concepts, models, and social organization that make any culture effective.
Introduction
Language plays a widely recognized and visible role in culture and cultural identities. Geographic associations-sometimes quite specific ones-can often be easily inferred from linguistic clues, such as dialects or accents. Subtle matters of decorum, economic status, and social mores, as well as more obvious connections with clothing, hair styles, and adornments, may also be closely allied with variations in language.
But where language is often used in a relatively unreflective way in everyday life, language in science and engineering is explicitly oriented toward carefully crafted precision and clarity [1]. New words for systematically implemented and detailed component processes, methods, phenomena, effects, etc. need to convey very specific meanings to be useful. Consistent and routine usage of standardized metrological terms in educational, laboratory, industrial, and other contexts inevitably contributes to the shaping of social organization and cultural identity [2][3][4][5][6][7][8][9][10][11].
Conceptual complexity has increasingly well-documented interactions with social organization, with demand for metrological uniformity in science and commerce having a long history of consequences in government, academic, research, and business institutions [6]. Recent research on the intertwined processes of concept formation and social organization [7-11] suggests a basis for resituating scientific language, concepts, and thinking relative to their origins in everyday processes. New possibilities for informing the theory and methods of psychology and the social sciences emerge as the salience of existing work into invariance, instrument calibration and equating, and the creation of shared frames of reference becomes apparent [12-15].
Text and technology
Over many millennia, a wide variety of technologies, from prehistoric discoveries of fire and the wheel to contemporary telecommunications and computing devices, have made it possible for persons lacking advanced skills and resources to take advantage of difficult and complex operations that would otherwise be unavailable to them. Technologies embody concepts and things in ways that make their associations accessible to end users unable to perform or invent for themselves the operations involved in producing a desired effect. As Whitehead [16] put it in 1911: "It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them." In performing this function, technique and technology show their conceptual origins in the ancient Greek term, techne, meaning "to make" [17-18]. Other words sharing this root include text and textile, which conceptually overlap in the way they entail related spheres of weaving, yarns, threads, spinning, etc. Though the concrete media required for written text are obvious forms of technology, spoken language's phonemes, phonetics, syntax, semantics, etc. also exhibit the features of techne.
Technique and technology embody the advance work done by language in the sense of codifying elaborate routines in a compact and portable method or tool. People who learn a word in a language typically have no or very limited knowledge of the experiences that led to the formation of the relevant concept but nonetheless accept the association without wondering very much about it. Just as is the case with technologies like electrical appliances, automobiles, or computers, we are born into and enculturated within social groups with long traditions of capitalizing on the practical knowledge of language and tool use while knowing little or nothing of the technical knowledge that makes the concepts and methods involved effective. Written text is in a sense the paradigmatic definition of technology in that it is the earliest obvious example of how learning was codified in a form later generations could access, whether or not they knew anything about the origins of the concepts, tools, or methods learned.
Whenever a newly salient event in the world is noted and remarked upon, an analogous word, sound, image, or sign of some kind (a text, broadly applied) is metaphorically situated in relation to other signs within a pre-existing referential system. The text of language is a model for everything technical in the sense that semiotic associations of words-things-concepts give rise to analogous associations of tools-things-models. The making of meaning via these kinds of associations is the fundamental creative act, and sets up the process by which anything else is made. The root pattern of methodical thinking lies in the way words come into language as the medium embodying the unity of an ideal concept and something in the world.
Hence we have the Greek root of method as meta-odos, a following along after (meta-) on the path (odos) [19] traced by the "activity of the thing itself" that thought experiences [20]. The word "method" and the concept of methodical thinking converge here, making an explicit claim to return for evidence to 'the things themselves' or 'phenomena,' i.e., to things as they show themselves before the work of abstraction and theorizing has carved out a language of fixed essences for them removed from human praxis, history and culture [21]. When a name is given by an individual to something notable or remarkable repeatedly experienced and re-cognizable in the world, that thinking process methodically and logically coordinates and aligns a word, a concept, and a thing. This semiotic coordination might be shared with another person, and the word, or a related sign, and its position in a larger system of shared signification may then become distributed throughout a community as a part of that society's cultural fabric [2][3][4][5][6][7][8][9][10][11][22][23].
If that happens, the word and its associations effectively pre-think the world for those who learn what the word means from others. Those who learn the word from others do not have to acquire enough experience with the phenomenon to conceive an idea of it, do not have to give birth to new meaning embodied in a word of their own, and do not then have to translate words across personal idioms relating the same concepts and things. This process is implicated by Gadamer [20] when he writes of the way formal logic inevitably begins from "the logical advance work language has done for it." The advance work performed by language pre-thinks the world for us to the point of bringing about new efficiencies in the making of meaning. The to-and-fro play of conversation relieves speakers of the burden of taking the initiative by building on pre-existing shared meanings to the point of facilitating fluid, satisfying and pleasurable shared experiences. Not only does the back-and-forth of question and answer absorb interlocutors into elucidating the object of the conversation, but this may occur to the extent that their horizons or frames of reference fuse in relation to that object. This fusion is itself the moment at which both a common language and an awareness of different perspectives are created, and mutual understanding is achieved, however provisional and circumstantial it may be.
Advancing civilization
The prototypical metrological processes performed in the standardizations of spoken and written language pave the way for science's deliberately arranged shared cultural frames of reference. That is, the reasoning processes used in scientific problem solving are not qualitatively different from the reasoning processes people use in other areas of life [9]. The question that arises, then, is how civilization might be advanced via psychology and the social sciences: how might we increase the number of important operations in these areas that we can perform without thinking of them? That is, instead of taking physics as the standard of success in the conduct of science, might it not make more sense to understand the social and linguistic processes through which physics has succeeded, with the aim of extending those processes into psychology and the social sciences?
Almost all state-of-the-art measurement in education, health care, performance assessment, etc. plainly depends almost entirely on the active participation of people able to think about the important operations that must be performed. Absent skilled experts, state of the art measurement simply does not usually happen in psychology or social sciences research and practice. Even when experts are involved, the complications and expense of high quality measurement are often enough to prevent it from taking place.
Why? Might it be because, with a very limited number of exceptions [12-15, 24-28], measurement in psychology and the social sciences lacks virtually any methods and traditions concerned at all with metrological traceability? With uniform unit standards? With consensus processes for determining standard product definitions? With the power of metrology for simplifying processes, for reducing costs, for streamlining communication, and for amplifying collective intelligence? Little or no attention is being focused on metrology even in the wake of recent developments that would seem to make its relevance unavoidably evident: networked communications, item banking, instrument equating, adaptive instrument administration, and the predictive control needed for on-the-fly automated item generation [14-15, 24-28]. Inevitably, however, increasing pressure to put two and two together will be applied as the human, economic, …

In wondering how to advance civilization by simplifying end use functions, Whitehead is suggesting a model of a different kind of person than the rational Cartesian subject usually assumed as the agent of scientific and economic activity. No one, no matter how brilliant or economically advantaged, has the time and resources to be completely informed about every important factor affecting the decisions of daily life, much less in the complex operations of science. Even the simplest communication would be prohibitively cumbersome if each person was burdened with the tasks of completely re-inventing for themselves every word in a language and always being completely logical in their decision making. If everyone had to formulate their own vocabularies, grammars, etc., and then translate between them to effect any communication at all, the flow of conversation would be so obstructed as to be changed utterly in its basic character.
Even though few speakers of a language seek out any significant degree of understanding of the origins of the alphabet, script, words, grammar, orthography, etc. comprising that language, this does not prevent proper, comprehensible usage and successful communication. The capacity of language to point at things not present and to make intended meanings comprehensible even in the absence of any overt expository skill in etymology and grammar brings efficiencies to communication that make shared knowledge possible. Explaining these efficiencies is, according to Hayek, the central question of all social sciences: How can the combination of fragments of knowledge existing in different minds bring about results which, if they were to be brought about deliberately, would require a knowledge on the part of the directing mind which no single person can possess? [29] The continuing relevance of Hayek's question is noted by Hancock [30], who emphasizes the ways decentralized decision-making processes enable societies to employ multiple methods for exploring alternative approaches to solving problems. Others have similarly lately remarked, quoting Hayek [29], that "the marvel of the market thus resides in 'how little the individual participants need to know to be able to take the right action'" [31]. The qualification of "little" here is, of course, relative. Massive amounts of low quality information may never support right action, and the social investments made in creating high quality scientific information may be a major portion of the total economy [2]. Bringing technical processes and objects into everyday use thus requires the coordination and alignment of a wide variety of domains of expertise, with the added problem of making each domain's technical aspects transparent to all of the others. Without financing, accounting, management, sales, marketing, human resources, metals mining, cable manufacturing, rubber and plastics insulators, and consumers, the electrical industry would be as nonexistent as it would be if inventors, scientists and engineers had never created electrical concepts and tools in the first place.
In the terms of contemporary social studies of science [2][3][4][5][6][7][8][9][10][11], the problem is one of translating each area of stakeholders' perspectives on technical boundary objects into the languages of each other area of stakeholders (Fig. 1) [32][33]. Ideally, alliances advance each stakeholder group's interests further than would otherwise be possible, but this is not, of course, always the case. In education, for instance, psychometricians, statisticians, learning theorists, curriculum designers, teachers, parents, students, testing agencies, publishers, principals, researchers, accountants, and others may be allied or alienated depending on whether they successfully define a common boundary object and translate their interests in it into terms incorporated and advanced by each other area of stakeholders. The pace and spread of innovations depends in large part on being able to express technical effects in ways that capture the imaginations and interests of stakeholders across domains enough for them to coordinate and align their processes and outcomes [32][33][34][35][36]. Attending to the semantic role performed by technological objects within and across stakeholder communities may provide productive new paths for research and development in psychology and the social sciences [37][38].
Future directions
Rephrasing the question to make the interplay of concepts and social organization plain [2-11], how can we create a world in which the facts of psychological and social measurement can survive? What kind of environment would be required to build networks in which the outside world has the same form as the instruments in the laboratory? What kinds of continuous trails can be created to tie all of the literacy measures together, all of the numeracy measures together, and all of the respective relationship quality, physical functioning, and health status measures together? Can individual differences be appropriately understood and qualified? What opportunities for such networks can be envisioned, and are there any approximations of such networks already in place in education, psychology, or the social sciences? And quite importantly, how can the interests of each group of stakeholders in a given area be satisfied and represented? Can divergent and even conflicting interests be productively mediated within and between various organizations and institutions [35][36]? Can the rules, roles, and responsibilities constituting efficient market economics [34] be brought to bear on exchanges of human, social, and natural capital value [37]? Even negative answers to these questions will give clearer assurances about viable paths for productive science and economics than could be obtained if the questions had never been raised at all.
Pseudogaps in Underdoped Cuprates
It has become clear in the past several years that the cuprates show many unusual properties, both in the normal and superconducting states, especially in the underdoped region. In particular, gap-like behavior is observed in magnetic properties, c-axis conductivity, and photoemission, whereas in-plane transport properties are only slightly affected by the pseudogap. I shall argue that this experimental evidence must be viewed in the context of the physics of a doped Mott insulator and that it supports the notion of spin-charge separation. I shall review recent theoretical developments, concentrating on studies based on the t-J model. I shall describe a model based on quasiparticle excitations, which predicts the doping dependence of T_c and anomalous energy-gap-to-T_c ratios. Finally, I shall outline how the model may be derived from a microscopic formulation of the t-J model. After a brief review of the U(1) formulation, I shall explain some of the difficulties encountered there, and how a new SU(2) formulation can resolve some of the difficulties.
I. INTRODUCTION
It has become clear in the past several years that the cuprates show many highly unusual properties both in the normal and superconducting (SC) states. These unusual features are related to the fact that the cuprates are doped Mott insulators. It is then not surprising that the unusual behaviors are most striking in the underdoped region, when the concentration of doped holes, x, is small. In the normal state a pseudogap is observed in a temperature range considerably above the SC transition temperature T_c. The gap is seen in the NMR relaxation rate 1/T_1, the Knight shift [1], and the specific heat. [2] It is also seen in c-axis conductivity [3] and in photoemission experiments [4,5], which reveal that the pseudogap is roughly of the same size and k dependence as the d-wave SC gap. Furthermore, the gap size is essentially independent of x and even increases slightly when T_c is reduced with decreasing x. This observation is also supported by tunneling data. [6] On the other hand, the in-plane transport properties are only slightly affected by the pseudogap. The resistivity shows a small decrease which may be interpreted as a decrease in the scattering rate. [7] More importantly, the spectral weight of the Drude part of σ(ω) is proportional to x [8] and there is no evidence that it is strongly reduced by the presence of the pseudogap. [7,9,10] We believe this is strong experimental evidence supporting the notion of spin-charge separation [11] in these materials. It was pointed out by P. W. Anderson [11] early on that the Néel state is not the best way to accommodate the competition between the hole kinetic energy and the spin exchange energy. He envisioned another possibility, i.e., that the spins form a liquid of singlets, which he termed the resonating valence bond (RVB) state. The reason is that the energy to form a singlet, −JS(S + 1), is particularly favorable for S = 1/2. The holes can move freely among the liquid of singlets and are responsible for the charge transport. This notion of spin-charge separation naturally accounts for all the qualitative features of the spin gap state noted above. The spins form RVB singlets, so that it costs energy (the spin gap) to make triplet excitations. However, the in-plane conductivity is carried by x holes, which remain gapless. In c-axis conductivity and photoemission, a physical electron is removed from the plane, which carries both spin and charge. It then follows that the spin gap should appear in these experiments. This picture is illustrated in Fig. 1.
We note that an alternative model which exhibits the above phenomenology is the model of preformed pairs above T_c. There are two versions of this class of model; the first suggests that strong phase fluctuations [12] destroy long range order over a large temperature range above T_c, and the second assumes that we are in the short coherence length limit of the pairing state [13], so that essentially pair "molecules" are first formed and then Bose condensed. [14,15] The phase fluctuation model predicts that the pairing amplitude is responsible for the pseudogap and would seem to predict that other manifestations of superconductivity, such as conductivity and diamagnetism fluctuations, should be observable, particularly at short distance and short time scales. A recent high frequency conductivity experiment in underdoped BSCCO [16] shows that while Berezinskii-Kosterlitz-Thouless (BKT) type fluctuations are observed near and above T_c, the short distance (bare) superfluid density extracted from these measurements vanishes above 100 K, much below the temperature range associated with the pseudogap. These data are difficult to understand within the phase fluctuation model. Similarly, in the short coherence length model, charge transport is by charge 2e pairs in the pseudogap state, and it is difficult to understand the insensitivity of the transport properties to the appearance of the pseudogap with underdoping. Furthermore, it is not at all clear that the coherence length is short in the underdoped limit. In section III we shall in fact argue that the coherence length increases with underdoping, and that one is not in the short coherence length regime. In any event, in both these models a superconducting state with a large energy gap is postulated to exist, without any indication of the origin and the energy scale of the gap. The RVB picture is fundamentally different from these preformed pair pictures in that spin-charge separation plays a crucial role. The pseudogap is a spin gap with an energy scale set by J, which becomes the superconducting gap with the onset of coherence in the charge degrees of freedom. The superconducting state is characterized by spin-charge recombination, forming superconducting quasiparticles which are quite conventional in the BCS sense.
We model the cuprate with the t-J model, which we believe contains the essential physics of the doped Mott insulator. The t-J Hamiltonian takes the form sketched below and is subject to the constraint that double occupancy of a site by two electrons of opposite spins is not allowed.
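A plausible reconstruction of the missing display equation, assuming the conventional nearest-neighbor t-J form (the −n_i n_j/4 term is sometimes omitted):

\[
H_{t\text{-}J} = -t \sum_{\langle ij \rangle, \sigma} \left( \tilde{c}^{\dagger}_{i\sigma} \tilde{c}_{j\sigma} + \mathrm{h.c.} \right) + J \sum_{\langle ij \rangle} \left( \mathbf{S}_i \cdot \mathbf{S}_j - \tfrac{1}{4}\, n_i n_j \right),
\]

where \( \tilde{c}_{i\sigma} = c_{i\sigma}(1 - n_{i,-\sigma}) \) is the projected electron operator enforcing no double occupancy.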
The t-J model is the strong coupling limit of the Hubbard model, and the difficulty of its solution lies in enforcing the no-double-occupancy constraint. For the cuprates the parameters are known to be J ≈ 0.13 eV and t/J ≈ 3. When holes are doped into the insulator, there is a gain in kinetic energy per hole proportional to t due to hopping. However, the spin correlation is destroyed, costing an energy of approximately J per site. Thus we can consider the doping problem as a competition between the energy xt (kinetic energy per site) and J. When xt ≪ J, the AF state with its doubled unit cell is retained and the holes form small pockets around the top of the single hole dispersion, which is known to be at (π/2, ±π/2) from photoemission [17] (see Fig. 2). This problem belongs to the same class as the doping of a band insulator (or semiconductor). The only difference is that the coherent part of the band has a reduced spectral weight of J/t and a bandwidth of order J. This can be understood in terms of a spin-polaron picture, i.e., the hopping hole is surrounded by a cloud of spin excitations. [18] On the other hand, if xt ≫ J, the spin correlation becomes unimportant, AF order is destroyed, and the holes should form a metallic state, describable by Fermi liquid theory. The Luttinger theorem then dictates that the area of the Fermi surface is given by 1 − x, as shown in Fig. 2c. The important point is that the electrons which form the local moments in the Mott insulator are now mobile and should be counted as part of the Fermi sea. The change in Fermi surface area from x to 1 − x between the low and high doping limits is a special feature of the doping of a Mott insulator, associated with the liberation of the local moments. The question then arises: how does the system evolve between these two limits? The intermediate state is apparently the spin gap state, with gaps in the one-electron spectrum in the vicinity of (0, π) and segments of the Fermi surface near (π/2, π/2) (see Fig. 2b). As doping is increased, these segments grow in length and eventually join to form the Luttinger Fermi surface. This intermediate state is clearly not a Fermi liquid, because in band theory, gapping of parts of the Fermi surface is not permitted without symmetry breaking. However, the breakdown of Fermi liquid theory is not a sharply posed issue at finite temperatures. The existence of the superconducting ground state at intermediate doping means that this question cannot be investigated experimentally at present. It is worth noting that the transition region occurs near x = 0.2, when xt and J are comparable.
We emphasize once again that an important aspect of the doping of the Mott insulator is that the resulting metal must remember that x holes are responsible for the electrical conductivity. If the AC conductivity σ(ω) is characterized by a Drude-like component at low frequency, we may characterize the conductivity by the scattering rate 1/τ and the spectral weight (n/m)_effective. For underdoped samples, this spectral weight is proportional to x. [8] This is very natural in that the weight must vanish when x → 0. It follows that a superconductor that forms out of the underdoped metal must have a superfluid density ρ_s given by this spectral weight (in the clean limit), so that ρ_s is proportional to x. This simple observation will play a prominent role in our subsequent discussion. On the other hand, when xt > J we have a Fermi liquid state with electron density 1 − x. The question then arises as to how (n/m)_effective ≈ x can be accommodated within Fermi liquid theory. Within Fermi liquid theory we can write Eq. (1) below, where m* is the effective mass and F_{1S} is a Landau parameter. It describes the deviation of the current carried by the quasiparticle from −ev_k due to the dragging of other quasiparticles, as in Eq. (2) below, where α = 1 + F_{1S}/2. [19] Here we have made the simplifying assumption that only the ℓ = 1 Landau parameter (in 2d) is important and the correction is independent of k. From Eq. (1) we see that there are two ways to obtain (n/m)_eff = x. The first is to generate a heavy mass so that m* ∝ 1/x. This is in fact the case for the system La_{1-x}Sr_xTiO_3, which is a Mott insulator for x = 0 with a Néel temperature of T_N = 150 K. With doping x > 0.02, a metallic state is formed with the spin susceptibility χ and the specific heat coefficient γ both scaling as 1/x and a Wilson ratio of order unity. [20] The Hall coefficient R_H ∼ 1/(1 − x), as expected for a Fermi surface dictated by the Luttinger theorem. This is clearly a realization of the Fermi liquid state expected for xt > J. Unlike the cuprates, LaTiO_3 is a three dimensional system, so that the exchange constant J can be deduced from the ordering temperature. Thus the ratio J/t is very small, and we believe this is the reason why the Fermi liquid state persists to low doping. For x < 0.02, disorder effects become important and we are not able to explore the xt < J limit in this system. A second route to achieve a spectral weight of x is for 1 + F_{1S}/2 ≈ x. It turns out this is the route followed by the mean field slave boson theory described below. [21] We shall see that in the underdoped region this route is not followed in the cuprate system. We have strong evidence that the factor α in Eq. (2) is not proportional to x and is in fact of order unity.
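The display equations cited as Eqs. (1) and (2) are missing from the extracted text; a plausible reconstruction, assuming the standard 2D Fermi-liquid relations consistent with the definitions of m*, F_{1S}, and α given above:

\[
\left( \frac{n}{m} \right)_{\mathrm{eff}} = \frac{n}{m^*} \left( 1 + \frac{F_{1S}}{2} \right), \tag{1}
\]
\[
\mathbf{j}(\mathbf{k}) = -e\,\alpha\, \mathbf{v}_{\mathbf{k}}, \qquad \alpha = 1 + \frac{F_{1S}}{2}. \tag{2}
\]

With these forms, the two routes to (n/m)_eff = x discussed in the text correspond to m* ∝ 1/x (with α of order unity) and to α ≈ x (with m* of order m).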
II. MICROSCOPIC MODEL AND MEAN FIELD THEORY
The physics of spin-charge separation appears naturally in a class of theories which start with the t-J model and enforce the constraint of no double occupation by decomposing the electron into a fermion and a boson, as sketched below. The fermion f_{iσ} carries the spin index and the boson b_i keeps track of the charge degrees of freedom. The constraint is replaced by the requirement shown below, which can be enforced by introducing a Lagrange multiplier so that field theoretic methods may be applied. This decomposition (called the slave boson method) is not unique, and one could just as well associate the spin with the boson (the Schwinger boson theory [22]). If the theories were solved exactly they would give identical results. However, different factorizations lead naturally to different approximation schemes. Our strategy is to explore the different schemes to see which corresponds most closely with experiment. In particular, while the Schwinger boson method gives an excellent description of the antiferromagnetic state at half filling, [22] it does not produce a large Fermi surface for large doping. Since we are mainly interested in the regime of intermediate doping, the slave boson is a more promising starting point.
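The decomposition and constraint equations did not survive extraction; a minimal sketch, assuming the standard U(1) slave-boson representation:

\[
c_{i\sigma} = f_{i\sigma}\, b_i^{\dagger},
\qquad
\sum_{\sigma} f_{i\sigma}^{\dagger} f_{i\sigma} + b_i^{\dagger} b_i = 1 \quad \text{on every site } i .
\]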
The exchange term can be written in terms of the fermions only [23], which invites the mean field decoupling sketched below. These parameters describe the formation of a singlet on the bond ij. The mean field phase diagram [24,25] is shown schematically in Fig. 3. As the temperature is lowered, χ_ij ≠ 0, so that the fermions acquire an energy band and a Fermi surface. At a lower temperature, the fermions form a pairing state with d-wave symmetry. The bosons become essentially Bose condensed (with exponentially large correlation length with decreasing T) below a cross-over temperature T_BE^(0), below which the boson field can be treated as a c-number. In the overdoped region this gives rise to a Fermi liquid phase, similar to the theory of heavy fermion systems. In the intermediate doping range, the simultaneous presence of Δ_ij and ⟨b⟩ gives rise to a pairing order parameter ⟨c_i↑ c_j↓⟩ for physical electrons which is of d-wave symmetry. Above T_BE^(0), spin-charge separation occurs at the mean field level. In the pairing state a d-wave-type gap opens in the spin excitation spectrum, but not in the charge excitations, and it is natural to identify this as the spin gap phase. Finally, region IV in Fig. 3 is a non-Fermi-liquid state which may be referred to as a "strange metal." We can go beyond mean field to include fluctuations about the mean field solution. The most important fluctuations are the phase fluctuations of the order parameter χ_ij = |χ_ij| e^{iθ_ij}. Particles hopping around a plaquette acquire a phase related to θ_ij, just like electrons in the presence of a magnetic flux. These low-lying excitations are U(1) gauge fields. [26] We shall refer to this theory as the U(1) formulation. When coupled to the fermions and bosons, they enforce the constraint locally, not just on average as in mean field theory.
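The decoupling itself was lost in extraction; a plausible sketch, assuming the standard bond order parameters for the fermionic exchange term:

\[
\chi_{ij} = \sum_{\sigma} \langle f_{i\sigma}^{\dagger} f_{j\sigma} \rangle,
\qquad
\Delta_{ij} = \langle f_{i\uparrow} f_{j\downarrow} - f_{i\downarrow} f_{j\uparrow} \rangle .
\]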
III. PHENOMENOLOGICAL DESCRIPTION OF THE SUPERCONDUCTING STATE
Before continuing with the microscopic theory, we digress to review a phenomenological description of the superconducting state. [27] The idea is to start at low temperature, where the nature of the elementary excitations is well understood, and calculate the reduction of the superfluid density with increasing temperature. This can be done by making two assumptions: (A) the superfluid density is given by x, and (B) the quasiparticle (qp) dispersion in the presence of an external electromagnetic gauge potential has the BCS form sketched below for k near the nodes, where j(k) is given by Eq. (2). Note that v_F ≡ ∂ε/∂k is the "normal state" Fermi velocity and that the vector potential A couples only through the "normal state dispersion" ε(k) and has nothing to do with the SC gap Δ(k). The physical reason for this is that the quasiparticle is a superposition of an electron with momentum k and a hole with momentum −k, and both these objects carry the same charge current j(k). Mathematically, Eq. (5) is easily derived by noting that A enters only in the diagonal elements of the BCS matrix, in the form ε(k + A) − µ and −[ε(−k + A) − µ], which is diagonalized to give Eq. (5) to linear order in A.
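The display form of Eq. (5) is missing; a plausible reconstruction following the diagonalization just described (the conventions for the charge e and factors of c are assumed here):

\[
E_{\mathbf{k}}(\mathbf{A}) \simeq \sqrt{\left( \epsilon_{\mathbf{k}} - \mu \right)^2 + \Delta_{\mathbf{k}}^2} \;-\; \frac{1}{c}\, \mathbf{j}(\mathbf{k}) \cdot \mathbf{A},
\qquad
\mathbf{j}(\mathbf{k}) = -e\,\alpha\,\mathbf{v}_{\mathbf{k}},
\]

i.e., a Doppler-like shift of the BCS dispersion controlled entirely by the normal-state current j(k) rather than by the gap.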
With these assumptions we can calculate how the superfluid density is reduced by the thermal excitation of quasiparticles. We found the linear-T depletion sketched below, where v_2 = Δ_0 a/√2 is the velocity of the quasiparticle in the direction of the maximum gap Δ_0, i.e., in the direction from the node towards (0, π). The ratio v_F/v_2 thus measures the anisotropy of the massless Dirac cone which characterizes the d-wave qp spectrum.
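The display form of Eq. (6) did not survive extraction; a plausible reconstruction, assuming the standard nodal-quasiparticle result for a 2D d-wave gap with Fermi liquid correction α (units with k_B = 1):

\[
\frac{\rho_s(T)}{m} = \frac{x}{m} \;-\; \alpha^2\, \frac{2 \ln 2}{\pi}\, \frac{v_F}{v_2}\, T .
\]

The first term encodes assumption (A); the second is the linear-T normal fluid density generated by thermally excited quasiparticles near the nodes.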
Based on numerical calculations and theoretical considerations, [28,29] we expect the mass m in Eq. (6) to correspond to a tight-binding hopping integral of order t, so that m^{-1} ∼ t. Experimentally it is found that m is about twice the electron mass [8], which happens to correspond to a hopping integral of J ≈ 0.13 eV. We believe that the theoretical expression for the hopping integral is t/3, which happens to equal J in our case. On the other hand, the Fermi velocity v_F is proportional to the coherent bandwidth, which is given by J. To keep our expression general, we keep track of the distinction between t/3 and J, even though numerically they are equal.
We see that for small x, quasiparticle excitation is an effective way of destroying the superconducting state by driving ρ_s to zero. By extrapolating Eq. (6) to ρ_s = 0, we can estimate T_c, as sketched below. Note that the result violates the BCS ratio 2Δ_0/kT_c = constant. Presumably the real transition is driven by critical fluctuations, including phase fluctuations and vortex unbinding in the 2d limit, but the underlying (bare) superfluid density should be driven to zero by quasiparticle excitations in the way we indicate. There is experimental support for this point of view from high frequency conductivity measurements. [16] If we further assume that Δ_0 is independent of x for underdoped cuprates, we see that T_c is proportional to x (or more precisely to ρ_s(T = 0)/m), thus providing an explanation of Uemura's plot. [14] It is worth noting that in the slave boson mean field theory Δ_0 is proportional to J, since that is the only energy scale relevant to the formation of spin singlets. Then Eq. (7) predicts that T_c is proportional to xt. Apart from numerical coefficients, this has the same functional dependence as the Bose condensation temperature T_BE^(0), as well as the transition temperature based on pairing of bosons, to be discussed later in the SU(2) formulation.
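The display form of Eq. (7) is missing; setting the reconstructed ρ_s(T) above to zero gives, up to numerical factors,

\[
T_c \;\sim\; \frac{\pi}{2 \ln 2}\, \frac{v_2}{v_F}\, \frac{x}{\alpha^2 m} \;\propto\; x\, \Delta_0 ,
\]

using v_2 ∝ Δ_0 and v_F ∝ J with m^{-1} ∼ t. The BCS ratio then becomes 2Δ_0/kT_c ∝ 1/x rather than a constant, as stated in the text.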
Another important implication is that superconductivity is destroyed when only a small fraction of the quasiparticles (with energy ≤ xΔ_0) are thermally excited. Thus the gap near (0, π) must remain intact in the normal state, leaving a strip of thermal excitations which extends a distance proportional to x from the nodal points. This is qualitatively in agreement with the photoemission experiments. Of course our phenomenological picture does not provide a description of the normal state. It simply states that the normal state gap is an inescapable consequence of a finite Δ_0 and a vanishingly small superfluid density as x → 0.
The fact that dρ_s/dT is independent of x and that both ρ_s and T_c are proportional to x means that a scaled plot of ρ_s(T)/ρ_s(0) vs. T/T_c should be independent of x for small T/T_c. In fact, such a scaled plot for YBCO_6.95 and YBCO_6.60 shows a remarkable universality over the entire temperature range. [30] We can use the data to extract the ratio α²v_F/v_2 using Eq. (4). Using the YBCO_6.95 data, we obtain a velocity anisotropy v_F/v_2 = 6.8 if we assume that α = 1. [27] Alternatively, by comparing the measured slopes dρ_s/dT of the YBCO_6.95 and YBCO_6.60 samples, we see that the slopes are almost the same, showing that α²v_F/v_2 is almost independent of doping. From tunneling data we know that the maximum gap Δ_0 slightly increases with underdoping. [6] This implies that α is almost independent of x. This is the experimental evidence that the Fermi liquid scenario α = x does not apply to the underdoped cuprates.
It is useful to compare Eq. (4) with the standard BCS expression, which is usually written in the form [31] of Eq. (8). This expression does not include the Fermi liquid correction and should be compared with Eq. (6) with α = 1. We note that in BCS theory ρ_s(0) is independent of x, and the second term in Eq. (8) is in exact agreement with the second term in Eq. (6), as it should be, because the derivation leading to Eq. (6) is completely general. The first terms in Eq. (8) and Eq. (6) do not agree because the standard BCS theory does not apply to a doped Mott insulator and does not include the physics leading to a spectral weight proportional to x. It is clear that this feature of Eq. (8) does not agree with experiment on underdoped cuprates. If one ignores this and fits the normalized data ρ_s(T)/ρ_s(0) to Eq. (8a), one would reach the incorrect conclusion that the energy gap Δ_0 is proportional to T_c in underdoped cuprates. [32] We emphasize that Eq. (6) includes Eq. (8) as a special case and must be used in place of Eq. (8) for a correct analysis of the data. We can also estimate the size of the vortex core using this picture. The idea is to identify the core size as the point where the critical current is reached. If we replace −eA/c in Eq. (2) by the gauge-invariant superfluid velocity v_s = (1/2)[∇θ − (2e/c)A], we see that the quasiparticle energy shifts up or down in the presence of v_s, and quasiparticles are generated at the Fermi energy, contributing to a normal fluid density. Near the vortex core, v_s grows as 1/R, so that the normal fluid density grows and eventually drives the critical current to zero. This allows us to estimate the core size R_1, as sketched below. Note that the factor x appears in the denominator. We note that in BCS theory, the coherence length can be written either as v_F/πT_c or v_F/Δ_0. The two forms are equivalent because the ratio 2Δ_0/kT_c is a constant. In our case this ratio depends on x, and it is not clear a priori which form is correct for the coherence length. Equation (9) shows that v_F/πT_c is the correct form for the coherence length, and not v_F/πΔ_0. One consequence of this is that the underdoped cuprates are in fact not short coherence length superconductors. [15] The number of holes per coherence volume actually grows as x^{-1} with decreasing doping. A second consequence is that H_c2 (due to orbital effects) is predicted to scale as x². Within this picture it is also clear that in underdoped cuprates the state inside the vortex core should retain the large gap Δ_0, just as in the normal state above T_c. We can now estimate the condensation energy using the relation ΔE = H_c²/8π and H_c² = H_c1 H_c2. Noting that H_c1 is proportional to ρ_s(0)/m ≈ xt while H_c2 = Φ_0/R_1² is proportional to x², we find that ΔE is proportional to x³ (see the sketch below). This is in contrast to the BCS expression ΔE ≈ T_c²/ε_F. Equation (10) also follows from a picture where only the quasiparticles with energy less than T_c are affected by the transition to the normal state. The area of the Brillouin zone occupied by these excitations is of order (T_c/J)(T_c/Δ_0), so the total energy change per area is of order T_c³/(JΔ_0), which agrees with Eq. (10). Thus even when expressed in terms of T_c, the condensation energy is much less than the BCS value in the underdoped system. There is evidence for this suppression of the condensation energy from specific heat measurements. [2]
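The display forms of Eqs. (9) and (10) were lost in extraction; plausible reconstructions, up to numerical factors, consistent with the scalings quoted in the text:

\[
R_1 \;\approx\; \frac{v_F}{\pi T_c} \;\propto\; \frac{v_F}{x\, \Delta_0},
\qquad
\Delta E \;\approx\; \frac{T_c^3}{J\, \Delta_0} \;\propto\; x^3\, \frac{\Delta_0^2}{J}.
\]

The 1/x in R_1 makes the coherence length grow with underdoping, and combining H_c1 ∝ xt with H_c2 ∝ Φ_0/R_1² ∝ x² reproduces ΔE ∝ x³.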
IV. THE SU(2) FORMULATION OF THE T-J MODEL
We now return to discuss the microscopic theory. While the mean field phase diagram is in qualitative agreement with experiments, the U(1) formulation suffers from a number of deficiencies if we try to improve the mean field theory by including gauge fluctuations at the Gaussian level. In the spin gap phase the problem lies with the fact that the MF theory is a pairing theory of fermions and carries with it some features of superconductivity. For example, the gauge field is gapped by the fermion pairing via the Anderson-Higgs mechanism. This leads to a reduction of gauge fluctuations which actually destabilizes the pairing phase. [33] A second problem is that if we introduce a residual interaction between the fermions and bosons to form an electron, the electron spectrum will always have nodes. This is because the node structure in the pairing state is tied to the Fermi level and is very resilient to interactions. Thus we have difficulty reproducing the "Fermi surface segments" which are apparently observed in photoemission experiments. In the superconducting phase we have condensation of the bosons and the quasiparticles become well defined. While this feature is in agreement with experiment, the current carried by the quasiparticles turns out to be reduced, so that in Eq. (2), α = x. As we have seen, this leads to a serious disagreement with the doping dependence of the temperature coefficient of the London penetration depth. In order to circumvent these difficulties, we were led to a new formulation of the t-J model which is designed to be more accurate near half filling. We briefly outline the SU(2) formulation below. [34,35] In this new formulation we introduce an SU(2) doublet of boson fields b^T = (b_1, b_2), in addition to the fermion doublet ψ. The physical electron is represented by the SU(2) singlet formed out of these two doublets, as sketched below. We are motivated by the observation made by Affleck et al. [36] that at half-filling (x = 0) the fermion representation of the t-J model has an SU(2) symmetry, in that a spin-up electron can be represented by a spin-up fermion or the absence of a spin-down fermion. In the U(1) formulation this symmetry is broken as soon as x ≠ 0, and out of an infinite degeneracy of states, the d-wave fermion pairing state is picked out as the MF solution. In contrast, even at the mean field level, the low lying states which are missing in the U(1) mean field theory are included in the new SU(2) formulation. For example, the spin gap state can be described equally well as the d-wave pairing state or a staggered flux phase, where the fermions see gauge fluxes which alternate from plaquette to plaquette. The SU(2) gauge transformation relates these states and guarantees that there is no breaking of the translational symmetry. The fermion spectrum exhibits a d-wave-type gap, with maximum gap at (π, 0) and nodes at (π/2, π/2). We compute the physical electron spectral function, which at the mean field level is a convolution of the fermion and boson spectra. We further introduced a residual interaction between the fermions and bosons. The resulting spectra can be compared with photoemission experiments and have the following features. The spectra consist of a coherent part with spectral weight x and dispersion of order J and a broad incoherent part. The coherent part closely resembles the fermion dispersion.
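The explicit doublets and the truncated singlet construction (the right-hand side of "c_↑ = …" was lost in extraction) can plausibly be reconstructed following the standard SU(2) slave-boson construction; the precise component assignment below is an assumption:

\[
\psi_{\uparrow} = \begin{pmatrix} f_{\uparrow} \\ f_{\downarrow}^{\dagger} \end{pmatrix},
\quad
\psi_{\downarrow} = \begin{pmatrix} f_{\downarrow} \\ -f_{\uparrow}^{\dagger} \end{pmatrix},
\quad
c_{\sigma} = \frac{1}{\sqrt{2}}\, b^{\dagger} \psi_{\sigma},
\quad\text{e.g.}\quad
c_{\uparrow} = \frac{1}{\sqrt{2}} \left( b_1^{\dagger} f_{\uparrow} + b_2^{\dagger} f_{\downarrow}^{\dagger} \right).
\]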
The residual interaction broadens and shifts the nodes at (π/2, π/2), so that we obtain a "Fermi surface segment" near (π/2, π/2). Away from this segment a gap appears in the excitation spectrum, which grows to its maximal magnitude near (0, π). This behavior is in qualitative agreement with angle-resolved photoemission experiments. [4,5] We have also studied the fermion spectrum and how it is affected by gauge fluctuations. We found a logarithmic correction to the fermion velocity, and we successfully fitted the magnetic susceptibility and the specific heat in the spin gap state. [37]

In the superconducting state we need to address the issue of the current carried by the quasiparticles. To expand on this point further, we note that in the original U(1) gauge field formulation of the t-J model, the prediction for ρ_s(T) takes the form of Eq. (6) with α = x, and is therefore in strong disagreement with experiment. This follows simply from the Ioffe-Larkin rule, which states that the inverse response functions of the fermions and bosons should add to give the physical inverse response. In the superconducting state, the fermions and bosons acquire superfluid densities ρ_F and ρ_B, so that

ρ_s^(-1) = ρ_F^(-1) + ρ_B^(-1),   (11)

where ρ_F ≈ (1 − x) and ρ_B ≈ x. However, only the temperature dependence of ρ_F depends on the qp gap structure and is expected to be of the form ρ_F(T) ≈ (1 − x)(1 − T/Δ_0), whereas the temperature dependence of ρ_B arises only through the excitation of the sound mode and should be of higher power in T, which can be ignored. Inserting these into Eq. (11), we see that ρ_s(T) is predicted to be x − x²T/Δ_0. Basically, in the U(1) gauge theory the mismatch between the Fermi surface area and the Drude spectral weight (or ρ_s in the superconducting state) is resolved by a Landau parameter, so that α = x. Thus we may conclude that it is not sufficient to treat the gauge fluctuations only to quadratic order, as in the Ioffe-Larkin theory.
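It may help to spell out the small-x algebra behind the statement above; this is a sketch using the quoted forms ρ_F ≈ (1−x)(1−T/Δ_0) and ρ_B ≈ x, keeping only the leading powers of x.

```latex
% Ioffe-Larkin composition and its leading small-x expansion:
\begin{align*}
  \rho_s(T) &= \left(\rho_F^{-1} + \rho_B^{-1}\right)^{-1}
            = \frac{\rho_F\,\rho_B}{\rho_F + \rho_B}
            = \frac{x(1-x)\bigl(1 - T/\Delta_0\bigr)}{1 - (1-x)\,T/\Delta_0} \\
            &\approx x(1-x)\Bigl(1 - x\,\frac{T}{\Delta_0}\Bigr)
            \;\approx\; x - x^{2}\,\frac{T}{\Delta_0}
            \quad\text{(leading order in } x\text{)},
\end{align*}
% i.e. the slope of the linear-T term carries a factor x^2, which is the
% statement alpha = x in Eq. (6).
```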
We believe this difficulty is tied to the notion of Bose condensation as a way of achieving superconductivity. The reason is the following. The electron operator c_k is a convolution of the fermion and boson operators in momentum space. Let us suppose that the external A field couples only to the boson (this is true in the SU(2) formulation and is approximately true in some gauge choice in the U(1) formulation). In the presence of A, b_q → b_{q+A}, so that after the convolution c_k → c_{k+A} and ε_k → ε_{k+A}, as expected. Thus j_k = −c∂ε/∂A = −e∂ε/∂k. Let us see what happens in the superconducting state. If we assume that the fermions are already paired, superconductivity can be driven by the condensation of bosons, ⟨b_{k=0}⟩ ≠ 0. However, in the presence of A, the Bose condensate remains rigid and stays in the k = 0 state. This is clearly seen in the Ginzburg-Landau theory for the free energy |(∇ − 2ieA/c)b|², where ⟨b_{k=0}⟩ ≠ 0 in the presence of A is responsible for the Higgs mechanism and the London penetration depth. Upon convolution, we see that for the electron operator k is not shifted by A, so that ε(k) is independent of A. The qp now carries no current! In the U(1) formulation, the gauge field a causes a small shift in the fermion spectrum and leads to Eq. (2) with α = x. This is clearly an unacceptable situation and can be seen most acutely for the qp at the Fermi surface along the (π,π) direction. Here the energy gap vanishes, so that the qp in the superconducting state is basically the same state as above T_c. Yet, according to the Bose condensation scenario, the current carried by this qp drops abruptly below T_c.

Now that we have identified the problem, we can see that there are two possible ways to avoid it. The first is to argue that, due to fluctuations, only a small fraction of the bosons are in the condensate; this reduces the problem, but does not eliminate it. We call this the single boson condensation (SBC) scenario. The result is that α can lie anywhere between x and 1, most likely somewhere in between. A second possibility is allowed in the SU(2) formulation but not in the U(1) formulation. In SU(2) theory there are two species of bosons, b_1 and b_2, and we can pair them to form a gauge-singlet pair, ⟨b_1(i)b_2(j)⟩ ≠ 0. We shall call this the boson pair condensation (BPC) scenario. Since ⟨b_1⟩ = ⟨b_2⟩ = 0, the problem is avoided and we find that α = 1. This is really a consequence of continuity, because in this scenario the superconducting qp along (π,π) is smoothly connected to the electron state above T_c. This result comes out of an explicit calculation, which we outline below. [38]

In SU(2) theory we go beyond MF theory by calculating the electron propagator through a ladder diagram [34,35] to include the effects of pairing between the boson and the fermion. Here we will consider only the simplest on-site interaction V(c_↑†c_↑ + c_↓†c_↓), which, when written in terms of bosons and fermions, generates an attraction between the bosons and the fermions if V > 0. There are also other pairing interactions, but they will not modify our results qualitatively. We first consider the second scenario, where there is no SBC but there is a nonzero boson pair amplitude proportional to x_pc, which enters the resulting electron propagator through the quasiparticle dispersion Ẽ_± of Eqs. (14) and (15). In order to interpret these results, let us first consider the normal state, which is recovered by setting x_pc = 0 in Eq. (14) and Eq. (15), yielding the normal state dispersion E^N_± ≡ Ẽ_±(x_pc = 0).
This corresponds to a massless Dirac cone initially centered at (±π/2, ±π/2) when V = 0, which is the MF fermion spectrum of the staggered-flux (s-flux) phase. The effect of V (the boson-fermion pairing) is two-fold. The μ̃ inside the square root shifts the location of the node towards (0, 0) by a distance Δk = −μ̃/v_F, while the last term shifts the spectrum upwards. The cone intersects the Fermi energy to form a small Fermi pocket with linear dimension of order x. As shown in Fig. 4(a), the spectral weight is concentrated on one side of the cone, so that only a segment of the FS on the side close to the origin carries substantial weight. This is the origin of the notion of the "FS segment" introduced in Refs. [34,35]. Now let us see what happens in the SC state, when x_pc ≠ 0. Equation (14) takes the standard BCS form if Ẽ_± is interpreted as the normal state dispersion. However, Ẽ_± differs from the normal state spectrum E^N_± by the appearance of the term −(x_pc Δ/x)² in Eq. (15). Close to the node this term is small, so that qualitatively the spectrum develops from the normal state in a BCS fashion, as shown in Fig. 4(b). This is particularly true if the higher-energy gap between the two branches is smeared by lifetime effects. Thus we see that the "FS segment" is gapped in a BCS-like fashion. However, the velocity v_2 in the (1, −1) direction, being proportional to x_pc/x, does not extrapolate to the gap at (0, π) (which is essentially independent of x_pc), but crosses over to it at the edge of the FS segment. It is worth remarking that in the special case x_pc = x, E^(sc)_± reduces to the standard BCS form with the normal state dispersion ε(k), a chemical potential 2μ̃, and a SC gap Δ(k). The high-energy gap closes and the spectral weight on one branch vanishes, yielding a BCS spectrum as shown in Fig. 4(c).
We have also calculated the effect of a constant A on the qp dispersion, to linear order in A. This adds a term (1/c) j_± · A to Eq. (14), where j_± is interpreted as the current carried by the qp. We recall that in standard BCS theory the current is given in terms of the normal state spectrum by c∂_A ε_A = e∂_k ε, because ε_A(k) = ε(k + (e/c)A). Remarkably, this is almost true in our case, in the sense that j_± is given by c∂_A Ẽ_{±,A}, where Ẽ_{±,A} is obtained by replacing k by k + (e/c)A in ε, μ̃, and Δ everywhere in Eq. (15) except for the term (x_pc/x)Δ², which is kept independent of A. Near the node, Δ is negligible, so that the current is very close to e∂_k Ẽ ≃ e∂_k E^N (which becomes exactly e∂_k ε along the diagonal), thus reproducing Eq. (5). We have checked numerically that even away from the node, in the region of the "FS segment", the current is remarkably close to e∂_k E^N, which can be quite different from the BCS value e∂_k ε near the edge of the FS segment. From Eq. (6), the temperature dependence of the London penetration depth gives a direct measurement of α² v_F/v_2. Density-of-states measurements using the T² coefficient of the specific heat yield v_F v_2. The Fermi velocity can be estimated from transport measurements or high-resolution photoemission experiments. Thus, in principle, the quantities α, v_F, and v_2 can be measured. It is of course of great interest to establish how close α is to 1, or whether v_2 is reduced with respect to that extrapolated from the energy gap at (0, π) measured by photoemission or tunneling. Crude estimates made in Ref. [27] suggest that α is consistent with 1, but a more precise measurement is clearly called for.
Finally, we comment on finite-temperature behavior. In addition to the reduction of the superfluid density due to thermal excitation of qp's, we expect x_pc to decrease with increasing T, leading to a reduction of v_2: v_2(T) = [x_pc(T)/x_pc(0)] v_2(0). As T reaches T_c, x_pc = v_2 = 0, and the nodes of E^(sc) become the "FS segment" while the spin gap near (0, π) remains finite. We see that x_pc plays the role of the order parameter of the transition, so that we may expect the temperature dependence of x_pc to be described by a Ginzburg-Landau theory with X-Y symmetry near the transition.
V. CONCLUSIONS AND OPEN ISSUES
We believe the SU(2) slave boson theory captures the basic physics of the underdoped cuprates. The many anomalous properties associated with the spin gap formation are explained in a natural way. Superconductivity with d-wave pairing symmetry emerges naturally, with quasiparticle excitations which are remarkably similar to those of BCS theory. However, the microscopic mechanism is completely different, in that the SC state is not formed out of pairing of normal state quasiparticles via exchange of some effective interaction. Instead, it is the coherence of the charge degrees of freedom which converts the spin gap phase to the SC state. Many open issues remain, however, and we list a few of them below.

1) Our discussion of the electron spectrum in the normal state is still at a crude level. We treat the bosons as "nearly" Bose condensed, with a relatively narrow spectral function. Thus we do not have a theory of the lineshape. One of the most important features of the photoemission experiments is that a narrow qp peak forms out of a broad lineshape as the SC state develops out of the normal state. We are unable to describe this evolution at present. A narrow spectral line is very natural in the single boson condensation scenario, but not as obvious in the boson pair condensation scenario. Thus we have not achieved a quantitative description of the recombination of spin and charge to form quasiparticles in the superconducting state. A related issue is that in our theory the spin gap state and the SC state share the same energy scale, i.e., the energy gap Δ_0 at (0, π). Empirically Δ_0 ≈ J/3, in rough agreement with the gap calculated in mean field theory. Recently, Shen and collaborators [39] have focused on a higher energy scale (of order J to 2J) which characterizes the location of the peak in the ARPES spectrum, and argued that it is the peak energy which is smoothly connected with the insulator at half-filling. In this scenario one would need a separate mechanism to produce the leading-edge shift and the SC energy gap. In our scenario we have only one energy scale Δ_0, and the burden upon us is to show that the lineshape may exhibit a peak at a high energy of order J.
2) We do not have a satisfactory theory of transport in the normal state. This is related to the lack of understanding of how the spin-charge-separated state in the normal state evolves into the well-defined qp in the SC state. We can only provide a phenomenological picture of gradual binding between holons and spinons to form physical holes as the temperature is decreased. [40]

3) The mean field theory underestimates the spin fluctuation near (π, π). While the inclusion of gauge fluctuations leads to a satisfactory fit of the specific heat and uniform spin susceptibility, [33] it is expected [41] that gauge fluctuations will strongly enhance the spin fluctuation near (π, π); detailed calculations, however, have not been carried out. This strong enhancement is needed to explain the strong peak in the Cu NMR relaxation at a temperature T* which is low compared with the spin gap energy Δ_0.
As an intermediate step, we recently carried out an RPA calculation of the spin fluctuation near (π, π). [42] By tuning a single parameter (the effective exchange coupling in RPA), we are able to account for the resonance peak seen in neutron scattering in the SC state and its evolution with reduced doping. [43] However, at present we cannot explain the neutron scattering and the copper NMR within the same RPA theory.
The work reviewed in this paper has been done in close collaboration with X.G. Wen and I have benefitted from collaboration over the years with N. Nagaosa, T.K. Ng, Derek K.K. Lee, Don H. Kim, and J. Brinckmann. This work was supported by NSF through the MRSEC program DMR 98-08941.
|
2019-04-14T02:18:42.263Z
|
1998-12-14T00:00:00.000
|
{
"year": 1998,
"sha1": "22b86c39583b4ea47bcd3a2986de5d010b9d3b72",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "4ff4dff943a7dc505256c3ac587e3ac406337e2c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
235827409
|
pes2o/s2orc
|
v3-fos-license
|
Qualitative and quantitative analyses of aconite alkaloids in Aconiti kusnezoffii Radix, and NO inhibitory activity evaluation of the alkaloid extracts
Aconiti kusnezoffii Radix is a traditional Chinese medicine (TCM) (commonly called "Caowu") (National Pharmacopoeia Committee, 2015) and Mongolian medicine (commonly called "Benga") (Ao & Buhe, 2013). In both traditional medicinal systems, Aconiti kusnezoffii Radix is commonly used owing to its analgesic (Liu et al., 2013) and anti-inflammatory effects (Li et al., 2019). The main active constituents of Aconitum spp. are aconite alkaloids (AAs), which have a C18-, C19-, or C20-diterpenoid skeleton (Wang et al., 2009; Wang et al., 2010; Wang & Liang, 2002), and lipo-alkaloids (Borcsa et al., 2011), which are a type of C19-norditerpenoid alkaloids; these AAs are both active and toxic. The effective therapeutic doses of AAs are close to their toxic doses (Wang et al., 2018b). During processing, these alkaloids retain their analgesic properties, although their toxicity is reduced by approximately 100-fold (Zhang et al., 2015; Zhi et al., 2020).
To ensure the safety and effectiveness of AAs in clinical application, it is imperative to develop convenient, quick, and effective methods to identify and characterize AAs in raw and processed Aconiti kusnezoffii Radix. Liquid chromatography (LC) offers good separation capability, and high-resolution mass spectrometry (MS) offers strong analysis and identification capabilities. MS/MS can provide extensive fragment-ion information, which is of significance for the identification of complex and same-type compounds. LC-MS/MS can therefore effectively separate and help identify the chemical components of TCM (Pang et al., 2016; Song et al., 2015).
Currently, there is a lack of comprehensive qualitative and quantitative analyses of the chemical constituents of Aconiti kusnezoffii Radix. Therefore, we used an HPLC-ESI MS/MS and HPLC-DAD approach to investigate the types and content of components in raw and processed Aconiti kusnezoffii Radix. We then treated RAW264.7 cells with lipopolysaccharide (LPS) to establish a cell inflammatory reaction model (Wang et al., 2018a; Wang et al., 2016), and the release of nitric oxide (NO) was measured after adding alkaloid extracts of Aconiti kusnezoffii Radix (AECs) at different concentrations. This model can be tentatively used to evaluate the anti-inflammatory mechanism of AECs.
Sample preparation and extraction
Stock solutions of the six alkaloid reference standards were prepared in DMSO and then diluted with 70% methanol to obtain the following concentrations: 5.85 mg/mL BAC, 2.18 mg/mL BMA, 2.24 mg/mL BHA, 2.79 mg/mL AC, 1.56 mg/mL MA, and 3.93 mg/mL HA. The stock solutions were stored at 4 °C until use.
All Aconiti kusnezoffii Radix samples were air-dried, ground, and sieved (60 mesh) to obtain a homogeneous powder. Five volumes of 70% methanol (mL/g) were added to 10 g of the powder, and the samples were soaked for 1 h and extracted for 30 min using an ultrasonicator (250 W, 40 kHz). After cooling the sample to 25 °C, the lost weight was replenished with 70% methanol. Finally, each extract was filtered through a 0.22-μm nylon membrane before the HPLC-DAD and UHPLC Q-Exactive Orbitrap MS/MS analyses.
The five raw Aconiti kusnezoffii Radix samples were powdered to a homogeneous size using a mill, sieved (60 mesh), and further dried at 60 °C in an oven for 6 h to a constant weight. Each powdered sample (40 g) was mixed and macerated with 1000 mL of 70% methanol for 24 h, and the extracted solution was filtered. The first filtered residue was extracted with 400 mL of 70% methanol for 30 min by ultrasonic treatment; the extracted solution was filtered, and the residue was extracted with 300 mL of 70% methanol by Soxhlet extraction for 2 h. The three filtrates were combined, the methanol was recovered by rotary evaporation, and the extract was cooled at 4 °C for 24 h. Thereafter, 250 mL of water was added to the extract at 25 °C, and the pH of the solution was adjusted to 1-2 with dilute hydrochloric acid and then to 9-10 with ammonia water. This pH-adjusted solution was extracted three times with an equal volume of dichloromethane, yielding the total alkaloids (AEC, 3.8 g) and the aqueous fraction (non-AEC, 32.8 g).
Instrumentation and operation conditions
A UHPLC system, including an on-line degasser, a column oven, an autosampler, and a diode array detector (Thermo Fisher Scientific, Bremen, Germany), was used for quantitative data acquisition. A UHPLC system (Ultimate 3000) coupled with a Q-Exactive Orbitrap MS (Thermo Fisher Scientific) was used for qualitative data acquisition.
The qualitative and quantitative methods are shown in Table 1. First, qualitative analysis was performed to identify the alkaloids in raw and processed Aconiti kusnezoffii Radix. A linear gradient elution program was developed as follows: 0-8 min, 5% B; 8-38 min, 5%-95% B; 38-45 min, 95% B. Second, quantitative analysis was performed to simultaneously assess the six alkaloids in raw and processed Aconiti kusnezoffii Radix. The eluting conditions were optimized as follows: 0-13 min, 5%-25% B; 13-30 min, 25%-40% B. An ultraviolet absorption wavelength of 230 nm was used to determine these six alkaloids. At the end of each run, the initial composition of the mobile phase (5% B) was allowed to run for 10 min to re-equilibrate the entire system.
Optimization of the MS system
A Q-Exactive Orbitrap MS/MS system equipped with heated ESI was used. As the alkaloids were determined in the positive ion mode, the mass axis of the instrument was calibrated with a positive ion mass calibration solution before each experiment. The optimum operating conditions are shown in Table 2. All parameters of the HPLC-MS system were controlled using Thermo Scientific TraceFinder software version 3.2 (Thermo Scientific).
Cell culture and viability

RAW264.7 cells (Shanghai Institute of Cell Biology, Chinese Academy of Sciences) were cultured in DMEM containing 10% heat-inactivated FBS, 100 U/mL penicillin, and 100 µg/mL streptomycin at 37 °C with 5% CO2. Logarithmic-phase cells were used in the follow-up experiments.
Step 1: Cell viability was evaluated using RAW264.7 cells plated at a concentration of 10^5 cells/mL in a 96-well plate.
Step 2: After the cells attached to the wall, they were treated with 0, 25, 50, 100, 200, or 400 µg/mL AECs and non-AECs. Simultaneously, a normal control group (without AECs and non-AECs) was established.
Step 3: The experiment was performed in triplicate. After 24 h of culture, freshly prepared 5 mg/mL MTT solution (10 µL) was added to each well, and the plate was incubated at 37 °C for 4 h.
Step 4: At the end of the cell culture, the supernatant from each well was removed and 150 µL of DMSO was added to each well.
Step 5: For the cell viability analysis, the absorbance (OD 570 nm) of the sample in each well was measured using an enzyme-labeled instrument after 10 min of shaking.
NO assay
The Griess reaction was used to assess NO production in the cells. RAW264.7 cells were seeded in 96-well plates and allowed to adhere to the wall for 24 h. The experimental and treatment groups were as follows: (1) the cells in the blank control group were cultured for 24 h without any treatment; (2) the cells in the model control group were treated with 1 μg/mL LPS for 24 h; and (3) the cells in the experimental groups were treated with 1 μg/mL LPS and AEC or non-AEC at different concentrations (6.25, 12.5, 25, and 50 µg/mL) for 4 h. Subsequently, steps 3, 4, and 5 of the cell culture and viability procedure (section 3.5.1) were used.
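As a minimal sketch of how such Griess readouts are commonly reduced to a percent inhibition, the function below assumes the usual normalization against the LPS model and blank controls; the formula, variable names, and numbers are illustrative assumptions, not values or conventions stated in this study.

```python
import numpy as np

def no_inhibition_percent(od_blank, od_model, od_treated):
    """Assumed convention: percent inhibition of LPS-induced NO release,
    100 * (OD_model - OD_treated) / (OD_model - OD_blank),
    with OD the Griess-reaction absorbance averaged over replicates."""
    b, m, t = (np.mean(v) for v in (od_blank, od_model, od_treated))
    return 100.0 * (m - t) / (m - b)

# Hypothetical triplicate absorbances for one AEC concentration:
print(no_inhibition_percent(od_blank=[0.08, 0.09, 0.08],     # untreated cells
                            od_model=[0.52, 0.55, 0.53],     # LPS only
                            od_treated=[0.30, 0.28, 0.31]))  # LPS + AEC
```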
Qualitative analysis
Based on previous and preliminary study findings (Yue et al., 2009; Liang et al., 2016; Zhang et al., 2019; Zhi et al., 2020; Zhang et al., 2016; Xu et al., 2014), the positive ion mode was found to be more suitable for the detection of alkaloids and for the stable detection of fragment ions with sufficient abundance. The normalized collision energy (NCE) mode was chosen, as there is no need to optimize the collision energy. Approximately 155 peaks were detected within 40 min in the mass spectra of the 70% methanol extracts of raw and processed Aconiti kusnezoffii Radix (Figure 2A and 2B) in the positive ion mode. The compounds were tentatively identified based on the HPLC retention time (t_R; Supplementary Materials Table S2).
The 155 compounds were identified by the elemental composition from MS, and the MS/MS fragment ion data were compared with data available in the literature, using mzCloud and ChemSpider.
The data of the identified AAs, including the t_R, elemental composition, calculated molecular mass, measured molecular mass, error (ppm), MS/MS information, and identification results, are presented in Table S2. These data were used for predicting the AAs in raw and processed Aconiti kusnezoffii Radix. The mass error of [M+H]+ was less than 5 ppm, which indicated that the measured molecular mass matched the calculated molecular mass. The reliability of the elemental composition was thus relatively high, which is a prerequisite for identification.
The linear range, linearity equation, correlation coefficient, limit of detection, and limit of quantification for the quantitative analysis of the six reference alkaloids are shown in Table 3.
The precision, repeatability, stability, and recovery of the analysis method are shown in Table 4. The findings showed that the proposed HPLC-DAD method could be used for the quantitative analysis of AAs.
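For readers reproducing Table 3-style figures of merit, a minimal sketch is given below; it assumes the common ICH-style conventions LOD = 3.3σ/S and LOQ = 10σ/S (σ: residual standard deviation of the fit, S: calibration slope), which are our assumptions rather than definitions stated in the paper, and the data points are hypothetical.

```python
import numpy as np

def calibration_merits(conc, area):
    """Least-squares calibration line with assumed ICH-style LOD/LOQ."""
    conc, area = np.asarray(conc, float), np.asarray(area, float)
    slope, intercept = np.polyfit(conc, area, 1)   # linearity equation
    pred = slope * conc + intercept
    r = np.corrcoef(conc, area)[0, 1]              # correlation coefficient
    sigma = np.std(area - pred, ddof=2)            # residual SD (n-2 dof)
    return {"slope": slope, "intercept": intercept, "r2": r**2,
            "LOD": 3.3 * sigma / slope, "LOQ": 10 * sigma / slope}

# Hypothetical 6-point series for one alkaloid (concentration vs. peak area):
print(calibration_merits([5, 10, 25, 50, 100, 200],
                         [12.1, 24.5, 60.8, 121.9, 244.2, 489.5]))
```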
Sample analysis
The proposed method was applied to quantify the aforementioned six alkaloids in raw and processed Aconiti kusnezoffii Radix from five sources (R1-R5 and P1-P5, respectively) (Table S1).
Protective effects of AEC and non-AEC on LPS-treated RAW264.7 cells
Compared with the LPS model group, the AEC-treated groups (12.5, 25, and 50 μg/mL) (Figure 3D) showed a significantly improved inflammatory response, with a concentration-dependent effect (P < 0.05). The results showed that AEC exerts good protective effects in the concentration range of 12.5-50 μg/mL. Moreover, protective effects were observed in the non-AEC groups (same concentrations) (Figure 3C) only at a concentration of 12.5 μg/mL. The NO assay clearly showed that LPS increased NO production, whereas AEC decreased NO production in LPS-treated RAW264.7 cells.
Discussion
Currently, studies on the chemical constituents of Aconiti kusnezoffii Radix are relatively simple (Zhi et al., 2020). These studies have only reported that Aconiti kusnezoffii Radix contains alkaloids and that the content of alkaloids changes before and after processing to reduce toxicity, and there has been no further research.

Table 3. Linear range, linearity equation, correlation coefficient, limit of detection (LOD), and limit of quantitation (LOQ) of the six analytes.

On this basis, our study findings can be further expanded to identify the chemical constituents of Aconiti kusnezoffii Folium, Aconiti kusnezoffii Flos, and Aconiti kusnezoffii buds, which are commonly used in Mongolian medicine and have the same origin as Aconiti kusnezoffii Radix, in order to promote the clinical use of Aconiti kusnezoffii Radix in TCM and Mongolian medicine. The qualitative and quantitative analyses of the raw and processed Aconiti kusnezoffii Radix revealed that the kind, number, and content of alkaloids were different before and after processing (see Table S2). There are 64 alkaloids present in both raw and processed Aconiti kusnezoffii Radix, 74 alkaloids only in raw Aconiti kusnezoffii Radix, and 17 alkaloids only in processed Aconiti kusnezoffii Radix. The analysis results of the six alkaloids revealed that, before processing, the content of DDAs (AC, MA, and HA) was higher than that of MDAs (BAC, BMA, and BHA).
After processing, the content of DDAs decreased whereas the content of MDAs increased. It can be inferred that some DDAs may be converted to MDAs after processing, which implies that processing Aconiti kusnezoffii Radix reduces its toxicity.
In the process of identifying the chemical constituents of Aconiti kusnezoffii Radix by HPLC-MS/MS, we found that the retention time is related to the molecular weight, and the molecular weight is closely related to the structure. The chromatographic peaks are divided into three parts: ADAs, with m/z ranging from 300 to 500 and retention times from 1 to 15 min; MDAs and DDAs, with m/z ranging from 500 to 700 and retention times from 15 to 22 min; and LDAs, with m/z ranging from 700 to 900 and retention times from 22 to 40 min (see Figure 2). The polarity of the LDAs is relatively low due to the presence of one or two fatty acid chains in the structure; thus, the LDAs are eluted last from the reversed-phase C18 column. The structures of ADAs, MDAs, and DDAs contain a large number of methoxyl and hydroxyl groups, and losses of MeOH (32 Da), H2O (18 Da), and CO (28 Da) are therefore often observed in the secondary fragment ions. However, the LDAs are different from the ADAs, MDAs, and DDAs, being formed by the combination of an alkaloid skeleton and a fatty acid chain. Therefore, the LDA molecular composition can be hypothesized following this rule.
Orbitrap MS, as a high-resolution MS, provides accurate information on the elemental composition of the molecular ion peaks, which is the basis for the identification of isomers. The presence of isomers is a common phenomenon in MS. Only a small portion of the isomers can be distinguished by t_R, polarity, references, precise molecular weight, and secondary fragment ions, and distinguishing the major part of the isomers using MS data remains a challenge. In our study, peaks with the same molecular weight occurred at 29.78 (compound 127), 31.45 (compound 128), and 33.39 min (compound 141).

Aconiti kusnezoffii Radix has the functions of dispelling wind, removing dampness, and relieving pain. Similar to Aconiti Radix, it is used to treat rheumatism and arthritis. Aconiti Radix has significant anti-inflammatory activity. To clarify whether Aconiti kusnezoffii Radix has anti-inflammatory activity and to identify the chemical components that lead to this activity, further in-depth research is required. In the analysis of the chemical constituents of Aconiti kusnezoffii Radix, we found that it contains numerous alkaloids. Therefore, it was not clear whether the alkaloid part or the non-alkaloid part exerts the anti-inflammatory effects. Here, we isolated AECs and non-AECs and examined their effects on cell survival and NO release in an LPS-induced RAW264.7 inflammatory cell model. The results showed that AEC at concentrations between 12.5 and 50 μg/mL could significantly reduce the release of NO. In the future, we will verify the anti-inflammatory activity using zebrafish in vivo.
Conclusions
Here, using the established UHPLC-Q Exactive MS/MS and HPLC-DAD methods, we identified 155 components and quantified 6 of them in raw and processed Aconiti kusnezoffii Radix. This method can be used to control the quality of Aconiti kusnezoffii Radix. Moreover, the method has the following advantages: high speed, simplicity, and low solvent consumption. Therefore, this novel method can be used for the qualitative and quantitative analyses of Aconitum products, such as Aconiti kusnezoffii Radix, Aconiti Radix, and Aconiti Lateralis Radix Praeparata. Furthermore, AEC exhibited good anti-inflammatory effects. Our findings provide a theoretical basis for the use of Aconiti kusnezoffii Radix to treat inflammatory diseases. However, further studies on the action mechanism and targets of Aconiti kusnezoffii Radix are required.
Figure 1. Chemical structure of the six alkaloid reference standards.
Figure 2. Total ion chromatograms (TIC) of the raw (A) and processed (B) Aconiti kusnezoffii Radix in the positive ionization mode.
Table 2. Ion source, full MS, and dd-MS2 parameters of the Q-Exactive Orbitrap MS/MS.
Table 1. Qualitative and quantitative conditions (column, column temperature, flow rate, injection volume, and mobile phases A and B).
Table 4. Precision, recovery, repeatability, and stability of the six analytes.
|
2021-07-15T16:47:54.296Z
|
2021-06-28T00:00:00.000
|
{
"year": 2021,
"sha1": "cb53c9b861008a12d584eb20005bbe106c814e3e",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/cta/a/ZDZ8tGdZJfyHJwZ4hwFZ4Pk/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cb53c9b861008a12d584eb20005bbe106c814e3e",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
}
|
13686330
|
pes2o/s2orc
|
v3-fos-license
|
Bang-Bang Optimal Control of Large Spin Systems: Enhancement of $^{13}$C-$^{13}$C Singlet-Order at Natural Abundance
Using a Bang-Bang optimal control (BB) technique, we transfer polarization from abundant high-$\gamma$ nuclei directly to singlet order. This approach is analogous to algorithmic cooling (AC) procedure used in quantum state purification. Specifically, we apply this method for enhancing the singlet order in a natural abundant $^{13}$C-$^{13}$C spin pair using a set of nine equivalent protons of an 11-spin system. Compared to the standard method not involving polarization transfer, we find an enhancement of singlet order by about three times. In addition, since the singlet magnetization is contributed by the faster relaxing protons, the recycle delay is halved. Thus effectively we observe a sensitivity enhancement by 4.2 times or a reduction in the overall experimental time by a factor of 18. We also discuss a possible extension of AC, known as heat-bath algorithmic cooling (HBAC).
Introduction
Not many experimental architectures allow as elaborate a control of quantum dynamics as NMR does. Several powerful RF control techniques, such as composite pulses [1], adiabatic pulses [2], and band-selective/broadband pulses [3-6], are routinely used in NMR spectroscopy. Numerical methods such as strongly modulating pulses [7], GRadient Ascent Pulse Engineering (GRAPE) [8], Krotov [9], etc., have also been used for specific purposes in spectroscopy as well as in quantum information. Here we describe an application of Bang-Bang (BB) optimal control, which utilizes a sequence of full-power RF pulses with variable phases separated by variable delays [10-12]. Generally, the numerical complexity of optimal control techniques scales rapidly with the size of the spin system, thus limiting their applications. On the other hand, BB relies on one-time matrix exponentiation to build the basic unitaries, and hence its complexity scales much more slowly; it is therefore applicable also to fairly large spin systems [11]. In this work we utilize BB control to directly transfer polarization from a set of ancillary spins to the long-lived singlet order in a spin pair.
For a pair of spin-1/2 nuclei, the antisymmetric singlet state is |S0⟩ = (|↑↓⟩ − |↓↑⟩)/√2, and the symmetric triplet states are |T+1⟩ = |↑↑⟩, |T0⟩ = (|↑↓⟩ + |↓↑⟩)/√2, and |T−1⟩ = |↓↓⟩. In NMR, the basis states are usually the spin eigenstates in the Zeeman magnetic field, and under normal conditions it is hardly possible to prepare any pure quantum state. However, it is often possible to prepare an excess population in one of the quantum states relative to the uniformly populated remaining states. Thus an excess population of the singlet state relative to uniformly populated triplet states is represented by the density operator

ρ = (1 − S) 1_4/4 + S |S0⟩⟨S0|,

where 1_4 is the four-dimensional identity operator and the scalar quantity S quantifies the singlet order [18]. Since the dominant intra-pair dipolar relaxation process does not connect subspaces of different symmetries, the singlet order often lives much longer than other non-equilibrium states, whose lifetimes are limited by the spin-lattice relaxation time constant T_1 [16,17]. In favorable cases, singlet lifetimes longer than 50 times T_1 have also been observed [27]. One way to access singlet order is to utilize the chemical shift separation (along with the J-coupling) between two spins to prepare a mixture |S0⟩⟨S0| − |T0⟩⟨T0| of singlet and triplet states. This is followed by suppression of the chemical shift to impose symmetry, achieved either by low-field switching, shuttling the sample out of the magnet [13], or by a strong RF spin-lock while retaining the high field [15]. After a desired storage period, the chemical shift separation is restored and the singlet order is converted back into observable single-quantum coherence.
Later, accessing singlet order in systems with chemical equivalence, but magnetic inequivalence with respect to a chemically equivalent ancillary spin pair, was discovered [22]. In this case, each of the chemically equivalent spin pairs exists in a singlet state at high magnetic fields without requiring an external spin-lock to impose symmetry. It was also shown that, by exploiting the higher sensitivity of the ancillary 1H-1H spin pair, one can prepare, store, and detect 13C-13C singlet order either with isotopic labeling [41] or even at natural abundance [42].
In this work, we show that using BB optimal control techniques we can directly transfer polarization from ancillary protons to enhance naturally abundant 13C-13C singlet order. This method is widely applicable to a variety of systems in which a pair of spins, with or without chemical equivalence, is coupled to a few ancillary spins.
Although the concept of polarization transfer has long been a part of NMR spectroscopy [43,44], it has been revisited in quantum information while attempting to achieve a small set of highly pure quantum bits (system qubits) at the expense of the purity of a large number of ancillary quantum bits (reset qubits). This process, known as algorithmic cooling (AC), systematically transfers entropy from the system qubits to the reset qubits [45,46]. Motivated by these concepts, we refer to the single-iteration polarization transfer as AC. Heat-bath algorithmic cooling (HBAC) is a non-unitary extension of AC that involves removal of the extra entropy from the reset qubits to an external bath, so that AC can be applied iteratively to achieve a higher purity of the system qubits [47].
The paper is organized as follows: In section 2 we describe BB optimal control in detail, along with our spin system and pulse sequence. In section 3 we describe experimental results and simulations. Finally, section 4 contains discussions and conclusions.
Bang-Bang (BB) optimal control
Consider a system in a state ρ_in that needs to be steered to a target state ρ. We discretize the time evolution into N segments, each of duration Δt. In the rotating frames of the RF carriers, let H_0 be the internal Hamiltonian of the system and

H_{k,n} = A_{k,n} (I_x^k cos φ_{k,n} + I_y^k sin φ_{k,n}) = A_{k,n} Z_{k,n} I_x^k Z_{k,n}†

be the RF Hamiltonian on the k-th channel and n-th segment. Here A_{k,n} and φ_{k,n} are the amplitudes and phases, respectively, Z_{k,n} = exp(−i φ_{k,n} I_z^k), and I_x^k, I_y^k, I_z^k are the spin operators of the k-th nuclear species. The full piecewise-continuous Hamiltonian achieves an effective unitary evolution

U = ∏_{n=1}^{N} e^{−i H_n Δt},

with H_n = H_0 + Σ_k H_{k,n} the total Hamiltonian of the n-th segment. In this work, we try to find a unitary U that prepares a target state containing a long-lived singlet component |S0⟩⟨S0| with a maximum singlet order S(0); the prepared state also contains ρ_Δ, an undesired, though unavoidable, component comprising triplet states as well as other artifact coherences. The expectation value of the singlet component, Q = ⟨S0| U ρ_in U† |S0⟩, quantifies the prepared singlet order. Therefore, by maximizing Q via BB control, we can obtain a unitary preparing a maximum singlet order. A subsequent spin-lock of duration τ rapidly damps the short-lived component ρ_Δ towards the maximally mixed state 1_4/4, such that the purified state is of the form

ρ(τ) = (1 − S(τ)) 1_4/4 + S(τ) |S0⟩⟨S0|,

where S(τ) = S(0) e^(−τ/T_S) is the singlet order decayed over the long singlet lifetime T_S. In the next section we use AC to enhance the singlet order from S to S^AC by polarization transfer from the ancillary spins.
As opposed to schemes like GRAPE [8], which use smooth RF modulations, BB control employs pulses having either zero or full RF amplitude (A_{k,n} = Ω_k or 0) but variable phases φ_{k,n} to generate arbitrary unitaries. A flowchart describing the various steps of BB optimal control using a genetic algorithm is shown in Fig. 1. The major advantage of BB control is that the exponentiation of the Hamiltonian to obtain the basic unitary X_k, as well as the delay unitary U_d, is a one-time process that lies outside the iterations. Matrix exponentiation is a bottleneck in conventional algorithms based on amplitude modulation, particularly for large spin systems. The unitaries corresponding to arbitrary bangs are obtained by rotating the basic operator X_k about the ẑ axis, i.e., U_{k,n} = Z_{k,n} X_k Z_{k,n}†. Here Z_{k,n} is a diagonal operator in the Zeeman basis and hence is efficiently computed during the run time of the iterations. Thus the BB method allows quantum control of large spin systems, as demonstrated in a later section. It is even more efficient in designing RF sequences with a low duty cycle that require long evolutions under the internal Hamiltonian, such as polarization transfer operations.
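To make the one-time-exponentiation argument concrete, here is a minimal NumPy sketch for a single channel acting on a two-spin system: the full-power propagator X and the delay propagator U_d are exponentiated once, and every bang of arbitrary phase φ is then generated by cheap diagonal Z-rotations as U = Z X Z†. The two-spin Hamiltonian, amplitudes, input state, and scoring below are illustrative assumptions, not the authors' 11-spin implementation.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators and their two-spin embeddings
sx = 0.5 * np.array([[0, 1], [1, 0]], complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)
Ix = np.kron(sx, I2) + np.kron(I2, sx)        # collective I_x
Iz1, Iz2 = np.kron(sz, I2), np.kron(I2, sz)
Iz = Iz1 + Iz2

# Assumed internal Hamiltonian: offsets plus weak J coupling (rad/s)
J, w1, w2 = 2*np.pi*12.7, 2*np.pi*50.0, -2*np.pi*50.0
H0 = w1*Iz1 + w2*Iz2 + J*(Iz1 @ Iz2)
dt, omega = 500e-6, 2*np.pi*250.0             # segment length, RF amplitude

# One-time exponentiations (the expensive step, done outside the search)
X  = expm(-1j * (H0 + omega*Ix) * dt)         # full-power bang along x
Ud = expm(-1j * H0 * dt)                      # zero-amplitude (delay) segment

def bang(phi):
    """Bang of phase phi via a diagonal Z-rotation: Z X Z^dagger."""
    z = np.diag(np.exp(-1j * phi * np.diag(Iz)))
    return z @ X @ z.conj().T

def propagate(phases):
    """Compose segments; a None entry denotes a delay segment."""
    U = np.eye(4, dtype=complex)
    for phi in phases:
        U = (Ud if phi is None else bang(phi)) @ U
    return U

# Toy score Q = <S0| U rho_in U^dag |S0>, the quantity a genetic
# algorithm would maximize over the phase list
s0 = np.array([0, 1, -1, 0], complex) / np.sqrt(2)
rho_in = 0.25*np.eye(4) + 0.1*Iz              # thermal-like input (assumed)
U = propagate([0.0, None, np.pi/2, None, np.pi])
Q = np.real(s0.conj() @ U @ rho_in @ U.conj().T @ s0)
print("singlet-order score Q =", Q)
```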
Spin System
To demonstrate HBAC, we use an 11-spin system including a pair of naturally abundant, weakly coupled 13C spins surrounded by nine chemically equivalent 1H spins of 1,4-bis(trimethylsilyl)butadiyne (BTMSB). The sample was prepared by dissolving 120 mg of BTMSB in 0.7 mL of CDCl3 (0.88 M). We use the protons to directly prepare enhanced 13C-13C singlet order. The molecular structure of BTMSB is shown in Fig. 2. The molecular symmetry provides twice the probability of naturally abundant 13C-13C pairs. The chemical shift difference between the two 13C spins is 2.32 ppm, the 13C1-13C2 J-coupling constant is 12.7 Hz, and the J-coupling between 13C1 and the closest equivalent protons is 2.7 Hz. The spin-lattice relaxation time constants (T1) are about 3 s, 6.5 s, and 8.2 s for 1H, 13C1, and 13C2, respectively. The effective transverse relaxation time constants (T2*) are respectively 0.3 s, 2.5 s, and 2.9 s. Here the protons in the shaded area act as ancillary spins, which provide polarization to the 13C-13C singlet order.
Pulse sequence
The pulse sequence employed for the preparation and enhancement of the 13C-13C singlet polarization is shown in Fig. 3. The initial thermal equilibrium state of the system is, up to the identity,

ρ_0 ∝ ε_C (I_z^C1 + I_z^C2) + ε_H Σ_j I_z^Hj,   (9)

where ε_C and ε_H are the carbon and proton polarizations, respectively, and ε_H/ε_C = γ_H/γ_C ≈ 4. Then a BB sequence is applied to prepare the 13C-13C singlet order, so that the reduced density operator for the carbon spins now contains a singlet component with the enhanced singlet order S^AC(0). At the end of the spin-lock of duration τ_AC, one obtains a high-quality singlet state with the singlet order S^AC(τ_AC). Suppose the 1H spins have a much shorter T_1 relaxation time constant compared to the lifetime of the singlet (T_S). Then, during the spin-lock duration, the 1H spins regain polarization by spin-lattice relaxation and are available for further polarization transfer to the 13C-13C singlet state. In our pulse sequence this is achieved by another BB pulse. This process (Fig. 3), known as HBAC, can be iterated to further enhance the singlet order in favorable systems. At the end of m HBAC iterations we obtain a state with singlet order S^HB_m(τ_HB), where τ_HB is the duration of the final spin-lock.
Finally, we convert the singlet polarization into antiphase terms of the type I_z^C1 I_y^C2 and I_y^C1 I_z^C2; these zy and yz terms form the observable single-quantum coherences of the 13C spins. In the following section we describe the experimental results and numerical analysis.
Experimental results and numerical analysis
All experiments were performed using a 9.4 T (400 MHz) Bruker NMR spectrometer at an ambient temperature of 298 K using a standard high-resolution BBO probe. The transition-selective Gaussian pulse was of 750 ms duration. The BB pulse for AC was of 296 ms duration and that for HBAC was of 248.5 ms. Fig. 4(a) displays the 13C spectrum corresponding to the 13C-13C singlet order at natural abundance without AC, obtained with the standard sequence not involving any polarization transfer [15]. The recycle delay was set to 35 s (approximately five times the T_1 of the carbons) and a total of 512 scans were recorded. Although the characteristic signature of the singlet state is visible in terms of the antiphase magnetizations, the signal-to-noise ratio is rather poor. Fig. 4(b) displays the 13C spectrum corresponding to the 13C-13C singlet order with AC, again recorded with 512 scans. Since the polarization is mainly contributed by the 1H spins, we need a recycle delay of only 15 s (approximately five times the T_1 of the protons), and accordingly required only half the experimental time of the experiment without AC. Both spectra in Fig. 4 were recorded with the same spin-lock duration τ_AC of 10 s. However, the signal-to-noise ratio with AC is about twice that of the spectrum without AC. The estimated enhancement of the singlet order with AC compared to that without AC, i.e., S^AC/S, is about 3. The enhanced singlet order allows us to conveniently monitor its decay versus the spin-lock duration τ_AC. The results shown in Fig. 5 indicate a singlet decay constant T_S of about 25.9 s. Thus, the singlet order is approximately 3 to 4 times longer lived than the T_1 values of the carbons.
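A decay constant like the T_S quoted above is typically obtained from a single-exponential fit of the singlet signal against the spin-lock duration; a minimal sketch with made-up data points (chosen near T_S ≈ 26 s) follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def singlet_decay(tau, s0, T_S):
    """Single-exponential singlet-order decay S(tau) = S0 * exp(-tau/T_S)."""
    return s0 * np.exp(-tau / T_S)

# Hypothetical normalized signal vs. spin-lock duration tau (s)
tau = np.array([2, 5, 10, 15, 20, 30, 40], float)
sig = np.array([0.93, 0.82, 0.68, 0.56, 0.46, 0.31, 0.21])

(p_s0, p_TS), _ = curve_fit(singlet_decay, tau, sig, p0=(1.0, 20.0))
print(f"fitted T_S = {p_TS:.1f} s")   # ~26 s for these made-up points
```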
In this particular spin system, we did not observe any advantage of HBAC over AC. HBAC is suitable for systems with fast-relaxing ancillary spins and very slowly relaxing system spins [48]. In such a system, the protons recover their magnetization (after AC) much faster than the decay of the singlet state, so that further polarization transfer can be carried out. In our system, the T_1-to-T_S contrast was insufficient to observe this effect.
We now numerically analyze the BB sequences to understand the dynamics of the singlet-order enhancement. Fig. 6(a) shows the BB profile of the AC pulse, and Fig. 6(b) displays the evolution of the singlet order as a function of time, starting from the state ρ_0 of Eq. (9). Here the time discretization was done with Δt = 500 µs and RF amplitude Ω_C/(2π) = 250 Hz, so that each bang corresponds to a 45° nutation.
At the end of the AC sequence the singlet enhancement factor reaches a maximum value of 4. Experimentally, however, we achieved an enhancement of about 3, presumably due to RF inhomogeneity, hardware non-linearity, and relaxation effects.
Discussions and conclusions
Sophisticated quantum control techniques have recently been used both in spectroscopy and in quantum information to achieve complex and precise spin dynamics [7,8,10,11,49-51]. The challenge in many such techniques is the numerical complexity involved in evaluating and optimizing propagators of large spin systems. In this regard, the Bang-Bang (BB) quantum control technique offers a unique advantage, since it only needs a one-time evaluation of the basic propagators by matrix exponentiation. Therefore, we can synthesize BB controls for larger spin systems. Here we have described the various steps of the BB control technique using a flowchart.
In this work, we achieve quantum control of an 11-spin system by transferring polarization from nine ancillary spins into the singlet order of a spin pair. We experimentally demonstrate this method on a naturally rare 13C-13C spin pair, occurring with a probability of 0.011%, and obtain a singlet order enhanced by a factor of 3 compared to a standard method not involving polarization transfer. Moreover, owing to the faster T_1 relaxation of the ancillary protons, the BB approach needed only half the experimental time of the latter. Thus, effectively, we gain a sensitivity enhancement of about 4.2 times, or equivalently an over 18-fold reduction in experimental time.
Exploiting the enhanced sensitivity, we investigated the decay of the singlet order under spin-lock and found it to be three to four times longer lived than the individual spin-lattice relaxation time constants.
We also investigated heat-bath algorithmic cooling (HBAC), which attempts to further enhance the singlet order by iterative transfer of polarization from the ancillary spins. HBAC is particularly suited for systems with fast-relaxing ancillary spins and slowly relaxing target spins [48]. In principle, the long-lived singlet states are ideal for storing the spin order between the iterations, during which the ancillary spins re-thermalize by giving away extra heat to their bath. With this motivation, we explored HBAC in the 11-spin system described above. However, due to the insufficient contrast between the lifetimes of the singlet order and the ancillary spins, as well as the insufficient enhancement from each iteration, we could not observe any significant advantage of the HBAC process in this system.
The methods described here can be applied to other homonuclear spin pairs, such as naturally rare 15N-15N or even naturally abundant 31P-31P pairs. We also anticipate many other interesting applications of BB control techniques in spectroscopy as well as in quantum information processing.
|
2017-06-30T05:58:03.000Z
|
2017-06-08T00:00:00.000
|
{
"year": 2017,
"sha1": "abb428b8514d2c895d36f53d771d6a2374e56f66",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.02594",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "abb428b8514d2c895d36f53d771d6a2374e56f66",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
250568373
|
pes2o/s2orc
|
v3-fos-license
|
MULTI-BODY ROPE APPROACH FOR THE FORM-FINDING OF SHAPE OPTIMIZED GRID SHELL STRUCTURES
Over the past decades, different approaches, physical and geometrical, have been implemented to identify the optimal shape of grid shells and vaults, reducing the internal stresses. As far as their organic shapes are concerned, the design of grid shell structures has inspired architects and structural engineers worldwide and in every era. The method presented here is developed and extended from its original formulation by means of a self-made code based on the dynamic equilibrium, ensured by the d'Alembert principle, of masses interconnected by rope elements in the space-time domain. The equilibrium corresponding to the optimized shape is obtained through an iterative process in which the falling masses, connected by a net, define the "catenary surface" coinciding with the best shape of the shell (the form minimizing the bending moment), according to the conditions of zero velocity and acceleration of the nodes. The implementation of the method is realized in MATLAB and set up for Python, an interpreted high-level general-purpose programming language. Through this code and its object-oriented architecture, the MRA Python code will be linked to the Grasshopper environment for direct visualization of the shapes and their fast parametrization phase.
INTRODUCTION
In recent times, new architectural requirements, such as internal distribution flexibility and the practice of free form for large-span roofing structures, have encouraged the use of groundbreaking double-curved shells and domes as a valid solution for column-free buildings [1]. To this purpose, reticulated, lattice, or grid shells give a valid option, able to offer an advanced solution joining aesthetic purposes and structural necessities in a single product. As mentioned by many authors, a grid shell is essentially a structure with a single thin layer whose thickness is very small in comparison to the main span of the roof. The grids are frequently optimized in order to reduce the bending moment inside the structural elements [1,2]. As in the case of very famous examples of the recent past, these kinds of structures are still widely used today and are characterized by double-curvature geometrical domains and parts of the roof with very high shallowness ratios. The captivating constructions of the roof of the Yas Viceroy Hotel in Abu Dhabi, built in 2009, and the recently completed Chadstone grid shell (Chadstone, Australia) are just some of the most recent examples of these kinds of structures (see Figs 1a and b). Certainly, these architectures, and particularly their shapes, were designed considering the aesthetic impact as one of the most important underlying ideas.
Different approaches, physical and mathematical, have been used to discover the shape that minimizes the internal stresses [1-5]. These structures are characterized by high technology levels in construction and by the necessity of stability analyses during the design phase: the optimized shape is very often compromised by partial or global collapse due to buckling, snap-through, and coupled instabilities [6-10]. The usually accepted form-finding methods (optimization techniques, genetic algorithms, etc.) lead to the description of a particular shape in which the stresses are minimized for a certain loading arrangement. The importance of finding a funicular shape for 3D shells lies in the fact that the evenly distributed gravity load contributes largely to the load to be resisted. For over 40 years, Heinz Isler used physical suspended models as the most suitable way to describe three-dimensional systems [11]. Similarly, Frei Otto, during his research activity in Stuttgart in the '70s, developed accurate physical models for the definition of the form-finding methodology (e.g., the models for the construction of the Multihalle in Mannheim). In the early 20th century, Antoni Gaudí employed hanging models in the form-finding process for the chapel of the Colonia Güell [11] and the arches of the Casa Milà. Robert Hooke recognized in the seventeenth century that tension forms could be inverted to find the shape of structural forms acting in pure compression under the same loading conditions, in a single sentence: "As hangs the flexible line, so but inverted will stand the rigid arch." [11]. However, today most commercially available structural analysis software is suited to the analysis of grid shell structures rather than to their form finding: very often, large displacements are not supported, and form finding based on the suspended shape turns out to be hardly practicable [11,12].
Shape optimization of grid shells has been carried out using different techniques, including linear software design [13] and gradient optimization [14]. At the same time, the discrete truss topology method [15], graph-based design [16], simulated annealing [17], and cut-and-branch methods [18] have been used. Moreover, genetic algorithms have recently been employed for the optimization of three-dimensional discrete systems, such as spatial structures, planar structures, and geodetic domes [19]. Multi-objective optimization schemes have been developed by Winslow for free-form grid shells constituted by elements with variable orientation [20]. At the same time, a coupled form-finding and grid optimization has been proposed by Richardson et al. [2]. Form-finding approaches such as the force density method [21] and dynamic relaxation (DR) [22] have been introduced for the analysis of lightweight configurations. Among these last kinds of systems, Kilian and Ochsendorf [11] proposed a shape-finding tool for statically determinate systems based on a particle-spring model. At the same time, Block and Ochsendorf proposed the thrust network analysis to establish the shape of pure-compression systems, in particular for masonry structures [23]. Recently, in addition to the overall grid shell form, the selection of the grid type has also come to be considered an important key point. As reported by Richardson et al. [1], the grid configuration generated by computer-aided design software is transposed to a static layout. Triangulated grids are the most basic and intuitive means of configuring the grid on a curved surface. However, this grid is not necessarily the most efficient choice for a given form: triangulated grids tend to be more expensive [2], since not all elements are essential for stability. Quadrangular grid configurations with planar faces are a good substitute for triangulated grids. Adriaenssens et al. [2] used a strain-energy origami approach to enforce planar face constraints in the form-finding of an irregularly configured grid shell, to achieve ideal planarity of the faces. In this context, it is also necessary to distinguish between solutions in which the selected mesh is shaped by triangular units formed by elements of dissimilar length but with the same section, and configurations in which the mesh is instead quadrangular and the elements forming the diagonal layer are either absent (pure quadrangular mesh), as in the case of the Mannheim Multihalle (1975), or belong to another hierarchy providing the bracing effect, as in the case of the courtyard roof of the Museum of Hamburg History (1989) [1].
In the present paper, different shapes are obtained by the dynamic study of a hanging grid formed by free masses connected by flexible ropes with a certain slack coefficient (sc). In this case, any kind of load can be assumed as the input for the step-by-step analysis, and both 2D and 3D systems can be considered. With this approach, named the multi-body rope approach (MRA), by solving the mathematical model describing the whole system it is possible to achieve the equilibrium configuration of the net of masses [24,25]. Originally, due to the large number of variables of the model, the authors adopted a numerical approach, solving the scheme with a multi-body numerical code using the Runge-Kutta solution method. In this way, it is possible to define the configuration of the structure as the upturned model corresponding to the last step (the equilibrium step). In addition, for the case of roofs with a very large number of nodes, a calculation procedure is presented here; it is based on the proposed dynamic model and provides, in the preliminary phase, a geometric model built using NURBS surfaces based on Bézier curves [26,27]. This combined approach allows good results to be obtained at a lower cost in terms of computing time compared to simulations for complex forms in which the form finding starts from a grid whose initial conditions are very far from the optimal shape. The MRA approach is used here to define the shape of three circular grid shells by varying the sc. The implementation of the method is realized in MATLAB and in Python, an interpreted high-level general-purpose programming language. The adopted design philosophy emphasizes code readability and portability with respect to the traditional model realized in Visual Nastran 4D. Thanks to this code and its object-oriented architecture, the MRA Python code will be linked to the Grasshopper environment for direct visualization of the shapes and their fast parametrization.
MRA APPROACH FOR THE FORM-FINDING
As mentioned before, different approaches have been examined in recent years in order to arrive at target shapes for grid shell structures. Among these, very interesting outcomes have been obtained with particle-spring models, consisting of particles linked by rotational and translational springs elongating during the forming phases [11,12]. In these models the self-weight of the nodes and the load of the rods are concentrated in the nodes (elements). In the present paper, the proposed model considers real ropes in order to simulate the part of the hanging net creating the suspended shape. The ropes are characterized by different slack coefficients, permitting more or less curvilinear final shapes [24,25]. The main difference between the particle-spring model (PSM), the dynamic relaxation model (DRM), and the MRA (multi-body rope approach) consists in the system of forces acting on the nodes. In the first two cases, the forces due to the linear spring stiffness (ku) or the non-linear one (ku^n), and the bending due to the rotational contribution (k_r θ), are considered together with the external load to give the resultant at each node, see Eqs. (1) and (2). In the method offered here, the connection (constraint) between two nodes is instead realized by a proper rope. The rope exerts no reaction at all when the distance x between the endpoints, starting from an initial distance l_i, is less than the prefixed rope length l_f; when the distance between the nodes equals the rope length, forces equal in magnitude and opposite in direction are applied at the endpoints, while no bending is transmitted, so that no rotational degree of freedom is restrained, see Eqs. (3) and (4). From this point of view, the proposed method proved to be consistent with the experimental models and with form finding. The static configuration of the hanging net can be obtained by an iterative technique applied to the grid, using the equilibrium equations of the nodes in three-dimensional space in the time domain [24,25]. Between one stage and the next, the node coordinates correspond to a time step, and their differences characterize the velocities of the falling masses (nodes) and their accelerations. In order to guarantee the convergence of the iterative process, using an actual time step, it is possible to numerically simulate the falling of the net through the dynamic equilibrium with inertial actions. In Figure 1c a generic node "i" of a grid with quadrilateral mesh is reported. The node is a generic internal node adjacent to four other nodes, and four rope elements (a, b, c, d) converge at it. The node is identified by the coordinates x_i, y_i, z_i, expressed with respect to the Cartesian space. The elements connect node i to the four adjacent nodes j, k, l, m. At node i, a generic load p_i (components p_ix, p_iy, p_iz) represents the external load and the self-load due to the nodal mass m_i. In the system of equilibrium equations, the inertial and dissipative actions are taken into account as proportional to the acceleration and the velocity of each node of the suspended model, respectively.

Figure 1. (b) Suspended model for the 3D definition of the catenary surface; (c) elementary portion of the grid: the node at which the rope elements converge (node i), connected to four adjacent nodes [25].

The equilibrium of node i, referring to Figure 1, is the following:

F_e,i + F_I,i = R_i   (5)

where the summation R_i represents the resultant at node i (a generic node of the net).
F_I represents the effects of the inertial force (F'), whose modulus equals the product of the nodal mass and the amplitude of the acceleration vector and whose direction is opposite to the acceleration, and of the dissipative force (F''), assumed equal to a constant times the velocity vector and directed opposite to the velocity:

F' = -m_i·a_i, F'' = -c·v_i

The contribution F_ei is constituted by S_a, S_b, S_c and S_d, the resultants along the ropes a, b, c and d (see Fig. 1c), and by the external loads. In this way it is possible to take as initial position of the grid nodes a configuration even very far from the final balance (final suspended shape). The convergence of the system is indeed guaranteed by the convergence of the iterative process, which reproduces the physical process of the three-dimensional suspended grid settling into place. The non-linear system of equations (6), (7) and (8), the dynamic balance written for the x, y and z components of each node, is a system of non-linear differential equations to be solved by numerical methods in the time domain. According to this system, the solutions are found from the dynamic balance equations by a step-by-step analysis [25]:

m_i·(d²x_i/dt²) + c·(dx_i/dt) = F_ei,x (6)
m_i·(d²y_i/dt²) + c·(dy_i/dt) = F_ei,y (7)
m_i·(d²z_i/dt²) + c·(dz_i/dt) = F_ei,z (8)

The contribution of velocity and acceleration of each node can be expressed by finite differences between successive time steps:

v_i(t+Δt) = [u_i(t+Δt) - u_i(t)] / Δt (9)
a_i(t+Δt) = [v_i(t+Δt) - v_i(t)] / Δt (10)

The main conditions for creating the model through the MRA approach are an appropriate level of hypostaticity of the suspended configuration at the initial step, and the constraint typology defined for the rope elements. For the first condition it is sufficient that the number of degrees of freedom (D.o.F) of the three-dimensional system be greater than the number of degrees of three-dimensional constraint of the whole system (D.o.C). In particular, the number of D.o.C can be defined as:

D.o.C = n_A + n_B + n_C1 + n_C2 (11)

where n_A, n_B, n_C1 and n_C2 are the numbers of nodes in the mesh characterized by each constraint level reported in Fig. 2 (n_t is the total number of nodes). At the same time, the number of degrees of freedom can be defined as:

D.o.F = 3·n_t (12)
The final condition of true equilibrium is considered fulfilled when, for each component of the system, the velocity and the acceleration tend to zero or, in any case, remain ≤ ε, with ε a negligible tolerance. The form-finding thus leads to the definition of a particular shape in which the stresses are minimized for a given loading configuration. Conversely, the same structural scheme of shells and arches may be affected by instability problems, even up to collapse, when the boundary conditions (loading and constraints) change or when defects emerge [28][29][30][31][32][33][34][35][36][37][38][39][40][41].
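To make the iterative scheme concrete, the following is a minimal Python sketch of the MRA relaxation described by Eqs. (3)-(10): ropes act as unilateral axial constraints, self-weight is lumped at the nodes, and the net falls under dynamic equilibrium with inertial and dissipative actions until velocities and accelerations drop below a tolerance. The paper's own implementation is in MATLAB and is not reproduced here; the taut-rope penalty stiffness, the damping constant and the time step below are illustrative assumptions, not values from the paper.

import numpy as np

def rope_forces(X, ropes, lf, k_rope=1e4):
    """Axial forces from taut ropes only (unilateral constraint, Eqs. 3-4)."""
    F = np.zeros_like(X)
    for i, j in ropes:
        d = X[j] - X[i]
        dist = np.linalg.norm(d)
        if dist > lf:                       # rope is taut: equal and opposite pulls
            s = k_rope * (dist - lf) * d / dist
            F[i] += s
            F[j] -= s
    return F

def mra_form_finding(X, ropes, fixed, lf, m=1.0, c=5.0, dt=1e-3,
                     g=9.81, tol=1e-6, max_steps=200000):
    """Step-by-step dynamic relaxation of the falling net (Eqs. 5-10).
    X: (n, 3) node coordinates; ropes: (i, j) pairs; fixed: suspension nodes."""
    V = np.zeros_like(X)
    for _ in range(max_steps):
        F = rope_forces(X, ropes, lf)
        F[:, 2] -= m * g                    # self-weight concentrated at the nodes
        F -= c * V                          # dissipative force proportional to velocity
        A = F / m                           # acceleration from the nodal resultant
        A[fixed] = 0.0                      # suspension points do not move
        V += A * dt
        X += V * dt
        V[fixed] = 0.0
        if np.abs(V).max() < tol and np.abs(A).max() < tol:
            break                           # velocities and accelerations below tolerance
    return X

A stiff penalty force stands in here for the exact taut-rope condition of Eq. (4); the published code may enforce the constraint differently.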
APPLICATION OF THE MRA WITH LENGTH CONTROL
Within the framework of the developed code, a specific effort was made to explore the problems related to the creation of a shape through a form-finding process able to ensure the use of rods (ropes) of the same length. The goal is to search for shapes that are optimized for force distribution (bending moment minimization) and that, at the same time, consist of a system, a layer (mesh), allowing the largest possible number of rods of the same length. This condition is assumed to be a key concept in the design and construction of shells marked by extremely free and complex shapes. The use of free forms for roofs and shells of increasingly large span is, in fact, widespread; examples are the roof of the shopping centre at Chadstone in Australia and the project for the roof of the shopping centre Pompeii Maximall in Italy. In the adopted form-finding process, however, the final configuration is the result of several parameters: the initial slack coefficient, the shape of the edges (edge beams or suspension points), the number of nodes in the initial mesh and their initial distance, and, last but not least, the constraint pattern assumed for the definition of the characteristics of the rope. As mentioned before, the developed code aims to solve problems related to the use of elements of equal length. In Figure 3 the case of a mesh composed of 9×4 elements with 6 suspension points has been simulated. Later it was possible to process patterns with a much larger number of elements, characterized by many edge frames, in the search for very complex configurations of the final shape (see Fig. 4).
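As a minimal illustration of the length-control objective just described, one might postprocess the node coordinates produced by the relaxation sketched above and score a candidate mesh by its share of equal-length rods; the function names and the default tolerance are assumptions of this sketch, not part of the published code.

import numpy as np

def rod_lengths(X, ropes):
    """Lengths of all rods (ropes) in the form-found configuration."""
    return np.array([np.linalg.norm(X[j] - X[i]) for i, j in ropes])

def count_equal_rods(lengths, target, tol=0.01):
    """Number of rods within a relative tolerance (default 1%) of the target length."""
    return int(np.sum(np.abs(lengths - target) <= tol * target))

# Example: the share of rods matching the mean length can serve as a score
# when comparing meshes generated with different slack coefficients.
# lengths = rod_lengths(X, ropes)
# score = count_equal_rods(lengths, target=lengths.mean()) / len(lengths)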
CONCLUSIONS
The developed code offers a solution for the structural form-finding of shells, in which the equilibrium corresponding to the optimized shape is obtained through an iterative process of falling masses connected by a net, so as to define the "catenary surface" coinciding with the best shape of the shell (the form minimizing the bending moment). The method is implemented in MATLAB and arranged for porting to Python, an interpreted, high-level, general-purpose programming language. The adopted design philosophy emphasizes code readability with respect to the traditional model realized in Visual Nastran 4D. Thanks to this code and its object-oriented architecture, the MRA Python code will be linked to the Grasshopper environment for the direct visualization of the shapes and their fast parametrization. Moreover, the code was developed to explore the problems related to the creation of a shape through a form-finding process able to ensure the use of rods (ropes) of the same length. The search for shapes that are optimized for force distribution (bending moment minimization) and that, at the same time, consist of a system allowing the largest possible number of rods of the same length seems to be a key concept in the design and construction of shells characterized by large spans and extremely free, complex shapes.
Benefits and Conflicts: A Systematic Review of Dog Park Design and Management Strategies
Simple Summary: Dog parks contribute physical and social benefits for both canines and their owners, especially during and since the COVID-19 pandemic. However, dogs in public places can create various conflicts. Growing numbers of scholars have explored strategies for effective park design and management. This systematic study synthesizes and analyzes the benefits, conflicts, and strategies for the design and management of dog parks according to the PRISMA guidelines. Based on the summary of conflicts between canines, humans, and their environment, we present design and management guidance for dog parks to effectively mitigate these conflicts while enhancing the benefits of off-leash areas. While this study promotes a sustainable and healthy coexistence of canines and residents of built environments through appropriate design and management strategies, several research and practice gaps have been identified from the results, such as the dearth of experimental evidence and the limitations of the physical benefits of dog parks. These research gaps provide opportunities for experts to address in the future.

Abstract: Dog ownership and dog walking bring various health benefits for urban dwellers, especially since the COVID-19 pandemic, but also trigger a number of controversies. Dog parks have become increasingly significant public resources during the pandemic for supporting these benefits while facing intense conflicts. To develop effective dog parks in urban settings, growing numbers of scholars have provided insights into the design and management strategies for addressing the benefits and conflicts. The objective of this study is to synthesize and analyze various aspects of dog park design and management and to assess identified strategies for enhancing their benefits while mitigating their drawbacks. Following the PRISMA guidelines, a systematic study was conducted to synthesize the benefits, conflicts, and management strategies of dog parks, supported by Citespace. Benefits and conflicts in dog park design and management were synthesized and organized according to their frequency of occurrence and the statistical results, and existing design and management strategies were analyzed and assessed. Through this systematic study, we discovered the need to obtain more experimental evidence on effective dog park design and management to enhance their benefits while mitigating their sources of conflict, as well as limitations in the intensity of park visitors' physical activity in off-leash areas. Guidelines for the design and management strategies of effective dog parks were formulated to enhance their benefits while alleviating conflicts in the future development of sustainable dog parks that promote healthy relationships between canines and residents in urban built environments.
Dog Ownership and Its Impacts
Dog ownership accounts for a significant proportion of households across countries [1]. A high rate of dog ownership provides various benefits, including increased physical activity [2,3], social and mental health benefits [4,5], and reduced cardiovascular risk [6], among others.
Aspects of Dog Parks
Many scholars have asserted that as built environmental resources, dog parks tend to strengthen a myriad of positive impacts of dog ownership [26][27][28]. The primary benefits of dog parks can be generalized into physical and social dimensions. Many studies have acknowledged that a nearby dog park can encourage physical activity, through dog walking and play, which consequently contributes to human and canine physical health [22,23,[29][30][31]. Moreover, dog parks provide a space for dogs and their owners to meet and become acquainted with each other, which enhances social interactions for both individuals and their dogs [26,32]. Socializing in dog parks can result in greater positive feelings towards the neighborhood, enhancing the sense of community and social capital [18,[22][23][24][32][33][34]. Other related benefits of dog parks include reduced aggressiveness of dogs, resulting in better controlled dogs [21], and reduced criminal activity [23]. These benefits should be advocated through effective design strategies for dog parks.
Although dog parks provide benefits for both individuals and their dogs, considerable opposition in the design process should not be overlooked. Some issues caused by domestic dogs are often aggravated in dog parks because of the concentrated gathering of dogs. Some studies indicated negative impacts of dog waste on plants, causing soil erosion, unpleasant odors, and the transmission of diseases [23,35]. Canine aggression, including dog fights and bites, can be a severe phenomenon in dog parks and may result in injuries and controversies. However, it remains questionable whether the environmental and health issues related to humans and their dogs can be addressed in the design of dog parks [36].
The design of the built environment can either facilitate or hinder activities such as dog walking [2]. In order to strengthen related activities within dog parks, an increasing number of design guidelines and strategies have been developed for researchers and practitioners. With the advancement of research, most recent studies indicate that design strategies should consider the benefits and problems of dog parks beforehand [21]. Systematic studies have also suggested that the primary goal of dog park design should be to enhance their benefits while reducing their conflicts in place [23].
Research Objective
Dog parks are constructed to provide opportunities for individuals and their dogs to achieve health benefits while mitigating the conflicts between people, animals, and the environment. Design strategies for dog parks should target strengthening these benefits while mitigating their problems. Even though emerging research suggests that dog park design strategies should be formulated in terms of their benefits and conflicts, no study has investigated whether the existing design strategies match the identified benefits and conflicts of dog parks, or assessed whether the design of dog parks can support the provision of health benefits while relieving these conflicts. To address these research gaps, this systematic study first synthesizes and analyzes the existing pros and cons and design/management strategies of dog parks. Based on the analysis, we provide recommendations for future planners, researchers, and decision-makers to optimize the design and research processes for dog parks. To achieve the research objective, the following detailed research questions were explored in this study: (a) What are the existing benefits of dog parks? (b) What are the conflicts that have happened in dog parks? (c) What are the design strategies for dog parks? (d) What are the management strategies for dog parks? (e) How is it possible to endorse the benefits and minimize the conflicts while determining the design and management strategies of dog parks?
Method
To answer the research questions, a systematic study was conducted following PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) [37], with the support of Citespace to investigate the knowledge structure of relevant canine-related studies.
Search Criteria and Strategy
Inclusion criteria of this study were English-language, peer-reviewed journal articles and full-text academic dissertations and theses [38] in which dog parks are mentioned or are the thematic focus. More specifically, our review focused on articles pertaining to: (1) aspects of domestic dogs and dog-related activities in urban open space, including both the benefits and problems; (2) design and/or management strategies of dog parks/off-leash areas of urban open space. Articles that did not focus on the settings of dog parks or urban off-leash areas were excluded from this review.
First, Citespace was employed to determine the knowledge structure of relevant fields by exploring the development and importance of dog park research. Second, the in-depth systematic review was conducted following PRISMA. In order to fully cover the relevant concepts of dog parks, the search keywords included: "dog park" OR "off-leash area" OR "dog walking" OR "dog ownership" OR "canine". An online search with these keywords was conducted in Google Scholar, Scopus, PubMed, and Web of Science. Results from the keyword search were scanned by title and abstract to determine the full-text articles for analysis. Additional literature that aligned with the search criteria, detected from the reference lists of the full-text articles, was also included for subsequent screening.
Data Extraction and Analysis
According to PRISMA, relevant content from the selected articles was extracted and analyzed. In addition to the benefits, problems, and design/management strategies of dog parks, we also explored research objectives, methods, and connections between the pros and cons and the design and management strategies. In order to address the core research questions of this study of whether/how the existing design/management strategies of dog parks correspond to the identified benefits and problems, we analyzed the strategies, problems, and benefits of dog parks, as well as the logic underpinning the development of design/management strategies.
Citespace Analyses
For the topics of "dog walking" or "dog ownership" and "benefit", the Citespace analysis of Web of Science (WOS) records covered a retrieval time span from 1990 to 2022. A total of 1276 journal articles were obtained. The number of studies has grown steadily since 2000, indicating that the topic is worthy of in-depth discussion and research. Figure 1 illustrates the knowledge map of the discipline distribution of the 1276 articles. As the largest circles relate to veterinary science, the majority of the retrieved studies were performed in the discipline of Veterinary Science. With the emergence of circles for other keywords over time, dog ownership and dog walking research has gradually become broadly distributed, from the veterinary disciplines focusing on pet dogs to people-centered social sciences, the environment and ecology, health, and other fields.
The keyword time zone map (Figure 2) shows the distribution of the keywords, their frequencies, and their relationships over time from 1990 to 2022, with the time slice set to every year. Prior to the year 2000, based on the keywords of walking, physical activity, and exercise, we can see that research was focused on physiology, behavior, and movement. After 2000, the associations between people and pet dogs attracted more attention as research objects. Since 2005, the occurrence of similar keywords began to increase, and dog ownership in the search terms was put forward for the first time. Keywords such as health, human health, and public health have gradually become greater areas of focus. Researchers also began to be concerned about whether dog walking and dog ownership brought other effects besides health, such as risk factors, impacts, and perception. In addition, factors affecting dog walking also attracted research attention, including impacts on the environment, as reflected in the keywords of built environment, park, and other place-based keywords.

Figure 2. Time zone chart of keywords (each circle in the figure represents a keyword that first appears in the analyzed dataset and is fixed in the first year from the left side; if the keyword appears in a later year, it is superimposed at the first occurrence).
Systematic Study Following PRISMA
The Citespace analyses uncovered impacts on dog walking and health from the perspective of physical environments. Most recent research in the canine disciplines has started to shift its focus to related environments, such as dog parks.
The subsequent systematic study was conducted following PRISMA. Figure 3 illustrates the flow of the literature identification, screening, and inclusion, which yielded a total of 55 articles of interest, of which 46 were peer-reviewed journal articles and 9 were dissertations/theses [36,[39][40][41][42][43][44][45][46]. Most of these articles were conducted in the global west, especially in the USA, Australia, and Europe. There were 16 articles proposing dog park design and/or management strategies without discussion of their pros and cons, and 13 articles focusing on the benefits and/or conflicts of dog parks. Around half of these articles (26 out of 55) explored both the pros and cons and the strategies of dog parks, but only nine of them formulated design and/or management strategies according to the benefits and conflicts. Dog park benefits, conflicts, and design and management strategies for the 55 articles are summarized in Table 1.
As some studies addressed multiple aspects, including benefits, conflicts, or design and management strategies of dog parks (Table 1), we synthesized the information in the following figures (Figures 4-7) according to their frequencies in the identified studies.
In Figure 4, the most reported benefits brought by dog parks were identified as improving the physical and social health of dogs and their owners. Some other benefits often mentioned by scholars included building a sense of community and enhancing social cohesion, public safety, and community engagement. Individual scholars indicated that the existence of dog parks in the community can increase property values [22], bring vibrancy to the community [57], and enhance the emotional attachment of dog walkers [43]. Additional benefits of dog parks, including mental/psychological health benefits, are related to social benefits, such as promoting human socialization and the enhancement of social cohesion and community engagement.

Hygiene problems related to dog waste are a serious issue in dog parks, as identified by most studies. Figure 5 also showed that incompatible uses between dog owners and non-dog owners, aggressive dogs, and the lack of regulation of dogs received additional attention from large numbers of researchers. While physical health benefits are the most identified dog park benefits for both park visitors and canines, several studies indicated that dog park visitors may engage in physical activity of only limited intensity. In addition to the negative impacts on the environment indicated by more than one study, such as damage to plant communities and soil degradation and erosion, Booth [63] was also concerned that the presence of off-leash dogs may influence wildlife in parks.
Design strategies for dog parks include the consideration of their location, size, adjacent park facilities, amenities, and esthetics ( Figure 6). Improvements in accessibility and amenities received the most attention among the proposed design strategies, such as increasing park access and the provision of garbage bins for dog waste. Several studies indicated the placement of signage for direction, adequate seating for dog owners, and monitoring programs or equipment for governing off-leash areas. Numerous studies stated that vegetation and plantings also need to be carefully considered in dog parks. Some other design strategies, such as linear-based path design [3], safety amenities, and natural reserves [64], although only discussed by individual studies, were consistent with the common strategies for dealing with identified conflicts, including lack of physical activity and hygienic issues.
In Figure 7, most research has suggested that strengthening public engagement in the decision-making process for dog park construction/management can address the conflicts between canines and humans, as well as between dog owners and non-dog owning park users. A self-enforcement policy that motivates dog-walkers to manage their dog's waste is important in the off-leash areas. Numerous researchers have raised the issue of environmental impacts caused by the canines, and this should be core to the management process of dog parks. Some other management strategies covered the necessity of having animal control officer presence, policies and penalties for noncompliance [36], and even a banned list of chronic offenders [46] in dog parks. To deal with the conflicts in dog parks, the strategies of periodically monitoring soil conditions [74] and sharing time in unfenced areas with other park users [21] were also raised.
Discussion
The results from the Citespace analyses indicated that growing research focus has been directed toward dog ownership, dog walking, and the physical environment since 2016. After the COVID-19 outbreak, growing numbers of researchers have emphasized the physical and mental health benefits brought by dog ownership and dog walking. Additionally, the ways to promote dog-related activities in urban settings have become a significant topic. In this context, dog parks or off-leash areas became important research foci, consistent with the direction of the body of research and illustrating their future research potential.
While the outcomes in the Results section clearly illustrate the identified benefits, conflicts, and design/management strategies of dog parks (research questions a, b, c, d), research question (e), "How is it possible to endorse the benefits and avoid the conflicts while determining the design and management strategies of dog parks?", still remains to be resolved. To explore this research question, we discuss the results of the systematic study from the perspectives of the following two questions.
Do the Existing Design/Management Strategies Address the Benefits and Conflicts in Dog Parks?
According to the results of the PRISMA review, although most of the related studies explored both the pros and cons of dog parks and their design/management strategies, only 9 out of 26 studies [26,44,53,55,60,62,71,74] developed strategies in response to the pros and cons of dog parks. Sixty-seven percent of these studies focused on hygiene issues, including dog waste, and solutions in dog parks [44,53,55,62,71,74]. Other studies, while discussing the pros and cons of dog parks and the associated importance of considering the benefits and conflicts for the construction and management of a dog park, did not explicitly discuss design and/or management strategies in terms of the benefits and conflicts brought by dog parks. For example, Lee et al. [22] investigated park user patterns, activities, and their satisfaction and perceptions to provide design guidelines for dog parks. Their findings aligned with subsequent research showing that dog parks contributed to the social and physical health of both park users and their dogs, and expanded the knowledge that park design should be based on an evaluation of the various aspects of dog parks [22]. However, they overlooked the connection between those aspects of dog parks, especially the identified health benefits and limitations, and the design guidelines of an effective dog park. Both experimental and systematic studies have started to propose design and management strategies for effective dog parks that enhance their benefits while mitigating their risks [23,26,60]. Prior to 2020, a dog park design and management guideline considering both the advantages and disadvantages was generated from the literature [21]; however, that literature base did not robustly examine the pros and cons of dog parks. Additionally, the strategies have not clearly indicated which benefits they can bring and/or which issues they can mitigate, so the effectiveness of the design strategies is questionable. Existing research has developed design/management strategies for dog parks that address their benefits and conflicts, but how these strategies can effectively enhance these benefits and avoid the conflicts remains a significant research question to be explored.
How Can the Design/Management Strategies Endorse the Benefits and Avoid the Conflicts of Dog Parks?
Dog parks can bring users various benefits, but their improper design or management can lead to conflicts between dogs, their owners, other park users, and the physical environment. Based on the findings of the systematic study, we summarized the design and management strategies according to the frequency and relevance of the identified benefits, conflicts, and existing strategies (Tables 2 and 3). The identified articles inferred that a linear-based design could support both human and canine walking activities [3], which aligned with the experimental evidence [77]. Significant numbers of the selected studies concluded that increased dog park access and proximity can encourage physical activity among dogs and their owners [21,22,30,33,39,46,49,59,64], because residents of local communities tended to walk to nearby dog parks more frequently. McCormack et al. [30] further discussed that a well-maintained dog park with clear signage could improve dog walking, which contributes to physical health benefits. Recent statistical analysis indicated that durable seating areas and adequate shade and shelter could facilitate social interaction among park users [78]. This reinforces the design strategies of investing in seating and shade trees to enhance social benefits in dog parks [32]. Physical and social health benefits among park visitors and their dogs are the most reported benefits brought by dog parks, but limited management strategies have been developed for maximizing these benefits. It has been demonstrated that holding events at parks, such as sports competitions, is correlated with physical activity and the gathering of people [79], so we suggest that investing in events can encourage visitors and dogs to gather and engage in physical and social activities in dog parks. Establishing this correlation specifically in the dog park setting is an opportunity for future research.
As illustrated in Table 3, a greater number of design and management strategies were proposed to address conflicts and issues occurring in dog parks, compared to those for enhancing their benefits. Hygienic issues in dog parks received the most attention from scholars. Dog feces, soil erosion, and damage to vegetation have received notable attention [53]. More concerning, health risks from disease transmission between dogs or from dogs to humans may occur without the proper design and management of gathering areas [80]. Regular monitoring programs and equipment, such as the placement of onsite surveillance cameras, can continuously supervise the condition of dog parks, including damage to vegetation and soil. Some scholars designed monitoring protocols with public engagement and mobile technology to examine hygiene issues and aggressive dogs [58,69]. However, surveillance programs should be carefully considered, as they can create issues of privacy. In addition to park amenities, such as garbage cans, waste bags, and signage reminding of and providing direction for waste disposal, the enhancement of the water system of a dog park is a key strategy. To minimize the transmission of zoonotic diseases, the location of dog parks should avoid proximity to natural water resources such as rivers and lakes [81], and Middle [26] suggested that seasonal drainage basin areas could be locations of choice for dog parks. Most importantly, management strategies addressing individuals and dogs can mitigate the spread of bacteria. The education of dog owners about environmental impacts to enhance self-enforcement among park users is the most effective strategy for decreasing the accumulation of dog waste and related hygienic issues in dog parks, and may be more critical than waste-management amenities and strategies. In extreme cases, penalties and the enforcement of a banned list of frequent offenders may also be necessary to mitigate these issues in dog parks.
Conflicts between park visitors with and without dogs are especially prevalent in dog-gathering areas. Dogs that suffer from behavioral issues may trigger dog fights and aggregation-based dog conflicts, and also aggravate the incompatibility between dog walkers and other park users. It is important for a well-designed and managed dog park to mitigate these issues. Among the strategies listed in Table 3, a logical park design with clear boundaries and proper fencing will separate dogs of different sizes and visitors with different intentions. Significant research, including the most recent studies, indicates that strengthening public engagement in the decision-making process is an effective solution to many controversies [29,32,43,57,63,67,76]. Most existing issues in dog parks ultimately result from conflicts between different dogs, between dog owners and non-dog owners, and from the impacts of ordinary dog park usage on environmental resources, such as the vegetation, soil, and other park uses. To relieve these conflicts, regular communication and cooperation between constituents, the local government, and stakeholders, including those who advocate for and those who oppose dog parks, are important in the public involvement process. The selection of dog park sites, the design process, and daily management can all be enhanced through the representation of different constituents. For example, the involvement of dog park activists and other residents in the process of determining a dog park's location can resolve issues by taking into consideration, from the beginning, concerns about hygienic problems, noise, and odors caused by the placement of a dog park. Researchers can also be a vital part of the decision-making process by providing professional alternatives for relieving conflicts [29,57]. Additionally, an efficient negotiation mechanism will allow various members to mitigate dog park issues during the decision-making process. We concur with Toohey and Rock [62] that many problems created by the existence of dog parks should not be concealed but openly discussed. Various approaches, such as public meetings, anonymous emails, and online polls, can work during the processes of the creation and use of a local dog park [23,82]. Routine evaluation of dog parks is also recommended for strengthening public engagement, enhancing their benefits, and alleviating their conflicts.
Although we provided design and management strategies and highlighted the vital role researchers play in response to the identified benefits and conflicts of dog parks, some dilemmas in the existing research still need to be addressed, such as the lack of experimental evidence supporting specific aspects of dog parks and the strategies applied.
Growing numbers of studies have quantitatively explored associations between features of the built environment and dog walking in Australia, Canada, and the USA. However, there is a dearth of experimental evidence about how the features/design of dog parks may influence park-based activities, such as dog walking. Arguments are passionate on both sides, and the debate has remained subjective and unresolved because experimental evidence of the ecological impacts of dog walking has been lacking. Holderness-Roddam [21] provided guidance for designing, planning, and managing dog parks primarily based on the literature. Some recent studies have begun to focus on the physical benefits and/or social components of dog parks. For example, Middle [26] proposed that increasing the accessibility of dog parks for neighborhood walking could bring a higher proportion of social interactions. Kresnye [69] designed a cooperative system addressing the physical and social experience of canines in dog parks. However, these strategies are primarily created through qualitative analysis, which not only weakens the reliability of the rationale but also limits the generalizability of the knowledge and precludes quantitative comparison, such as meta-analysis. Experimental evidence should be provided in future studies for the development of reliable design and management strategies for the progress of dog park development.
Physical health benefits among dogs and owners going to dog parks are the most reported benefits in the systematic study. However, a national survey disclosed that dog walking was not sufficiently intensive to count as moderate to vigorous physical activity (MVPA) [83]. Evenson et al. [60] discovered that people visiting a dog park tended to engage in sedentary activities, such as standing and watching dogs play, a finding supported by two other studies [22,84]. Recent studies have concluded that dog park users are less likely to engage in physical activities than other urban park users, which contradicts the previous self-reported results and would thus warrant further, more detailed investigation [26]. Given the differences between perceived physical benefits and the limitations on levels of physical activity, it is important for dog park design and management to support visitors who engage in various park-based physical activities. This suggests that an effective dog park should increase general walkability and be accessible for potential dog-walking residents. Consideration of the walkable surface, given the degradation of grass in larger dog parks, was proposed by Evenson et al. [60] to increase the levels of physical activity of dog park users. Park proximity and accessibility to a dog park are central to its health benefits, specifically through facilitating dog walking behaviors, which affirms previous findings by McCormack et al. [30] and Lee et al. [22]. Improving physical activity through dog walking is a promising public health strategy that could feasibly reach those who are sedentary [54]. Improving the routes to and from dog parks, such that owners can safely walk or jog with their dogs to and from the park, ultimately benefits human and canine physical health. Dog companionship provides social support for owners to join group activities, and dog parks offer a destination for owners to go and join in activities with their dogs. In addition to dog walking, which has been questioned as sufficient MVPA, we advocate for the placement of exercise facilities, including human-canine specific exercise equipment, to help dog owners engage in intense MVPA with their dogs beyond just walking. The organization of frequent activities and competitions can motivate MVPA between dog owners and their dogs, which also contributes to community engagement and social cohesion.
While some researchers suggested that the existence of dog parks in a community could improve the quality of life and the environment [21,22,60] and increase property values [22], the controversies brought by off-leash areas may detract from the benefits of the dog park for a community. For such reasons, it is often controversial for municipal governments to plan for dog parks. The literature suggests strengthening the connection to community-based dialogues for dog park planning and management. Graham and Glover [32] stated that the social structure of dog park committees should be governed and managed by disadvantaged groups to increase stewardship and communication with the community. By strengthening public engagement, especially researchers' involvement in municipal governments' decision-making processes, and by attaching importance to the endogenous conflicts and public controversies caused by canines, significant opportunities can be seized to bring about positive changes in the relationships between urban residents and their canines [57].
Conclusions
Although dog ownership and dog walking bring various physical and social benefits, especially since the pandemic, dog parks on the one hand strengthen the benefits for people and their dogs and, on the other hand, cause contentious community issues because they allow off-leash dogs to gather in a public space. Hygienic issues and conflicts among dogs and between park visitors with and without dogs are the most frequently identified issues occurring in dog parks. Many people value the physical and social benefits of dog parks, but the objectively measured intensity of physical activity among dog park users is often lower than that of other park users.
Recent studies have started to develop design and management strategies for dog parks that address the benefits and conflicts. Our study advances these findings to specifically maximize the benefits and minimize drawbacks of off-leash areas. A number of corresponding strategies for the benefits and/or issues of dog parks are formulated based on the experimental evidence for urban parks, rather than specifically for a dog park setting. As there is a lack of empirical research exploring the associations between the design/management strategies and the benefits and conflicts of dog parks, there are research opportunities for experimental studies and greater sample sizes to fill the research gaps. Well-designed strategies for both the planning and management processes of dog parks can enhance the experience of dogs and their owners, while avoiding some of the conflicts that arise during visits to dog parks. The inevitable issues should be confronted and discussed through the decision-making process, from the placement and planning of a dog park to the daily management of the off-leash areas. This study contributes to an integrated understanding and the sustainable coexistence of canines, dog owners, and those human park users who do not own dogs in built environments through appropriate design and management strategies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Optimization of the fermentation time and bacteria cell concentration in the starter culture for cyanide acid removal from wild cassava (Manihot glaziovii)
Cassava is one of the most widespread starchy tuberous roots in Indonesia, being one of the typical plants used in the starch market. However, due to the high cyanide content (338.41 ppm), these roots become a poison if they are unsuitably processed. Therefore, a detoxification process is needed to reduce the cyanide level to the level safe for human consumption (10 ppm). This study was focused on (i) the investigation of the detoxification potential of fermentation with Lactobacillus plantarum (L. plantarum) on the cyanide level of wild cassava tubers (Manihot glaziovii) and (ii) the optimization of the fermentation time and bacteria cell number in the starter culture. The fermentation was performed for different periods of time (12, 24 and 36 h) and various initial bacteria cell numbers (7×10^10, 7×10^11, 1.05×10^12, and 3.5×10^12 L. plantarum cells). The results showed a significant decrease of the cyanide level, 97% of cyanide degradation being noticed after 36 h of fermentation for an initial bacterial cell number of 3.5×10^12 cells. Hence, the strong point of the study consists of a noteworthy reduction of the cyanide content of wild cassava in short periods, whereas the protein content was increased (from 1.5% to 3.5%) in Modified Cassava Flour (MOCAF).
Introduction
Due to its high content of starch, one of the key food sources for humans, cassava (Manihot esculenta) is a raw material for preparing different food products, along with rice, sago, and corn. The cassava plant has several advantages, such as high drought tolerance and the ability to grow in poor soil, whereas other staple crops are unable to grow under critical conditions of climate and soil [1]. The estimated production of cassava in Indonesia is more than 23 million tons/year [2]. Cassava also has high productivity and is easy to cultivate; it can provide a good source of food for a vast majority of the Indonesian population and can increase the national food security of Indonesia. Although there are more than 100 cassava cultivars in Indonesia, especially on Java Island, only a few of them are commonly used. One cultivar that has not been widely used is wild cassava. This cultivar of cassava has a gnarled root system that develops well and has resistance to drought. The analysis of 45 Indonesian cultivars of cassava roots for cyanogenic potential (CP) showed that the CP content usually ranges from 17.8 to 233.2 ppm [3]. The wild cassava roots contain a high content of hydrocyanic acid (HCN), ranging from 183.2 to 357.2 mg/kg on a dry matter basis (DM) [4]. Their flavor is influenced by the amount of cyanogenic glucosides present in the roots, which make them bitter. However, cassava root has a high starch content, which makes it a good source for starch processing with applications in the food industry [5]. Generally, starch contains 15-30% amylose and 70-85% amylopectin [6]. Amylose has a strong effect on the properties of starch, while amylopectin provides crisp effects in food [7]. The utilization of wild cassava root as such for food purposes has a major drawback, because it contains highly toxic cyanogenic glucoside compounds (93% linamarin and 7% lotaustralin) [8]. Cassava roots also contain the enzyme linamarase, which can hydrolyze linamarin to produce glucose and acetone cyanohydrin. The latter can also decompose to acetone and HCN spontaneously at pH higher than 4 and temperatures greater than 30 ºC [9], as shown in Figure 1.
Fig. 1. Enzymatic breakdown of the major cyanogenic glucoside (linamarin) in cassava roots [9].

Hydrocyanic acid is a highly toxic compound for humans. Therefore, the toxic potential of cassava roots should be addressed during cassava root processing before human consumption. The roots have to be detoxified to less than 10 ppm, which is the safe limit proposed by the World Health Organization [10]. Lactic acid fermentation is one of the most used processes for cassava detoxification: during the fermentation process, the formation of lactic acid decreases the pH below 6, which inactivates the linamarase enzyme so that linamarin is not converted into HCN. L. plantarum is a lactic acid bacterium commonly used in the fermentation process. These bacteria have antagonistic properties against food-damaging microorganisms, such as Staphylococcus aureus and Salmonella. L. plantarum is tolerant to salt, produces acid quickly, and has an optimum pH of 5.3-5.6 [11]. Nevertheless, although the fermentation process is widely used, there is a lack of information about the optimal parameters for carrying out the process, in particular the fermentation time and bacteria concentration.
Therefore, the objective of this study was to evaluate the effect of the bacteria concentration in the starter culture on the cyanide removal in wild cassava for the production of cassava flour. The fermentation time was also investigated in order to optimize the detoxification process of the roots. The fermentation was performed by a submerged process with the addition of L. plantarum bacteria.
Materials
Wild cassava roots were obtained from a cassava plantation in Blitar, which is located in East Java, Indonesia. The L. plantarum bacteria was obtained from the "Chemical Laboratory of Microbiology" of Chemical Engineering Department, Institut Teknologi Sepuluh Nopember (ITS) Surabaya.
Sample and preparation
Freshly harvested cassava roots were chosen so as to preserve the normal level of proximate content. The cassava roots were peeled and washed with running water at 28 ºC. After that, the proximate composition of the wild cassava was determined. Furthermore, the wild cassava was cut into chips with a thickness of 0.5-1 cm to increase the surface area.
Preparation of starter bacteria (L. plantarum)
The L. plantarum inoculum was prepared in an Erlenmeyer flask containing 15 mL of nutrient broth (NB) and 135 mL of distilled water. The mixture was incubated for 16 h to achieve bacterial growth in the log phase. The starter volume used for fermentation was adjusted to provide the target number of L. plantarum cells for each variable. All the tools used for the experiments were sterilized beforehand in an autoclave at 121 ºC for 15 min.
Fermentation process
A 150 g portion of cassava roots was used for each fermentation run. Starter amounts of 7×10^10, 7×10^11, 1.05×10^12 and 3.5×10^12 L. plantarum cells were added. The fermentation was performed for 12, 24 and 36 h, and the temperature was kept constant at 32 ºC.
Milling process
After fermentation, the cassava roots were dried in an oven at 45 ºC for 4 h and then milled in a crusher. After milling, the cassava flour was sieved to obtain a granulometric fraction finer than 100 mesh. The resulting flour was denoted as Modified Cassava Flour (MOCAF).
Determination of proximate composition
The proximate composition of wild cassava variety was determined for moisture, ash, crude fiber, fat and protein content as described by Association of Official Analytical Chemist (AOAC) [12].
Determination of cyanide content
The cyanide acid content of fresh and fermented cassava roots was determined as described by [13]. 20 g of sample was transferred into a Kjeldahl digestion flask, and 200 mL of distilled water was added. The flask was stirred for 2 h to mix the content thoroughly and then placed on the heater to recover cyanide acid as distillate. The distillate was collected in a conical flask containing 20 mL of 2.5% NaOH solution. Next, 8 mL of NH4OH solution and 5 mL of 5% KI solution were added as an indicator. The resulting mixture was titrated against 0.02 N AgNO3 until turbidity appeared. A blank was run through all the steps above. The cyanide acid content was calculated from the amount of AgNO3 used for the titration.
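As an illustration of how the cyanide content follows from the titration, the sketch below assumes the classical argentometric stoichiometry Ag+ + 2 CN- -> [Ag(CN)2]-, under which 1 mL of 0.02 N AgNO3 corresponds to 1.08 mg of HCN; the paper does not state its conversion factor, and the titration volumes in the example are invented.

def hcn_ppm(v_sample_ml: float, v_blank_ml: float, sample_g: float = 20.0,
            agno3_normality: float = 0.02) -> float:
    """Cyanide content in ppm (mg HCN per kg of sample) from an AgNO3 titration."""
    mg_hcn_per_ml = 2 * agno3_normality * 27.0      # 2 CN- per Ag+; HCN = 27 g/mol
    mg_hcn = (v_sample_ml - v_blank_ml) * mg_hcn_per_ml
    return mg_hcn / sample_g * 1000.0               # mg per kg of sample = ppm

print(hcn_ppm(v_sample_ml=6.4, v_blank_ml=0.1))     # ~340 ppm, the fresh-root range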
Determination of Starch Content
The starch content of the cassava samples was determined as described in [12]. A 2 g sample was immersed in 50 mL of distilled water and stirred for 1 h on a magnetic stirrer. The solids were then separated by filtration and washed with distilled water to produce 250 mL of filtrate, while the residue left on the filter paper was washed with 10 mL of diethyl ether and then with 150 mL of 10% ethanol. The solid was transferred to an Erlenmeyer flask, rinsed with 200 mL of distilled water, and 20 mL of HCl was added; the mixture was heated at the boil for 2.5 h. After cooling, the sample was neutralized with 500 mL of 45% NaOH solution and filtered through Whatman paper. The sugar content of the filtrate was then analyzed using the Nelson-Somogyi method, which yields the glucose content; the percentage of starch was obtained by multiplying the glucose content by the factor 0.9.
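The final conversion step is simple enough to script. The 0.9 factor is the anhydroglucose correction (162/180), reflecting the water taken up by each glucose unit on hydrolysis; the worked check below is ours, derived from the proximate result reported in the next section:

```python
# Starch from reducing-sugar data: the glucose measured after acid
# hydrolysis is multiplied by 0.9, the anhydroglucose factor
# (162/180, since starch gains one water per glucose unit on hydrolysis).
def starch_percent(glucose_percent: float) -> float:
    return glucose_percent * 0.9

# Consistency check against the proximate analysis below: 81.57 % starch
# corresponds to a glucose equivalent of about 90.6 %.
print(round(starch_percent(90.63), 2))  # -> 81.57
```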
Chemical composition of fresh wild cassava root
According to the proximate analysis, the wild cassava used in this study contained 81.57% starch, 1.25% protein, 0.39% lipids, 1.28% fiber, 0.26% ash and 13.74% moisture. The cyanide content of the fresh cassava was 338.41 ppm. These results clearly show that the cassava roots used in this investigation are a promising raw material for starch production, but the high cyanide content limits their exploitation. Detoxification of these roots is therefore a mandatory step before use in the food industry.
Microbial growth
The fermentation was performed with L. plantarum bacteria. To determine the time needed for L. plantarum to reach the log phase, a growth curve was obtained: cell counts, made with the counting-chamber method, were taken every 2 h for 24 h and used to determine the number of microorganisms added to each fermentation. The logarithmic phase of the L. plantarum growth curve extended from 6 to 16 h (Figure 2). Cultures in this log phase were used, with different starter volumes, to deliver the cell numbers required by each experimental variable.
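For illustration, the specific growth rate over the log phase can be estimated by a linear fit of ln(N) versus time. The counts in the sketch below are hypothetical; only the 6-16 h window is taken from the study:

```python
import numpy as np

# In exponential growth ln(N) is linear in time, so the specific growth
# rate mu is the slope of ln(N) vs t over the linear stretch of the curve.
# The counts below are made up for illustration; the study's curve placed
# the log phase at 6-16 h.
t = np.array([6, 8, 10, 12, 14, 16])                      # h
N = np.array([2e8, 6e8, 1.8e9, 5.4e9, 1.6e10, 4.8e10])    # cells/mL (hypothetical)

mu, intercept = np.polyfit(t, np.log(N), 1)
print(f"specific growth rate mu ~ {mu:.2f} 1/h, "
      f"doubling time ~ {np.log(2) / mu:.2f} h")
```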
Effect of the starter culture concentration on wild cassava fermentation
Two types of fermentation, spontaneous and starter-culture, are commonly used for processing cassava roots in Central Africa and Asia [9]. In this study, starter-culture fermentation of wild cassava roots was used, with the number of microorganisms increased from 7×10^10 through 7×10^11 and 1.05×10^12 to 3.5×10^12 L. plantarum cells and fermentation times of 12, 24 and 36 h. The cyanide content fell sharply under all conditions. This remarkable fall can be explained by the enzymatic activity during fermentation: once the pH is lowered to 4, the activity of linamarase, whose optimum pH is 6, is drastically reduced [14]. Because linamarin and HCN are very soluble in water, the submerged fermentation process provides the best conditions for the enzymatic activity that reduces the HCN content of the wild cassava tuber. It is important to mention that a starter culture containing 3.5×10^12 cells decreased the cyanide content by up to 94% within the first 24 h, exceeding the 91% reported for a mixed-starter culture [15]. In light of this result, the advantage of a pure culture such as L. plantarum is the reproducibility and control it offers for producing MOCAF with a low cyanide content.
Effect of the starter culture concentration on the starch content
Starch is composed of two glucose polymers, amylose and amylopectin. Amylose is a linear α-(1→4) glucose polymer, while amylopectin consists of short linear α-(1→4)-linked chains with α-(1→6)-linked side chains [16]; the mass ratio between the two components is around 1:3. From the medical point of view, however, a starch with a higher amylose content (the so-called "resistant starch") is preferred because of its beneficial physiological effects: a hypoglycemic effect (lower blood sugar after meals), a prebiotic effect, cholesterol lowering and a reduced risk of colon cancer [6]. There is therefore considerable interest in preparing such a starch. Our study shows that it is feasible to regulate the ratio between amylose and amylopectin in the starch by controlling the concentration of L. plantarum cells in the starter culture. Figure 4 shows the evolution of the starch content over time as a function of the microorganism concentration during cassava fermentation; the starch content decreased with increasing fermentation time and cell concentration.
Fig. 4. Effect of the initial bacterial cell number on the starch content.
A dramatic decrease was observed at a concentration of 3.5×10^12 cells. The decrease in starch content is caused by the consumption of organic matter to meet the energy needs of the growing microorganisms [6]. It is well known that during fermentation starch is hydrolyzed into simpler sugars, from oligosaccharides and maltose down to glucose [11].
The effect of the cell concentration on the evolution of the amylose and amylopectin content during fermentation is depicted in Figures 5 and 6, respectively. A concentration of 3.5×10^12 cells was the optimum at which the balance of the starch components shifted over time in favour of amylose, in accordance with previous studies [17]. The cassava roots were allowed to ferment long enough for bacterial activity to bring the medium to an acidic pH, at which hydrolysis of the starch at the α-(1→4) bonds begins, thus increasing the amylose fraction [6]. Syahputri et al. observed a similar behavior for the amylose content of fermented jali flour, which increased from 26.33% to 29.06% [18]. The increase in amylose content is explained by the breaking of the amylopectin branch chains at the α-(1→6) glycosidic bonds by enzyme activity during fermentation. The highest amylose content (26%) was obtained at the longest fermentation time (36 h) with the highest bacteria concentration (3.5×10^12 cells). As already mentioned above, Aliawati indicated that a high amylose content correlates with a high resistant-starch content [19]. The product generated in this work therefore has the potential to be used as a raw material for resistant starch.
Effect of the starter culture concentration on the protein content
Although the current study focused mainly on the detoxification of the cassava roots, the protein content was also evaluated, since a high protein content in MOCAF is desirable for nutritional reasons. Interestingly, an increase in protein concentration was observed at the highest cell concentration in the starter culture. This feature is of particular interest given that the protein content of MOCAF is usually improved by adding powdered dried fish, oil cake or soybean flour to the cassava flour [20]. The results are illustrated in Figure 7.
Converging focal radiation and immunotherapy in a preclinical model of triple negative breast cancer: contribution of VISTA blockade
ABSTRACT Antibodies targeting the co-inhibitory receptor programmed cell death 1 (PDCD1, best known as PD-1) or its main ligand CD274 (best known as PD-L1) have shown some activity in patients with metastatic triple-negative breast cancer (TNBC), especially in a recent Phase III clinical trial combining PD-L1 blockade with taxane-based chemotherapy. Despite these encouraging findings, however, most patients with TNBC fail to derive significant benefits from PD-L1 blockade, calling for the identification of novel therapeutic approaches. Here, we used the 4T1 murine mammary cancer model of metastatic and immune-resistant TNBC to test whether focal radiation therapy (RT), a powerful inducer of immunogenic cell death, in combination with various immunotherapeutic strategies can overcome resistance to immune checkpoint blockade. Our results suggest that focal RT enhances the therapeutic effects of PD-1 blockade against primary 4T1 tumors and their metastases. Similarly, the efficacy of an antibody specific for V-set immunoregulatory receptor (VSIR, another co-inhibitory receptor best known as VISTA) was enhanced by focal RT. Administration of cyclophosphamide plus RT and dual PD-1/VISTA blockade had superior therapeutic effects, which were associated with activation of tumor-infiltrating CD8+ T cells and depletion of intratumoral granulocytic myeloid-derived suppressor cells (MDSCs). Overall, these results demonstrate that RT can sensitize immunorefractory tumors to VISTA or PD-1 blockade, that this effect is enhanced by the addition of cyclophosphamide and suggest that a multipronged immunotherapeutic approach may also be required to increase the incidence of durable responses in patients with TNBC.
Introduction
Successful tumor rejection requires the induction of robust anticancer T-cell responses. 1 Therapeutic targeting of the immunosuppressive pathways regulated by cytotoxic T lymphocyte-associated protein 4 (CTLA4) and programmed cell death 1 (PDCD1, best known as PD-1) has been successfully implemented in the clinical management of several malignancies. However, primary and acquired resistance to immune checkpoint inhibitors (ICIs) remains an obstacle in the majority of patients. 2 In breast cancer, early studies testing monoclonal antibodies directed against PD-1 or its main ligand CD274 (best known as PD-L1) showed variable but generally modest activity, which was relatively more pronounced in patients with triple-negative breast cancer (TNBC). 3 Recent results from the Phase III IMpassion130 clinical trial demonstrate that the addition of the PD-L1-targeting ICI atezolizumab to nab-paclitaxel increases the progression-free survival (PFS) of metastatic TNBC patients. 4 An interim analysis also showed improved overall survival (OS) (25 mo vs. 15.5 mo) among women with PD-L1+ tumors (369/902, i.e., 41%). These results raise the question as to whether other cytotoxic inducers of immunogenic cell death (ICD) 5-7 could enhance the proportion of patients with breast cancer who respond to anti-PD-1/PD-L1 therapies. Both radiation therapy (RT) and cyclophosphamide have multiple immunomodulatory effects, 8,9 encompassing the ability to induce ICD 6,10,11 and to boost responses to ICIs. 12-15 In support of this notion, various clinical studies have shown a positive interaction between RT and antibodies targeting PD-1, PD-L1, or CTLA4 in patients with lung cancer. 16-18 However, multiple co-inhibitory receptors other than PD-1 or CTLA4 have been described, potentially explaining why most patients fail to respond to the combination of RT and ICIs. 19 While such receptors may offer alternative pathways of immunoevasion to developing tumors, they may also constitute potential targets for therapeutic intervention. 20,21 In line with this possibility, multiple studies have demonstrated the advantage of simultaneously targeting distinct immunological checkpoints in preclinical tumor models. 22-24 V-set immunoregulatory receptor (VSIR, best known as VISTA) is a co-inhibitory receptor that shares structural resemblance with other members of the Ig domain-containing B7 family. 25 VISTA is constitutively expressed in the hematopoietic compartment, with the highest expression levels found on myeloid cells. Specifically, VISTA suppresses cytokine production by antigen-presenting cells and hence their ability to drive proliferative T cell responses. 26 VISTA is also expressed on (and inhibits the activity of) CD4+ T cells, 27 where its expression overlaps with that of PD-1 and other co-inhibitory receptors. 26 Studies in knock-out mice indicate that VISTA and PD-1 have distinct and non-overlapping roles in the regulation of T-cell activation, which can be therapeutically targeted to achieve synergistic anti-tumor activity. 28 Here, we used mouse 4T1 mammary cancer cells as a model of rapidly metastatic and poorly immunogenic TNBC 29 to test the hypothesis that a multipronged therapeutic strategy, including ICD inducers such as RT and cyclophosphamide as well as ICIs, is required for the activation of robust antitumor immune responses capable of limiting metastatic dissemination and increasing survival.
VISTA emerged as a promising candidate, largely because immunosuppressive myeloid cells constitute a large fraction of the immunological infiltrate of human breast cancer, 30 are induced by focal radiotherapy, and express high levels of VISTA.
Our data suggest that optimal therapeutic responses to immunotherapy against TNBC require a multipronged approach that leverages the direct immunostimulation of focal radiotherapy while limiting lymphoid (anti-PD-1, anti-VISTA) and myeloid (cyclophosphamide) immunosuppression. These results provide the rationale for testing VISTA blockade as a component of a multipronged immunotherapeutic approach for tumors that are insensitive to radiation and PD-1 blockade.
Cells and reagents
Mouse 4T1 mammary cells were grown in DMEM supplemented with 2 mol/L L-glutamine, 100 U/mL penicillin, 100 μg/mL streptomycin, 25 µmol/L 2-mercaptoethanol and 10% fetal bovine serum (Invitrogen). Cells were authenticated by morphology, growth, and pattern of metastasis in vivo and routinely screened for Mycoplasma spp. contamination with the LookOut® Mycoplasma PCR Detection Kit (Sigma-Aldrich). The InVivoMAB mouse anti-PD-1 antibody (Clone RMP1-14) was purchased from BioXCell. The anti-VISTA antibody (Clone 13F3) was generously provided by Janssen Pharmaceuticals.
Animal experiments
Six- to eight-week-old wild-type female BALB/c mice were obtained from Taconic. All in vivo experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of Weill Cornell Medicine. Mice were subcutaneously (s.c.) inoculated with 0.5 × 10^5 4T1 cells and randomly assigned to treatment groups thirteen days later, when tumors typically reached an average diameter of 5 mm. Focal RT was given with the Small Animal Radiation Research Platform (SARRP, from Xstrahl Ltd) in two doses of 12 Gy each on days 13 and 14 post tumor implantation. To this aim, mice were anesthetized with isoflurane, and animals assigned to radiation were placed on a dedicated tray and positioned so that only the tumor was targeted by the radiation beam, by means of a 10 × 10 mm collimator. Tumors were measured every 2-3 days until euthanasia (at experimental endpoints, when the tumor exceeded 5% of body weight, or if mice showed signs of pain or distress). Perpendicular tumor diameters were obtained with a Vernier caliper and total tumor volume was calculated following the common ellipsoid approach 12,31,32 as (longer diameter) × (shorter diameter)² × π/6. Cyclophosphamide (100 mg/kg body weight) was given i.p. on day 9 post tumor implantation. Systemic (i.p.) checkpoint blockade using monoclonal antibodies targeting PD-1 (Clone RMP1-14, 200 μg/mouse) and/or VISTA (Clone 13F3, 10 mg/kg) was initiated the day after the last RT dose. In experiments evaluating the efficacy of treatment on metastatic dissemination, mice were euthanized on day 32 and excised lungs were fixed in 4% paraformaldehyde. Gross lung metastases were enumerated under a dissecting microscope by at least 3 observers blinded to the treatment received by each specimen.
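The ellipsoid volume formula is easy to mis-transcribe, so a small helper makes the convention explicit; this is just the formula above expressed as code, not code from the authors:

```python
import math

def tumor_volume_mm3(longer_d: float, shorter_d: float) -> float:
    """Ellipsoid approximation used above: V = a * b^2 * pi / 6 (mm^3),
    with a the longer and b the shorter perpendicular diameter in mm."""
    return longer_d * shorter_d**2 * math.pi / 6

# e.g. a 7 mm x 6 mm tumor:
print(round(tumor_volume_mm3(7.0, 6.0), 1))  # ~131.9 mm^3
```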
Flow cytometry
4T1 tumors were excised and digested with the Mouse Tumor Dissociation Kit (Miltenyi Biotec) as per the manufacturer's instructions, and run on a Miltenyi gentleMACS Octo Dissociator with Heaters using the pre-set program (37C_m_TDK2). The resulting cell suspensions were filtered through a 40 µm cell strainer and subjected to RBC lysis. Samples were counted and stained with the Zombie Aqua Fixable Viability Dye (BioLegend) to distinguish live cells. All samples were then incubated with purified anti-mouse CD16/32 (Fc block) prior to staining. The following anti-mouse antibodies, all purchased from BioLegend, were used for immunostaining at the indicated dilutions: CD69 APC (Clone H1.2F3) 1:100, CD4 PE/Cy5 (Clone GK1.5) 1:100,
Statistical analysis
Statistical analyses were done using GraphPad Prism v. 8. To determine significant differences in tumor volumes among treatment groups, two-way ANOVA with repeated measures and Tukey correction for multiple comparisons was utilized. For in vitro experiments, ordinary one-way ANOVA with Holm-Sidak's posttest correction for samples with single pooled variance was employed to identify significant changes. Kruskal-Wallis test with Dunn's correction for multiple comparisons was used to detect significant differences in lung metastases among treatment groups. The Kaplan-Meier method was used to estimate median OS and the log cumulative hazard transformation was used to derive 95% confidence limits for median OS in each arm. Differences in OS curves were compared using log-rank (Mantel-Cox) test with correction for multiple pairwise comparisons. All reported p values are two-sided and statistical significance is defined as p< .05.
TCGA analysis
Patients with TNBC (n = 116) were identified in The Cancer Genome Atlas (TCGA) public database (https://cancergenome.nih.gov/). Differentially expressed genes (DEGs) between the VSIR-high and VSIR-low groups were determined using the LIMMA R package. 33 Hierarchical clustering analysis was conducted using the ComplexHeatmap package, based on the Pearson distance and the complete clustering method. 34 The MCPcounter R package and "metagene" markers were used to estimate the relative abundance of tissue-infiltrating immune cell populations. 35,36 Functional and enrichment analysis of DEGs was performed using ClusterProfiler. 37 Survival analysis was performed using the Survival and Survminer R packages, based on log-rank tests. The prognostic value of continuous variables was assessed using median cutoffs. Correlation was analyzed by the Spearman approach and visualized using the corrplot package in R. GSEA analyses were performed using the fgsea package in R, and gene set analysis was conducted using the MSigDB hallmark (H) gene sets from the msigdbr R package. 38 R version 3.6.0 was used for all in silico studies.
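For readers working in Python rather than R, the median-cutoff survival comparison described above can be sketched with the lifelines package; the dataframe and column names below are placeholders, not part of the original analysis:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Minimal Python equivalent of the median-cutoff log-rank comparison (the
# authors used R's survival/survminer). `df` is assumed to hold one row per
# patient with columns os_months, os_event (1 = death) and vsir (normalized
# VSIR expression); all names here are illustrative assumptions.
def compare_by_median(df: pd.DataFrame) -> None:
    cutoff = df["vsir"].median()
    hi, lo = df[df["vsir"] >= cutoff], df[df["vsir"] < cutoff]
    res = logrank_test(hi["os_months"], lo["os_months"],
                       event_observed_A=hi["os_event"],
                       event_observed_B=lo["os_event"])
    for label, grp in (("VSIR-high", hi), ("VSIR-low", lo)):
        km = KaplanMeierFitter().fit(grp["os_months"], grp["os_event"],
                                     label=label)
        print(label, "median OS:", km.median_survival_time_)
    print("log-rank p =", res.p_value)
```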
Focal RT elicits local and systemic anticancer effects in the context of multiple immune checkpoint blockade
The mouse mammary carcinoma 4T1 model is a well-characterized model of cold, highly metastatic, and immunotherapy-resistant mammary tumor, mimicking the behavior of aggressive TNBC in humans. 29,39-41 Treatment of 4T1 tumors established in syngeneic BALB/c mice with ICIs targeting CTLA4 and/or PD-1 is ineffective. 29 We have previously shown that RT directed to primary 4T1 tumors enables responsiveness to CTLA4- or PD-1-targeting ICIs by inducing T cells that are able to reject the irradiated tumor, and reduces metastatic dissemination to the lungs. 12,42 Although mice treated with RT plus ICIs experience increased OS as compared to mice receiving ICIs alone, they ultimately succumb to disease progression, suggesting the presence of additional barriers limiting tumor rejection. One such barrier may be represented by MDSCs, which are abundant in the 4T1 microenvironment, 29,43 are known to mediate robust immunosuppressive effects in both mice and humans, 44 and have prompted interest in developing therapeutic strategies to target them. 45 Since myeloid cells have been shown to express high levels of VISTA, 25 we asked whether targeting VISTA could improve responses to RT and PD-1 blockade in the 4T1 model. As monotherapy, neither VISTA nor PD-1 blockade limited the progression of 4T1 tumors established in BALB/c mice (Suppl. Fig. 1A,B). Conversely, both the VISTA-targeting and the PD-1-targeting ICI significantly improved the local control of 4T1 tumors receiving two focal RT doses of 12 Gy each on two consecutive days (Figure 1(a)) and reduced the number of lung metastases (Suppl. Fig. 1C). Local tumor control rates achieved with RT plus VISTA blockade were comparable to those observed with RT plus PD-1 blockade. However, dual VISTA/PD-1 blockade failed to further improve local or systemic tumor control rates achieved with RT plus PD-1 or VISTA blockade (Figure 1(a) and Suppl. Fig. 1C). These results lend further support to the notion that RT can be used to sensitize immunoresistant tumors to ICIs but do not suggest a benefit for dual VISTA and PD-1 blockade.
[Figure 1 legend: 4T1 cells were injected s.c. at day 0 into syngeneic BALB/c mice, and treatment started when tumors reached an average volume of 100 mm³ (day 13). Anti-VISTA mAb 13F3 (300 μg/mouse) or PBS was given i.p. starting on day 13 thrice weekly for a total of 6 doses. Anti-PD-1 mAb RMP1-14 (200 μg/mouse) or PBS was given i.p. starting on day 13 every 3 days for a total of three doses. 4T1 tumor-bearing mice (n = 6-8 per group) were randomly assigned to six treatment groups, as indicated. Tumor growth over time (*p < .05, **p < .005, two-way ANOVA). (b,c) CYP (100 mg/kg i.p.) was given on day 9; RT and antibodies were administered as in Figure 1.]
Next, we asked whether the responses obtained with RT plus VISTA blockade could be further improved by cyclophosphamide, a chemotherapeutic agent with broad immunomodulatory properties 6 that has been successfully exploited in preclinical studies as a therapeutic partner for vaccine-based and other immunotherapeutic approaches. 46-48 We thus tested the effect of a single low dose of cyclophosphamide (100 mg/kg) given a few days prior to RT, based on a treatment schedule previously shown to induce durable anti-tumor immunity along with a temporary decrease in regulatory T (Treg) cells in 4T1-bearing mice treated with RT and a Toll-like receptor 7 (TLR7) agonist. 47 In our model, treatment with cyclophosphamide alone neither delayed tumor growth nor extended OS, nor did it improve therapeutic responses to RT (Suppl. Fig. 1D,E). However, when combined with RT plus PD-1 (p = .0014) or VISTA (p = .0003) blockade, cyclophosphamide significantly improved tumor control and OS (Figure 1(b,c)).
Next, we tested a multipronged immunotherapeutic strategy involving cyclophosphamide and RT as well as PD-1- and VISTA-targeting ICIs. The effect of cyclophosphamide pre-administration on tumor control in mice treated with radiation plus dual PD-1/VISTA blockade was comparable to that achieved by radiation plus either checkpoint blockade (mean tumor volume at day 31: 103.53 ± 12.79 mm³ for cyclophosphamide plus RT plus PD-1 blockade and 82.1 ± 12.6 mm³ for cyclophosphamide plus RT plus VISTA blockade vs. 81.3 ± 17.7 mm³ for cyclophosphamide plus RT plus dual PD-1/VISTA blockade, p = .9550) (Figure 1(d)). However, mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade experienced a significantly longer median OS than all other mice (median survival: 48 days for cyclophosphamide plus RT plus dual VISTA/PD-1 blockade vs. 42 days for cyclophosphamide plus RT plus PD-1 blockade, p = .048, and 41 days for cyclophosphamide plus RT plus VISTA blockade, p = .0495) (Figure 1(e)). Importantly, cyclophosphamide was required for this survival extension, as median OS in mice treated with RT plus dual PD-1/VISTA blockade was significantly shorter (42 days, p = .0351).
As the survival of 4T1 tumor-bearing mice is mainly dictated by metastatic spread to the lungs, 39 we set out to evaluate metastatic lung burden prior to overt symptoms of respiratory distress. Mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade had significantly fewer lung metastases than all other groups, with one-third of these animals free of metastases at 32 days after tumor inoculation (mean number of metastases: 13.1 ± 1.2 for cyclophosphamide plus RT plus PD-1 blockade and 10.11 ± 1.4 for cyclophosphamide plus RT plus VISTA blockade vs. 1.7 ± 0.47 for cyclophosphamide plus RT and dual PD-1/VISTA blockade, p = .0002) (Figure 1(f)).
We further tested whether the timing of cyclophosphamide administration could affect its beneficial effects on systemic tumor control. Machiels and coworkers had previously demonstrated that optimal antitumor immune responses were achieved when cyclophosphamide was given a few days before a GM-CSF-secreting whole-cell vaccine. 48 On the other hand, improved tumor responses to RT have been demonstrated when cyclophosphamide was given concurrently with irradiation. 49 Thus, we compared administration of cyclophosphamide 4 days before RT (day 9) vs. concurrent with the first RT dose (day 13) (Suppl. Fig. 2A). No difference in efficacy (neither on tumor growth nor on metastatic dissemination) was observed between the two schedules in the context of RT plus dual PD-1/VISTA blockade (Suppl. Fig. 2B,C). Overall, these findings support an essential role for low-dose cyclophosphamide in maximizing the ability of RT plus dual PD-1/VISTA blockade to control the progression and metastatic dissemination of 4T1 tumors.
Cyclophosphamide in combination with RT and dual PD-1/VISTA blockade enables the priming of tumor-specific CD8+ T cells coupled with MDSC depletion
To understand the mechanisms underlying the improved control of lung metastases in 4T1 tumor-bearing mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade, we analyzed the tumor immune infiltrate at day 18, 3 days after administration of the first ICI dose (Figure 2(a)). Flow cytometry-assisted analysis of immune cells isolated from 4T1 tumors demonstrated that RT was required, but not sufficient, to drive robust tumor infiltration by CD8+ T cells. Indeed, a significant increase in intratumoral CD8+ T cells was observed in animals treated with RT plus PD-1 blockade (p = .022) or RT plus VISTA blockade (p = .024), but not RT alone. The combination of cyclophosphamide plus RT and dual PD-1/VISTA blockade induced the largest increase in tumor-infiltrating CD8+ T cells, which expressed increased levels of the activation marker CD69 (Figure 2(b,c)). The fraction of tumor-infiltrating CD8+ T cells expressing high levels of PD-1, a marker of terminally activated/exhausted T cells, 50 was reduced in mice treated with cyclophosphamide plus RT and single or dual VISTA/PD-1 blockade, as well as in mice treated with RT plus dual VISTA/PD-1 blockade in the absence of cyclophosphamide (Figure 2(d)). We next investigated tumor-specific CD8+ T cell responses in tumor-draining lymph nodes. Notably, interferon gamma (IFNG, best known as IFN-γ) secretion by tumor-infiltrating CD8+ T cells exposed to the CD8 epitope AH-1-A5, which is derived from the envelope of an endogenous retrovirus expressed by 4T1 cells, 51,52 was markedly increased only in mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade (p < .005) (Figure 2(e)). Thus, in the 4T1 model of TNBC, only a multipronged immunotherapeutic strategy comprising cyclophosphamide, RT and two ICIs elicits abundant tumor infiltration by activated CD8+ T cells plus robust priming of tumor-specific immunity.
Analysis of the CD4 compartment revealed no significant changes in total CD4+ T cells in any of the treatment groups (Figure 2(f)). Similarly, the proportion of Treg cells, which constituted ~70% of all CD4+ T cells in untreated 4T1 tumors, was not significantly altered by treatment (Figure 2(g)). However, activated CD4+ effectors, identified by expression of interleukin 2 receptor subunit alpha (IL2RA, best known as CD25) and lack of forkhead box P3 (FOXP3) expression, were increased in the tumors of mice treated with cyclophosphamide plus RT and PD-1, VISTA or dual PD-1/VISTA blockade (Figure 2(h)). Cyclophosphamide has previously been shown to temporarily decrease Treg cells. 47,48 As we failed to observe such a decrease in intratumoral Treg cells 9 days after cyclophosphamide administration (data not shown), we asked whether Treg cells could have been depleted earlier, shortly after cyclophosphamide administration. To address this question, Treg cells were analyzed in the spleen and tumor of mice treated with cyclophosphamide and/or RT at different time points: (1) 3 days after cyclophosphamide administration (day 12), (2) at completion of RT (day 15), and (3) at day 20 (Suppl. Fig. 2D). This analysis revealed a mild but significant decrease in Treg cells in both the spleen and tumor of mice treated with RT plus cyclophosphamide at day 15, but Treg cells quickly rebounded to baseline levels by day 20 (Suppl. Fig. 2E,F).
To gain more insight into the mechanisms underlying the development of antitumor immunity in mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade, we next analyzed MDSCs. Differential expression of lymphocyte antigen 6 complex, locus G (Ly6G) and lymphocyte antigen 6 complex, locus C (Ly6C) on intratumoral CD11b+ cells defines the two major MDSC subsets: monocytic MDSCs (mMDSCs, Ly6G− Ly6C-high) and granulocytic MDSCs (gMDSCs, Ly6G+ Ly6C-low). 53,54 Both MDSC subsets infiltrating 4T1 tumors expressed comparable levels of VISTA (Figure 3(a)). In untreated mice, CD11b+ myeloid cells comprised approximately 55% of all tumor-infiltrating CD45+ cells (Figure 3(b)), ~40% of which were granulocytic MDSCs (Figure 3(c,d)). In the absence of RT, and regardless of cyclophosphamide treatment, dual PD-1/VISTA blockade did not alter the abundance of tumor-infiltrating CD11b+ cells. Similarly, RT employed as a standalone treatment did not significantly impact tumor infiltration by CD11b+ myeloid cells (Figure 3(b)). Conversely, RT combined with VISTA (but not PD-1) blockade led to a significant decrease in tumor-infiltrating CD11b+ cells (Figure 3(b)), particularly in the granulocytic MDSC compartment (Figure 3(c)). Addition of cyclophosphamide and a PD-1-targeting ICI to RT plus VISTA blockade did not further decrease the proportion of tumor-infiltrating gMDSCs. Finally, even in the absence of VISTA blockade, the combination of cyclophosphamide with RT and PD-1 blockade significantly reduced CD11b+ myeloid cells as compared to control conditions (Figure 3(b)). Of note, RT was critical for achieving gMDSC depletion even in the context of cyclophosphamide plus dual PD-1/VISTA blockade (mean %: 53.55 ± 2.44 for cyclophosphamide plus dual PD-1/VISTA blockade vs. 19.9 ± 3.18 for cyclophosphamide plus RT and dual PD-1/VISTA blockade, p = .0001), suggesting that RT induces key changes in the tumor microenvironment that are required for VISTA blockade to deplete gMDSCs.
Impact of VISTA on the immune infiltrate of breast cancer patients
To investigate the translational value of our findings, we took advantage of The Cancer Genome Atlas (TCGA) public patient dataset, which contains annotated bulk transcriptomic data for 116 patients with immunohistochemistry-confirmed TNBC. First, we interrogated whether VISTA expression levels are indicative of increased immune infiltration by T cells, based on Spearman correlations with genes that encode phenotypic markers preferentially (although not exclusively) expressed by these immune effector cells. We found that VSIR levels positively correlate with general markers of the T cell compartment (e.g., CD3E), with markers of specific T cell populations (e.g., CD4, CD8A, FOXP3), and with co-inhibitory T cell receptors (e.g., CTLA4, HAVCR2, LAG3, PDCD1) (Figure 4(a)). Based on our previous observations in the ovarian setting, 21 we postulated that such an immunological configuration would be associated with improved OS. However, VSIR levels did not influence the OS of patients with TNBC from the TCGA, neither when patients were stratified based on median VSIR levels (Figure 4(b)), nor when VSIR was assessed as a continuous variable (HR: 1.34; 95% CI: 0.84-2.15; p = .22).
We thus hypothesized that other immunological features of the tumor microenvironment of patients with TNBC from the TCGA could be relevant. We therefore tested the relative abundance of multiple immune cell subsets in patients with higher-than-median (VSIR-high) versus lower-than-median (VSIR-low) VSIR levels by harnessing the MCPcounter R package, which is based on gene signatures that identify specific immune cell populations. 35 As compared to their VSIR-low counterparts, VSIR-high tumors were enriched not only in lymphoid cells encompassing T cells, CD8+ T cells, cytotoxic lymphocytes, Treg cells, B cells, and NK cells (largely replicating the results of our Spearman correlation analysis), but also in cells from the monocytic lineage, myeloid dendritic cells, MDSCs, macrophages, and neutrophils (Figure 4(c)). Consistent with these findings, unsupervised hierarchical clustering of patients with TNBC from the TCGA based on the 400 most differentially expressed genes between VSIR-high and VSIR-low tumors identified two major patient clusters that were almost precisely determined by VISTA status (Figure 4(d)) and were largely defined by signatures of immunological competence (VSIR-high vs. VSIR-low) (Figure 4(e)). Thus, VSIR-high TNBCs stand out as tumors with a complex lymphoid and myeloid infiltrate.
We next tested whether immunosuppressive features of the myeloid immune infiltrate would correlate with VISTA levels in this patient subset. We found that VSIR levels correlate (to variable degrees) with the abundance of TGFB1 and IL10 (coding for two cytokines with robust immunosuppressive activity), ENTPD1 and NT5E (which code for two ectonucleotidases involved in the generation of the immunoregulatory metabolite adenosine), 55,56 IDO1 (encoding an intracellular enzyme that degrades tryptophan, which is required for optimal T cell activity, into the immunosuppressive metabolite kynurenine), 57,58 as well as CD38 (which codes for another extracellular enzyme with immunoregulatory activity) 59 (Figure 4(f)). In line with this notion, VSIR-high TNBCs significantly differed from their VSIR-low counterparts in the relative abundance of each of these transcripts, taken individually and in association, conveying a global signature of myeloid immunosuppression (Figure 4(g)).
With the caution imposed by the transcriptomic analysis of a single patient cohort, the immunological signature we documented in VSIR-high TNBCs lends further support to our preclinical findings, indicating that an optimal therapeutic response to radiation therapy plus VISTA inhibitors may require not only immune checkpoint blockers to offset immunosuppression in the lymphoid compartment, but also strategies to target myeloid immunosuppression (cyclophosphamide).
Discussion
The success of ICIs targeting CTLA4 and PD-1 in the management of an ever-growing list of malignancies underscores the key relevance of immunosuppressive pathways that prevent T cells from effectively recognizing and killing their neoplastic counterparts. 60 However, while durable responses to ICI-based immunotherapy have been documented in a fraction of patients with solid tumors, most patients fail to respond to ICIs employed as single agents. Efforts to increase response rates by simultaneously blocking the two co-inhibitory receptors CTLA4 and PD-1 have been successful in patients with melanoma and lung cancer, but at the expense of increased toxicity. 61-63 In addition, DNA-damaging agents with immunostimulatory effects, such as focal RT, have been shown to synergize with ICIs in some patient populations. 17,18,64 These findings demonstrate that combining multiple immunotherapies with non-overlapping mechanisms of action may constitute a valuable strategy to increase the response rate to ICI-based immunotherapy.
PD-1 and VISTA regulate immune responses via non-overlapping pathways, and concurrent targeting of PD-1 and VISTA has been shown to improve the control of mouse CT26 colorectal carcinomas as compared to either agent employed as monotherapy. 28 However, we found that 4T1 tumors are refractory to VISTA blockade alone as well as to dual PD-1/VISTA blockade (Suppl. Fig. 1). Prior work by Le Mercier and colleagues demonstrated that antibody-mediated VISTA blockade limits the growth of various mouse tumors, at least in part by depleting MDSCs. 65 In our study, we used the same antibody clone as Le Mercier and collaborators, 65 pointing to the highly immunosuppressive microenvironment established by growing 4T1 tumors as the reason for its limited monotherapeutic activity. Consistent with this notion, VISTA blockade was able to deplete MDSCs in the microenvironment of 4T1 tumors only when given with RT (Figure 3).
Moreover, administration of the VISTA-targeting antibody significantly improved the control of irradiated 4T1 tumors and of metastatic dissemination, an effect comparable to that of PD-1 blockade. Dual PD-1/VISTA blockade failed to further improve tumor control in this setting (Figure 1). However, when low-dose cyclophosphamide was administered before RT, we observed a significant improvement in tumor control and OS in mice treated with RT plus PD-1 or VISTA blockade, and the combination of all four therapies (cyclophosphamide, RT, PD-1 blockade, VISTA blockade) further extended OS, resulting in almost complete control of lung metastases, independently of the time of cyclophosphamide administration (Figure 1 and Suppl. Fig. 2).
In the absence of immunotherapy, cyclophosphamide did not increase RT-mediated tumor control (Suppl. Fig. 1), suggesting a role for the immunomodulatory effects of cyclophosphamide in the improved tumor responses enabled by ICIs. Such effects have generally been linked to the depletion of intratumoral Treg cells, which in rodents are more sensitive to cyclophosphamide than conventional T cells. 66 There was a small and temporary reduction in Treg cells in the spleen and tumor of 4T1 tumor-bearing mice treated with RT and cyclophosphamide, but Treg cells represented the majority of the CD4 compartment in tumors exposed to various combinations of RT and ICIs regardless of cyclophosphamide (Figure 2 and Suppl. Fig. 2). Thus, it is unlikely that the ability of cyclophosphamide to dramatically enhance the priming of tumor-specific CD8+ T cells in mice treated with RT and dual PD-1/VISTA blockade (Figure 2) originates from Treg cell depletion. Cyclophosphamide has also been shown to promote the activation of cytotoxic CD8+ T cells and TH1/TH17 polarization of CD4+ T cells, 8 at least in part through its ability to reshape the intestinal microbiota. 67 Thus, it is conceivable that the improved anti-tumor T cell responses observed in mice treated with cyclophosphamide plus RT and dual PD-1/VISTA blockade reflect at least some degree of systemic immunomodulation by cyclophosphamide 68 coupled to (1) MDSC depletion by cyclophosphamide and (2) de-repression of the effector phase of the immune response in the tumor microenvironment by VISTA and PD-1 blockade.
With the caveats associated with a retrospective transcriptomic study based on a relatively small patient cohort, our preclinical findings are supported by the fact that the microenvironment of TNBCs with high VSIR levels in the TCGA database is enriched in gene signatures pointing to robust myeloid immunosuppression (Figure 4). Moreover, CD68+ macrophages have recently been identified as an important reservoir of VISTA-expressing cells in prostate and pancreatic tumors, 69,70 suggesting a key role for VISTA in the myeloid tumor microenvironment. Of note, RT can drive robust tumor infiltration by myeloid cells, as shown by a study in non-metastatic prostate cancer patients 71 and by multiple preclinical studies suggesting that RT exerts a broad and complex effect on the recruitment, removal, reorganization, repolarization and/or representation of tumor-infiltrating myeloid cells. 72,73 In prostate tumors, treatment with fractionated low-dose RT led to elevated levels of macrophage colony-stimulating factor 1 (CSF1), a key cytokine driving the systemic accumulation of MDSCs. 74 In this context, RT-induced DNA damage was shown to mediate the nuclear translocation of ABL proto-oncogene 1, non-receptor tyrosine kinase (ABL1) and consequent Csf1 transactivation. On the other hand, low-level type I interferon secretion elicited by RT has been implicated in MDSC recruitment via C-C motif chemokine receptor 2 (CCR2), 75 which not only supports Treg infiltration upon RT but has also been proposed as a biomarker of cyclophosphamide sensitivity. 76,77 Most importantly, selective targeting of these axes, either by small-molecule inhibitors of the CSF1 receptor or by CCR2 antagonists, has defined a new therapeutic partnership to increase patient responses to RT. Our study suggests that VISTA blockade stands out as an additional pathway through which the detrimental effects of the myeloid (and potentially Treg) cell accumulation driven by RT can be overcome.
In conclusion, our data suggest that the immunological rejection of tumors that are resistant to ICIs may require treatments that act at multiple levels, encompassing not only the robust activation of ICD (and hence an increased availability of tumor-associated antigens and danger signals), as effectively elicited by RT, but also the neutralization of immunosuppressive circuitries involving lymphoid and myeloid compartments, as mediated by multiple ICIs and cyclophosphamide, respectively. Moreover, these findings suggest that tumor types with prominent MDSC-dependent immunosuppression may benefit from combinatorial therapies that also target this compartment. Additional studies are required to translate these observations into clinical trials.
Disclosure of Potential Conflicts of Interest
M.H. and J.F. are full-time employees of Sotio. L.G. declares research funding from Lytix and Phosplatin, and speaker and/or advisory honoraria from Boehringer Ingelheim, AstraZeneca, OmniSEQ, The Longevity Labs, Inzen, and the Luke Heller TECPR2 Foundation. S.D. has received honoraria for consulting from AstraZeneca, AbbVie, Lytix Biopharma, EMD Serono, Eisai Inc., Cytune Pharma, and Regeneron, and research grants from Nanobiotix and Lytix Biopharma. S.C.F. has received honoraria for consulting/speaking from AstraZeneca, Merck, Regeneron, Bayer, and Serono, and research funding from Varian, Merck, and Bristol Myers Squibb. All other authors have no conflicts of interest to disclose. As per standard operations at OncoImmunology, L.G. has been excluded from all steps of the editorial evaluation of the present article.
Body mass index was linked with multi-cardiometabolic abnormalities in Chinese children and adolescents: a community-based survey
Background Evidence on how body mass index (BMI) influences cardiometabolic health remains sparse in Chinese children and adolescents, especially in south China. We aimed to investigate the effect of overweight and/or obesity on high blood pressure (HBP), dyslipidemia, elevated serum uric acid (SUA) and their clustering among children and adolescents on an island in South China. Methods Using a multi-stage cluster sampling method, 1577 children and adolescents aged 7–18 in Hainan province, south China, participated in the survey. The association between body mass index and cardiometabolic indexes was explored. Overweight and obesity were classified according to the World Health Organization criteria for children and adolescents aged 5 to 19. Restricted cubic spline models were used to examine possible non-linear associations between BMI and cardiometabolic profiles. Multivariable logistic regression models were fitted to examine the effect size of BMI on cardiometabolic disorders including HBP, elevated SUA and dyslipidemia. Comorbidity of at least two cardiometabolic abnormalities (HBP, dyslipidemia, elevated SUA) was defined as clustering of cardiometabolic risk factors. Results Compared with normal weight and underweight subjects, overweight/obese youths had higher levels of BP, SUA, triglyceride and low-density lipoprotein but a lower level of high-density lipoprotein. Overweight/obese youths had higher risks of dyslipidemia (OR: 2.89, 95% CI: 1.65–5.06), HBP (OR: 2.813, 95% CI: 1.20–6.59) and elevated SUA (OR: 2.493, 95% CI: 1.45–4.27), respectively, than their counterparts. The sex- and age-adjusted prevalence of abnormality clustering was 32.61% (95% CI: 20.95% to 46.92%) in the overweight/obesity group, much higher than in the under/normal weight group (8.85%, 95% CI: 7.44% to 10.48%). Conclusion Excess adiposity increased the risk of elevated serum uric acid, serum lipids, blood pressure and their clustering among children and adolescents in south China.
Introduction
Cardiometabolic risk factors in childhood, such as high blood pressure, elevated serum uric acid and dyslipidemia, are associated with earlier onset and greater risk of chronic diseases in adulthood [1][2][3]. Excess adiposity is associated with childhood metabolic profiles [4] and may increase the risk of cardiometabolic abnormalities such as high blood pressure (HBP), dyslipidemia, insulin resistance and elevated serum uric acid (SUA) [5][6][7]. Overweight and obesity in children and adolescents have become a significant public health issue for both developed and developing countries, given their rapid increase over the past few decades [8,9]. The prevalence of overweight and obesity in Chinese children has also increased continuously over the past thirty years [10,11].
Since cardiometabolic disease develops gradually, it is important to identify children and adolescents who are at high risk or who already have abnormalities. Previous studies exploring the association between excess adiposity and pediatric cardiometabolic risk factors [12][13][14][15][16] have provided valuable health information, but these health profiles did not take elevated serum uric acid into consideration, and investigations of the comorbidity of multiple cardiometabolic abnormalities are sparse, especially among Chinese youths. Hainan is an island at the southernmost tip of China with a tropical climate, and it has long been on the fringe of the Chinese cultural sphere. Little research has been carried out in Hainan, partly because of its geographic location, and little is known about youths' cardiometabolic health profiles there. In 2013, we carried out a cross-sectional study among children and adolescents aged 7-18 in Hainan province with the purpose of exploring the effect of excess body weight on cardiometabolic health profiles in this youth population. The evidence from this study may provide useful information for the early prevention of cardiometabolic risk among children and adolescents.
Study design and population
The present study is a cross-sectional design based on data collected in Hainan province, an island in southernmost China. The sampling method was the same as in our previous studies [17,18]. Briefly, from November to December 2013, a multi-stage stratified cluster sampling method was used to enroll participants. In the first stage, the provincial capital city (Haikou), one mid-sized city (Zhanzhou) and two counties (Changjiang and Baisha) were selected based on their economic status, measured by local gross domestic product (GDP). In the second stage, districts were selected from urban areas and townships from rural areas. In the last stage, communities were selected from districts and villages from townships. The inclusion criteria were: aged 7-18 years and residence at the current address for at least one year. The exclusion criteria were: youths who had been diagnosed with high blood pressure or type 1 diabetes, or who took medication for these diseases. Ethical approval was obtained from the Bioethical Committee of the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences. Informed consent was obtained from the parents/guardians of the participants.
As several cardiometabolic disorders were measured in this study, we used the lowest prevalence among them to ensure statistical power. Based on our previous analyses, the prevalence of dyslipidemia and high blood pressure among children and adolescents were 20% and 7% [17,18], respectively. As there was no standard definition of elevated serum uric acid in children and adolescents, we used the prevalence of high blood pressure to calculate the sample size, with the formula n = Z_α² × pq / d², where α is the significance level, p is the prevalence of HBP, q equals 1 − p, and d is the error tolerance. For a significance level of 0.05 and an error tolerance of 0.2 × p, the estimated minimum sample size was 1276. An additional 20% was added to the minimum sample size to allow for possible non-compliance, giving a target of 1532 subjects. Finally, 1609 children and adolescents participated in the study.
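The sample-size arithmetic can be reproduced directly; the snippet below simply evaluates the formula with the stated inputs (p = 0.07, d = 0.2p, Z = 1.96 for α = 0.05):

```python
from math import ceil

# Sample size for estimating a prevalence: n = Z_alpha^2 * p * q / d^2,
# with p = 0.07 (HBP prevalence), q = 1 - p, d = 0.2 * p and Z = 1.96.
p, z = 0.07, 1.96
q, d = 1 - p, 0.2 * p
n_min = ceil(z**2 * p * q / d**2)

print(n_min)                # -> 1276, the minimum sample size
print(ceil(n_min * 1.2))    # -> 1532, after the 20 % non-compliance margin
```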
Measurements
A standardized face-to-face questionnaire interview was conducted by the same trained staff to collect information on demographic characteristics at community health centers or village clinics in the study sites. Height was measured to the nearest 0.1 cm using a fixed stadiometer, by the same staff, to avoid systematic measurement error. Body weight was measured barefoot in light clothing using a body composition analyzer (TANITA BC-420, Japan) with decimal accuracy. BMI was calculated as weight in kilograms divided by the square of height in meters (kg/m²). Using a digital blood pressure measuring device (Omron HEM-907, Japan) with a child-sized cuff, systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured on the right arm after at least 5 min in a sitting position on the morning of the survey; the average of the three measurements was recorded. A 9 mL venous blood sample (after at least 8 h of overnight fasting) was drawn from each participant by qualified nurses for serum lipid and uric acid tests on a chemistry analyzer (ROCHE Cobas 8000 C701, USA). On each survey day, the samples were kept in a portable, insulated cool box with ice packs at 0-4 ℃ for up to 3 h before being transported to the laboratory of the local center for disease control and prevention (CDC) for immediate processing. Serum lipid tests included total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C).
Definitions
Underweight, normal weight, overweight and obesity were classified according to the World Health Organization (WHO) criteria for children and adolescents aged 5 to 19 [19]: BMI-for-age lower than two standard deviations (SDs) below the WHO Growth Reference median was classified as underweight, greater than one SD above the median as overweight, and greater than two SDs above the median as obesity. Serum lipid disorders were defined according to the National Heart, Lung, and Blood Institute (NHLBI) cholesterol screening guidelines and cut points backed by the American Academy of Pediatrics [20], as described in detail in our previous study [18].
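Expressed in terms of BMI-for-age z-scores, the WHO cutoffs translate into a small classifier; obtaining the z-score itself from the WHO Growth Reference LMS tables for the child's sex and age is assumed and not shown here:

```python
# Sketch of the WHO 5-19 y classification used above, in terms of the
# BMI-for-age z-score (in practice computed from the WHO Growth Reference
# LMS tables for the child's sex and age, which is assumed here).
def who_weight_status(bmi_z: float) -> str:
    if bmi_z < -2:
        return "underweight"
    if bmi_z > 2:
        return "obesity"
    if bmi_z > 1:
        return "overweight"
    return "normal weight"

print(who_weight_status(1.4))  # -> overweight
```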
Blood pressure was classified into normal BP and high blood pressure based on the updated guidelines of the Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents [21]. As there is no universally accepted definition of childhood hyperuricemia, we used 5.7 mg/dl (339 μmol/l) as the criterion for elevated serum uric acid, because in the National Health and Nutrition Examination Survey (NHANES) population SUA above 5.7 mg/dl increased the risk of metabolic syndrome in youths [22].
Comorbidity of at least two cardiometabolic abnormalities (HBP, dyslipidemia, elevated SUA) was defined as clustering of cardiometabolic risk factors.
Statistical analyses
After excluding subjects with missing data on main risk factors (height, weight, blood pressure, serum uric acid, serum lipids), data on 1577 youths were analyzed.
Summary results are presented as mean (SD) for normally distributed continuous data, median (interquartile range, IQR) for non-normally distributed continuous data, and counts (percentage, %) for categorical data. The t-test (for normally distributed data) or the Mann–Whitney U test (for non-normally distributed data) was used to compare continuous data between two groups. The chi-square test was used to compare grouped data. Two-way analysis of covariance (for normally distributed data) or quantile regression models (for non-normally distributed data) were used to compare cardiometabolic profiles among body weight groups after adjustment for potential confounders. Partial correlation analysis was performed to assess the correlation between BMI and cardiometabolic indexes after adjusting for potential confounders, and scatter plots were drawn to show these correlations. Factors associated with both BMI and cardiometabolic indexes, such as age and residential area, were entered as covariates in the adjusted regression models to avoid the potential confounding caused by these factors.
Logistic regression models were used to calculate the multi-variable adjusted prevalence of elevated SUA, HBP and dyslipidemia among different body weight groups [23]. Restricted cubic spline (RCS) models were used to examine the possible non-linear association between BMI and cardiometabolic variables. Tests for non-linearity used the likelihood ratio test, comparing the model with only the linear term to the model with linear and the cubic spline terms [24]. Logistic regression models were fitted to examine the effect size of BMI on cardiometabolic disorders including high blood pressure, elevated SUA and dyslipidemia.
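A Python analogue of the spline-versus-linear likelihood-ratio test described above is sketched below (the authors worked in SAS); the dataframe, column names and the choice of 4 spline degrees of freedom are assumptions for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Fit a logistic model with BMI entered linearly, then with a natural
# cubic spline basis (patsy's cr()), and compare them by likelihood ratio.
# `df`, the column names (hbp, bmi, age, sex, area) and df=4 are all
# placeholder assumptions, not the authors' SAS code.
def test_nonlinearity(df: pd.DataFrame) -> float:
    linear = smf.logit("hbp ~ bmi + age + C(sex) + C(area)",
                       data=df).fit(disp=0)
    spline = smf.logit("hbp ~ cr(bmi, df=4) + age + C(sex) + C(area)",
                       data=df).fit(disp=0)
    lr = 2 * (spline.llf - linear.llf)          # likelihood-ratio statistic
    extra_df = spline.df_model - linear.df_model
    return chi2.sf(lr, extra_df)                # p-value for the spline terms
```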
A p-value < 0.05 (two-tailed) was considered statistically significant. Because of the limited number of obese participants of both sexes, obesity was combined with overweight into one group, presented as overweight/obesity in the present study. All statistical procedures were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA).
Basic characteristics and cardiometabolic indexes
The demographic and basic clinical information is presented in Table 1. The mean age of all participants was 13.59 ± 3.25 years (13.25 ± 3.06 in boys and 13.84 ± 3.37 in girls). Boys and girls differed in age distribution, height, weight, SUA, blood pressure and serum lipids (all p values less than 0.05).
The association between BMI and cardiometabolic indexes
The correlations between BMI and SUA, blood pressure and serum lipids, stratified by sex, are presented as scatter plots in Fig. 1 together with their partial correlation coefficients. All cardiometabolic indexes were significantly correlated with BMI (p values less than 0.01 in both sexes), but the effect sizes of the coefficients varied greatly. The correlation between BMI and SBP appeared stronger than that with DBP in both sexes (boys: r = 0.256 for SBP and 0.117 for DBP; girls: r = 0.200 for SBP and 0.040 for DBP). For boys, the correlation between BMI and SUA appeared stronger than for the other indexes (r = 0.315), but this correlation was weaker in girls (r = 0.154).
The comparisons between the underweight, normal weight and overweight/obesity groups in SUA, blood pressure and serum lipids are presented in Table 2. After adjustment for age and residential areas, the overweight/obesity group had higher SUA in both sexes (p = 0.016 in boys; p < 0.001 in girls). For blood pressure, only in girls did the overweight/obesity group have higher SBP (p < 0.001); no difference was detected in DBP. In both boys and girls, the overweight/obesity group had higher levels of TG and LDL-C but lower HDL-C than their counterparts (p < 0.05).
The association between BMI and cardiometabolic disorders
The restricted cubic splines demonstrating the relationship between BMI and elevated SUA, HBP and dyslipidemia, adjusted for age, sex and residential area, are presented in Fig. 2. When BMI was entered into the regression models as a continuous variable, only linear positive associations of BMI with elevated SUA (P < 0.001) and dyslipidemia (P < 0.001) were observed; neither a linear nor a non-linear significant association between BMI and HBP was detected by RCS (P > 0.05). When BMI was entered as a grouped variable (under/normal weight vs. overweight/obesity) in sex-specific logistic regression models, increased risks of cardiometabolic disorders were observed in the overweight/obesity group. In the overall model, overweight/obese youths had 1.89, 1.81 and 1.49 times higher risk of dyslipidemia, HBP and elevated SUA, respectively, compared with under/normal weight youths (Fig. 2).
Generally, the prevalence of elevated SUA, HBP and dyslipidemia increased with higher BMI classification. Overall, in the underweight, normal weight and overweight/obesity groups, the prevalence of elevated SUA was 32.71%, 37.35% and 58.21%, respectively; the prevalence of HBP was 5.14%, 5.48% and 10.45%, respectively; and the prevalence of dyslipidemia was 16. The sex-specific crude prevalence and adjusted prevalence of elevated SUA, HBP and dyslipidemia in the different BMI groups are presented in Fig. 3. Girls had a lower prevalence of elevated SUA than boys in the underweight and normal weight groups (both adjusted p < 0.001), but a similar prevalence in the overweight/obese group (adjusted p = 0.4507). No other sex difference in the prevalence of cardiometabolic disorders was found across body weight groups.
Clustering of cardiometabolic abnormalities
The crude prevalence of clustering of at least two cardiometabolic abnormalities in the under/normal weight and overweight/obesity groups was 9.21% and 23.88%, respectively (Fig. 4-A). After adjusting for sex, age and residential area, the prevalence of abnormality clustering was 32.61% (95% CI: 20.95% to 46.92%) in the overweight/obesity group (Fig. 4-B). The intersections of cardiometabolic abnormalities in the study population are presented as Venn diagrams in Fig. 4-C and D.
Discussion
In this study, based on a representative population sample, we explored the association between body mass index and cardiometabolic profiles, treating elevated serum uric acid as one of the components of cardiometabolic abnormalities, in a southernmost island of China. Our findings revealed that BMI was linked with multiple cardiometabolic abnormalities and their clustering. The prevalence of cardiometabolic abnormalities increased with elevated BMI, and the levels of serum lipids, blood pressure and serum uric acid were associated with overweight and/or obesity regardless of sex.

Fig. 1 The correlation between body mass index and cardiometabolic profiles by sex. A Boys; B Girls. The partial correlation coefficients were calculated after adjusting for age and residential areas.

The epidemic of overweight and obesity is growing among Chinese children [10,11] and varies greatly by region. Compared with the population in Northern China, both children and adults in Southern China have a smaller body size [25]. In a national survey, Hainan had a relatively low prevalence of overweight and obesity compared with other areas [26]. Consistent with that finding (2.3% in Hainan), youths in our study had an overweight/obesity prevalence of 4.25%. The prevalence of childhood cardiometabolic disorders shows substantial geographic variation in China. Based on a national school survey, the prevalence of childhood HBP and dyslipidemia was 18.2% and 15.8%, respectively [27], compared with crude prevalences of 5.9% and 18.8% for HBP and dyslipidemia in the present study. The lower prevalence of HBP in Hainan youths may be attributable to the tropical climate or dietary patterns [28], which needs further exploration.
The relationship between excess adiposity and cardiometabolic diseases in children and adolescents has been acknowledged by previous studies [12][13][14][15][29]. For example, the Bogalusa Heart Study found that 70% of obese youths aged 5-17 had at least one cardiovascular risk factor [30]. Zhang et al. also reported that overweight and/or obesity were associated with increased levels of cardiovascular risk factors among children aged 7.5-13 years in Guangdong, China [14]. Cardiovascular risk factors such as high blood pressure and dyslipidemia are positively associated with atherosclerotic lesions in youth [31,32], and elevated serum uric acid may further add to this burden of risk [33]. Obesity is a major cause of hyperuricemia in healthy children and adolescents without chronic conditions [34].

Fig. 2 The effect of body mass index on elevated serum uric acid, high blood pressure, and dyslipidemia among children and adolescents aged 7-18. Restricted cubic spline regression models were used to test the linear and non-linear relationships between BMI (as a continuous variable) and cardiometabolic abnormalities. Logistic regression models were used to test the effect of overweight/obesity (BMI as a grouped variable) on cardiometabolic abnormalities. All regression models were adjusted for age, sex and residential areas. A BMI and elevated serum uric acid; B BMI and high blood pressure; C BMI and dyslipidemia; D Forest plots reflecting the effect of overweight/obesity on multiple cardiometabolic abnormalities. BMI: body mass index
Our study revealed that increased BMI was linked with elevated SUA, and that this effect varies by sex. Boys had a much higher SUA level than girls, and the correlation between BMI and SUA was also stronger in boys. The negative association between SUA and BMI groups in boys may be due to the limited sample size and the cut-off value of serum uric acid. SUA and other metabolic components have complicated interactions: elevated SUA can influence other cardiometabolic risk factors and vice versa [7,22,35]. The mechanism linking BMI and uric acid may be that dysfunction of obese adipose tissue is associated with overproduction of uric acid [36], through increased uric acid-dependent intracellular and mitochondrial oxidative stress, activation of the nuclear transcription factor carbohydrate responsive element-binding protein, or inhibition of AMP-activated protein kinase [37][38][39].
Elevated body weight was also associated with an increased risk of serum lipid disorders [18,40], and this effect varies by sex. Girls had a higher prevalence of dyslipidemia than boys, especially in the overweight/obese group (48.26% vs. 32.15%). Overweight/obese girls were also more likely to suffer from dyslipidemia according to the logistic regression model. Although this sex disparity is inconsistent with the NHANES 1999-2006 results [41], which indicated a higher prevalence of dyslipidemia in boys, a meta-analysis in China supported our conclusion with the same direction of sex disparity [42]. The difference may therefore be attributable to diversity in genetic or socio-environmental factors. Recent studies indicated that triglycerides and triglyceride-rich lipoproteins are in the causal pathway for atherosclerotic cardiovascular disease, much like LDL-C [43,44]. The present study demonstrated that, in both sexes, higher TG and LDL-C were observed in the overweight/obese group. Although recent guidelines have not recommended lipid-lowering therapy [45], it is important to initiate lifestyle interventions, such as weight loss, to reduce the hazard of further health damage.
Overweight and obesity are important risk factors for HBP in children [17,46]. In line with other studies [29,47,48], the current study revealed that overweight/obese children and adolescents had higher BP levels, especially in boys. The partial correlation analysis suggested that the relationship of BMI with SBP was stronger than with DBP, implying that SBP may be the more sensitive measure when monitoring high blood pressure in overweight or obese youths.

Fig. 4 The co-morbidity of cardiometabolic abnormalities among children and adolescents aged 7-18. A The distribution of cardiometabolic abnormalities in different BMI groups; B The adjusted prevalence of having at least two cardiometabolic abnormalities in different BMI groups; C Venn diagrams reflecting the co-morbidity of cardiometabolic abnormalities in the overall participants; D Venn diagrams reflecting the co-morbidity of cardiometabolic abnormalities in different body weight groups
Few studies have explored the clustering of cardiometabolic abnormalities. Seo et al. investigated cardiovascular disease risk factor clustering (CVD-RFC) among Korean children and adolescents aged 6-15 [49]; treating elevated BP and serum lipids as the clustered factors, they also found a positive association between excess adiposity and CVD-RFC. Data from the Bogalusa Heart Study [30] implied that over half of obese children had at least two cardiometabolic risk factors, including adverse levels of serum lipids, insulin and blood pressure. As elevated serum uric acid has come to be considered one of the most important risk factors for cardiovascular or cardiometabolic diseases, the clustering of risk factors including elevated SUA should be given more attention among youths. Our study indicated that among overweight/obese participants, 49% had one abnormality and 24% had at least two cardiometabolic abnormalities, much higher proportions than among their counterparts. Among the cardiometabolic abnormalities, elevated SUA had a relatively high prevalence, even in normal weight children and adolescents. Therefore, the early prevention of hyperuricemia should be considered an important intervention target in youths.
The investigation of the co-morbidity of high blood pressure, serum lipid abnormalities and elevated SUA could provide a more comprehensive estimation of disease risk and clues for risk factor prevention among children and adolescents. Several studies have reported cardiovascular risk factor clustering in Chinese children and adolescents [25], but their measures of clustering differ from ours, making direct comparison difficult.
The strength of our study is the representative large sample size and the co-morbidity estimation of cardiometabolic abnormality clustering among children and adolescents. Given the limited data on cardiometabolic abnormality clustering, especially the clustering of elevated SUA with other risk factors among Chinese children and adolescents, this study provides evidence for estimating the disease burden in youths in South China. In addition, we used both linear and non-linear models to examine the association between excess adiposity and cardiometabolic abnormalities from multiple angles. Nonetheless, the limitations of the current study should also be acknowledged. Firstly, using cross-sectional data, we cannot make causal inferences between excess adiposity and cardiometabolic abnormalities. However, given that both epidemiological and Mendelian randomization studies have supported a causal role of overweight/obesity in the onset of cardiometabolic disorders [50,51], and that intervention on body weight prompted by existing disease would, if anything, lead to underestimation, the observed associations between body weight and cardiometabolic profiles are robust and informative. Secondly, the lack of data on insulin resistance, diet, sociocultural and economic status, and parental characteristics limited our exploration.
In summary, with a representative community-based sample and rigorous methodology, we found that excess adiposity increased the risk of elevated serum uric acid, serum lipid abnormalities, high blood pressure and their clustering among children and adolescents on a southernmost island of China. This study highlights the risk that overweight and/or obesity poses for multiple cardiometabolic abnormalities; it should help policy makers and health practitioners understand the current local burden of childhood disease and its risk factors, and inform early intervention.
|
2022-01-10T14:37:42.154Z
|
2022-01-10T00:00:00.000
|
{
"year": 2022,
"sha1": "3ca7ad12272f50778e4b82d4a2b732542d5d433b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "3ca7ad12272f50778e4b82d4a2b732542d5d433b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
261064804
|
pes2o/s2orc
|
v3-fos-license
|
Neural-Network Force Field Backed Nested Sampling: Study of the Silicon p-T Phase Diagram
Nested sampling is a promising method for calculating phase diagrams of materials; however, the computational cost limits its applicability if ab-initio accuracy is required. In the present work, we report on the efficient use of a neural-network force field in conjunction with the nested-sampling algorithm. We train our force fields on a recently reported database of silicon structures and demonstrate our approach on the low-pressure region of the silicon pressure-temperature phase diagram between 0 and 16 GPa. The simulated phase diagram shows a good agreement with experimental results, closely reproducing the melting line. Furthermore, all of the experimentally stable structures within the investigated pressure range are also observed in our simulations. We point out the importance of the choice of exchange-correlation functional for the training data and show how the meta-GGA r2SCAN plays a pivotal role in achieving accurate thermodynamic behaviour using nested-sampling. We furthermore perform a detailed analysis of the exploration of the potential energy surface and highlight the critical role of a diverse training data set.
INTRODUCTION
Nested sampling (NS) is a powerful Bayesian method that can efficiently sample high-dimensional parameter spaces. [1,2] The applications of NS in materials science have progressed steadily in the past decade. While early investigations mainly focused on simple model systems such as Lennard-Jones [3] or hard sphere models [4], more recent work has used embedded-atom potentials to study a variety of metallic systems, including elemental metals such as Fe, Zr, and Li [5][6][7], as well as alloys like CuAu [8,9], AgPd [10] and CuPt nano-particles [11].
With the emergence of efficient machine-learned force fields (MLFFs), the sampling of potential energy surfaces (PES) with ab initio accuracy has become affordable for NS. In this context, NS has been applied in conjunction with Gaussian approximation potentials (GAPs) and moment tensor potentials (MTPs) to predict the thermodynamic behavior of carbon [12], platinum [13] and the AgPd alloy [14], respectively.
MLFFs use statistical learning techniques to approximate the potential energy surface of a material. [15] Unlike classical interatomic potentials, MLFFs do not require extensive parametrization. Instead, they provide a highly flexible functional form that has the ability to generalize across different chemical environments. Trained on datasets obtained from ab initio calculations, MLFFs can thus capture the physics of the system on par with the underlying method. However, two critical factors are paramount for their successful application: firstly, the diversity and representativeness of the training dataset [16], and secondly, the quality of the selected ab-initio method.
Concerning the ab initio method, Kohn-Sham density functional theory (DFT) has been the method of choice for calculating the properties of solid-state materials. A crucial aspect of DFT is the exchange-correlation functional, which incorporates electron-electron interactions within the system. [17] As an exact formulation of this functional is not available, various approximations are employed, and the choice of approximation significantly impacts the accuracy of DFT for a given problem.
Traditionally, the suitability of exchange-correlation functionals has been assessed based on ground-state properties, such as lattice parameters or cohesive energies. [18][19][20] Despite their importance for practically relevant predictions, finite-temperature properties, such as the melting point, have been less explored as target properties for evaluating the suitability of functionals, owing to the computational complexity of obtaining them. [21,22] Using NS together with MLFF-based models to conduct an exhaustive exploration of the PES and give access to finite-temperature thermodynamic behaviour can bridge this gap, and thus open the door to a much more comprehensive evaluation of functional performance over a broader range of conditions. Backed by our recently developed neural-network force field (NNFF) architecture [23,24], here we demonstrate this aspect in a NS study of the low-pressure silicon p-T phase diagram. We show how the choice of a suitable exchange-correlation functional crucially influences the predicted melting temperature of Si over a large pressure range.
In contrast to simple metallic systems, silicon stands out through its variability in chemical bonding. In its low-pressure allotrope, strong directional bonds lead to the characteristic tetrahedral coordination of the semiconducting cubic diamond phase. At higher pressures the system transitions to more closely packed structures like the well-known β-Sn phase. These circumstances complicate the use of classical interatomic force fields: due to their rigid functional form, such models are usually very poorly transferable and thus work only for the specific phases and properties they were designed for. [25]
This diversity of chemical bonding requires a correspondingly diverse set of training data to be representative of the rich phase behaviour. To address this, we perform a detailed analysis of the configurations explored by NS, revealing a wide range of attraction basins and regions of the potential energy surface visited during the simulation. For the training data we re-evaluate the database of Bartók et al. [26], which contains around 2475 manually curated silicon structures. We show how this database, a result of the continuous efforts to create general-purpose MLFFs, possesses the diversity and representativeness necessary to deliver accurate thermodynamic predictions from a NNFF-backed NS simulation.
METHODOLOGY

DFT
To assess the effect of the exchange-correlation functional, we recomputed the energies and forces of the database provided by Bartók et al. [26]. For the DFT calculations the PBE and r2SCAN functionals as implemented in VASP [27,28] were used. [29,30] The cutoff energy for the plane-wave basis was chosen as 300 eV. The partial occupancies of the orbitals were determined employing Fermi smearing with a smearing parameter of 0.025 eV. The reciprocal-space sampling was performed on a Monkhorst-Pack grid with a k-spacing of 0.3 Å⁻¹, and the energy convergence criterion was set to 10⁻⁵ eV. For the evaluation of energy-volume curves we used a denser sampling with a k-spacing of 0.2 Å⁻¹ and a tighter convergence criterion of 10⁻⁸ eV. We removed one configuration from the database, in which a single atom is placed in a large vacuum.
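As an illustration, these settings could be assembled with ASE's VASP interface roughly as follows; the exact tag spellings, in particular for selecting r2SCAN, are an assumption and may need adjustment for a given VASP/ASE version.

    from ase.calculators.vasp import Vasp

    # Sketch of the single-point settings described above (r2SCAN variant)
    calc = Vasp(
        metagga="R2SCAN",  # assumed tag for the r2SCAN functional
        encut=300,         # plane-wave cutoff (eV)
        ismear=-1,         # Fermi smearing
        sigma=0.025,       # smearing width (eV)
        kspacing=0.3,      # k-point spacing (1/Angstrom); 0.2 for E-V curves
        ediff=1e-5,        # energy convergence (eV); 1e-8 for E-V curves
    )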
Neural-Network Force Field
The simulations in this work use the NEURALIL architecture [23,24]. Atomic coordinates are encoded into atom-centered descriptors that are invariant with respect to global rotations and translations, with relative positions of neighbors transformed into second-generation spherical Bessel descriptors [31]. The descriptors are fed into a ResNet-inspired [32] model, which includes a repulsive Morse contribution to avoid unphysical behavior for short interatomic distances [33]. The implementation uses JAX [34] for just-in-time compilation and automatic differentiation, and FLAX [35] for simplified model construction and parameter bookkeeping.
In order to compute second-generation spherical Bessel descriptors for our training configurations, we rely on the minimum image convention. This means that we require the training dataset to have cells large enough to fit a sphere with the corresponding cutoff radius. However, since training datasets often contain structures with a variety of different cell sizes, we need to apply a special procedure to handle this. We use an iterative process to generate diagonal supercells of increasing size and perform a Minkowski reduction [36] to make the cell as compact as possible. This process continues until the desired cutoff fits into the cell, at which point the cycle is stopped. The result is a set of supercell structures that conform to the cutoff parameter for the descriptor generation. However, this set may still contain configurations with significantly varying numbers of atoms. To ensure that our JAX-based approach is efficient, it is important to have static array sizes. Therefore, we perform a padding procedure with ghost atoms to fill up all supercells until the number of atoms is constant. This allows us to generate second-generation spherical Bessel descriptors efficiently and accurately for all configurations in the training dataset, regardless of their cell size and number of atoms.
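A minimal sketch of such a supercell-fitting loop, assuming ASE's geometry utilities; the function names are illustrative, and the criterion used here (every face-to-face distance of the Minkowski-reduced cell at least twice the cutoff) is one simple way to guarantee that the cutoff sphere fits.

    import numpy as np
    from ase.geometry import minkowski_reduce

    def cell_heights(cell):
        # Perpendicular distance between opposite faces for each lattice vector.
        volume = abs(np.linalg.det(cell))
        cross = np.cross(np.roll(cell, 1, axis=0), np.roll(cell, 2, axis=0))
        return volume / np.linalg.norm(cross, axis=1)

    def fit_cutoff_supercell(atoms, r_cut):
        # Grow a diagonal supercell until a sphere of radius r_cut fits
        # inside the Minkowski-reduced cell (minimum image convention holds).
        rep = np.ones(3, dtype=int)
        while True:
            sup = atoms.repeat(tuple(rep))
            reduced, _ = minkowski_reduce(np.asarray(sup.cell))
            if cell_heights(reduced).min() >= 2.0 * r_cut:
                return sup
            # Enlarge the currently most limiting direction of the diagonal cell.
            rep[np.argmin(cell_heights(np.asarray(sup.cell)))] += 1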
For the descriptor generation we consider atomic environments within a cutoff of r_cut = 4 Å and choose n_max = 6 [23]. For each training run we randomly split the database, using 3/4 of the data for training and the rest for validation.
Nested Sampling
NS partitions the configuration space into a nested sequence of phase space volumes confined by surfaces of iso-likelihood. Moving from the outer shells to the higher-likelihood inner shells in this nested sequence corresponds to the transition from a high-entropy fluid phase to more ordered crystalline states. Each iteration of the NS algorithm peels off a layer of the nested sequence, resulting in a corresponding sample. This approach to sampling ensures that only thermodynamically relevant structures are sampled, ultimately enabling the calculation of the thermodynamic partition function. [10].
During the NS process, a group of K walkers is continuously updated by replacing the highest-energy walker at each iteration. This replacement is carried out by performing a random walk with a cloned version of one of the remaining walkers, taking a total of L steps. In the case of a constant-pressure simulation, these steps involve modifying the simulation cell through isotropic volume changes, shear transformations, or stretching operations. Additionally, atom steps modify the positions of individual atoms. This is typically accomplished through Monte Carlo steps or moves that use atomic forces, such as Galilean Monte Carlo or Hamiltonian Monte Carlo methods [8,37]. Our NS calculations were performed with a modified version of the pymatnest code [8,10]. We retained most of the logic of pymatnest, but adapted the parallel workflow to be managed by a scheduler provided in the Python library Dask. In the original parallelization scheme [8], instead of the clone taking all the L steps required to decorrelate the configuration in one iteration, each of the available n_p processes is used to have n_p different walkers perform L/n_p steps each. Since we perform atom moves in sequences of several consecutive steps, and these moves involve more expensive force evaluations, they typically take significantly longer than single cell steps, which only require energy evaluations. As a result, depending on the random choice of step types, the computational work allocated to each processor can vary substantially. In contrast, our Dask implementation uses a pool of n_w workers to which individual walks are dynamically assigned. The improved load balance is schematically depicted in Fig. 1, showing two artificial scenarios for the original and the Dask parallelization. In both cases, the workload is handled by n_p = n_w = 4 processing units and the total walk length is L = 80 steps. In the original parallelization this corresponds to four walkers being walked for L/n_p = 20 steps each. In this scenario, the MPI code would be required to wait for all processes to complete their random walk, resulting in significant computational overhead. In the Dask example, by contrast, the random walk is split into n_t = 16 slices of length L/n_t = 5, and the workload is handled by the pool of n_w = 4 workers. Here, n_t is a parameter that should be chosen as a multiple of the number of processing units n_w. In the edge case of n_t = n_w, the Dask implementation becomes equivalent to the MPI scheme.
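The dynamic assignment can be sketched with dask.distributed as below; this is a toy illustration rather than the pymatnest code, and walk_slice merely stands in for a short slice of cell/atom Monte Carlo moves.

    from dask.distributed import Client

    def walk_slice(walker, n_steps):
        # Stand-in for n_steps cell/atom Monte Carlo moves on one walker clone.
        for _ in range(n_steps):
            walker += 1
        return walker

    if __name__ == "__main__":
        client = Client(n_workers=4)     # pool of n_w = 4 workers
        L, n_t = 80, 16                  # total walk length, number of slices
        slices = [client.submit(walk_slice, 0, L // n_t) for _ in range(n_t)]
        results = client.gather(slices)  # Dask balances slices across the pool
        client.close()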
After conducting a series of convergence tests on the pristine silicon system, we chose a number of walkers of K = 600 and a walk length of L = 1000 steps for our simulations. With these parameters, we usually find the correct phases and the statistical spread of the transition temperatures confined to a range of 200 K. Samples were generated through a process of NS random walks consisting of cell and atom-movement steps. The atom-movement steps were executed using the Galilean Monte Carlo algorithm in series of 8 consecutive steps. The step probability ratio for volume, stretch, shear, and atom steps was set to 2:1:1:1, respectively. We restrict our simulation cell to a minimum aspect ratio of 0.8, to avoid pathologically thin cells forming at the early stages of the sampling. [10] Steps violating this constraint are discarded. To initiate the sampling process, we first generated an initial set of 600 replicas of a cubic diamond cell with a density of 2.31 g/cm³. In a first step, we diversify the cell shapes of these structures by an initial isotropic volume scaling and a subsequent series of 1000 cell-shape-modifying steps. In each of these steps one shear and one stretch move is performed in random order. In a second step, to further decorrelate the walkers, a series of 10 NS random walks, each with a walk length of 100 steps, was performed on the generated structures. The energy threshold for these walks was chosen to be U_initial = U_max + N · 1 eV, where U_max is the energy of the highest-energy walker and N is the number of atoms. From the converged NS runs we compute the isobaric heat capacities according to

C_p = (3/2) N k_B + k_B β² (⟨Y²⟩ − ⟨Y⟩²),   (1)

where Y is the microscopic enthalpy, N is the number of atoms, k_B is the Boltzmann constant and β = (k_B T)⁻¹. The thermodynamic expectation values in equation (1) are evaluated using the NS partition function Δ(β) = Σ_i w_i e^(−βY_i), via

⟨O⟩ = (1/Δ(β)) Σ_i w_i O(R_i) e^(−βY_i),   (2)

where the sums run over all acquired samples, w_i are the nested-sampling weights and O is an arbitrary observable depending on the configuration R_i. We use the isobaric heat capacity C_p to locate first-order phase transitions.
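For concreteness, evaluating equations (1) and (2) from a list of sampled enthalpies and NS weights takes only a few lines; this is an illustrative numpy sketch, not the pymatnest implementation, and the (3/2)Nk_B kinetic term follows the convention stated above.

    import numpy as np

    KB = 8.617333262e-5  # Boltzmann constant in eV/K

    def heat_capacity(Y, w, T, n_atoms):
        # Isobaric heat capacity from NS samples with enthalpies Y and
        # weights w, following equations (1) and (2).
        beta = 1.0 / (KB * T)
        boltz = w * np.exp(-beta * (Y - Y.min()))  # shifted for stability
        Z = boltz.sum()
        mean_Y = (boltz * Y).sum() / Z
        mean_Y2 = (boltz * Y**2).sum() / Z
        return 1.5 * n_atoms * KB + KB * beta**2 * (mean_Y2 - mean_Y**2)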
Structure representation in 2D
Visualizing the structural variety occurring in high-dimensional spaces, such as the 3N_atoms-dimensional space of possible configurations for a system with N_atoms atoms, requires a projection into a lower-dimensional space. For that purpose, we utilize the same spherical Bessel descriptors used for encoding atomic environments for the NNFF. For a structure composed of N_atoms atoms, this results in a matrix of shape (N_atoms, n_features), which describes the complete structure. To make this representation invariant with respect to atom permutations within the structure, we compute the distribution of each of the n_features features as a histogram and divide it by the number of atoms N_atoms. After flattening, this yields a vector of length n_features · n_bins which is permutation invariant and independent of the system size. To visualize the permutation-invariant structure descriptors, we use principal component analysis (PCA) as implemented in the scikit-learn library [38]. The histograms are calculated using n_bins = 128 over the range from 0 to 4.
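A compact sketch of this fingerprinting and projection step, assuming the per-atom descriptor matrices are already available; the names and the two-component projection are illustrative.

    import numpy as np
    from sklearn.decomposition import PCA

    def structure_fingerprint(descriptors, n_bins=128, rng=(0.0, 4.0)):
        # Permutation-invariant fingerprint: per-feature histograms of the
        # (n_atoms, n_features) descriptor matrix, normalized by n_atoms.
        n_atoms, n_features = descriptors.shape
        hists = [np.histogram(descriptors[:, j], bins=n_bins, range=rng)[0]
                 for j in range(n_features)]
        return np.concatenate(hists) / n_atoms

    # fingerprints: one row per structure, length n_features * n_bins
    # coords = PCA(n_components=2).fit_transform(np.array(fingerprints))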
Optimization and symmetry determination
For the analysis of the walker population we perform rough relaxations of the atomic positions using our NNFF model. For that purpose, we use the BFGS implementation contained in the JAX [34] library with a loose force convergence criterion of 0.01 eV Å⁻¹.
For symmetry determination we employ spglib [39] as implemented in the pymatgen package [40]. Since the finite-temperature structures may have slightly distorted cells, we employ a very loose symmetry precision parameter of 0.3 Å to determine the space group.
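In pymatgen this amounts to a one-liner per structure; a brief sketch, with the loose symprec mirroring the value quoted above and a hypothetical input file:

    from pymatgen.core import Structure
    from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

    structure = Structure.from_file("relaxed_walker.cif")  # hypothetical file
    sga = SpacegroupAnalyzer(structure, symprec=0.3)
    print(sga.get_space_group_symbol(), sga.get_space_group_number())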
For runs that converge into strongly disordered or amorphous metastable minima, we determine the nature of the minimum by eye and by looking at the radial distribution function.
Neural-Network Force Field
Based on the spherical Bessel descriptors, the configuration space spanned by the structures in the training dataset [26] is illustrated in Fig. 2. The map is divided into several distinct regions, each representing one of the most prominent phases of silicon. One of the most striking features of the map is the energetically lowest phase, cubic diamond, which occupies a significant area towards the northeast. Moving towards the west, we can see the most relevant phases at intermediate pressures, such as β-Sn and simple hexagonal. The liquid configurations of silicon are represented by only two tiny patches located in the southwest region of the map.
Figs. 3a and b show detailed parity plots as well as averaged statistics for the models trained on the r2SCAN and PBE datasets. The energy and force errors are on the order of 10 meV/atom and 100 meV/Å, respectively, with a slightly better result for the PBE database. Only small differences in error statistics between the training and validation sets are observed, indicating that no significant overfitting occurred in the training process.
To further test the transferability and accuracy of our trained models, we created a test set of structures that are not included in the training dataset. For this we extracted crystal structures of the most prominent silicon phases in the investigated pressure range from the Materials Project database. In addition, we added the body-centered orthorhombic Imma phase, first reported by McMahon et al. [41], since it was not present in the Materials Project database. For each of these six phases, we created a set of isotropically scaled cells around the equilibrium volume and evaluated the energies by DFT and the corresponding NNFFs. The resulting energy-volume curves are shown in Fig. 3 for both functionals. The NNFFs reproduced the respective curves with very similar performance. However, compared to the PBE energies, the r2SCAN energies exhibit significantly larger energy differences. For example, the energy difference between the cubic diamond and β-Sn minima is increased by almost 50% in the case of r2SCAN.
Phase diagram
We conducted NS calculations using our NNFF models across a range of eight different pressures, spanning from 0 to 16 GPa. To account for finite-size effects on the calculated quantities, we performed simulations on systems consisting of 16 and 32 silicon atoms. Additionally, for a few specific data points, we extended the simulations to 64-atom systems. The resulting constant-pressure heat capacities are shown in Fig. 4. An overview of all calculations and the corresponding identified most stable phases is summarized in Table I.
Theoretically, the heat capacity diverges at first-order phase transitions in the thermodynamic limit. This behavior is consistent with our findings, as the heat-capacity peaks become more pronounced when increasing the system size from 16 to 64 atoms. Moreover, we observe a slight shift towards higher temperatures in the smaller systems. Interestingly, this finite-size effect appears to be pressure-dependent. Comparing the 16- and the 32-atom simulations, we observe that at 0 GPa the deviation is more pronounced and, as the pressure increases, the deviation gradually decreases. Of the three pressures we ran using 64 atoms, the melting temperature is reduced by around 100 K at 4 and 10 GPa, while at 16 GPa almost no shift appears. A similar trend was recently observed in a NS study of carbon, where the finite-size effect almost vanished above a pressure of 100 GPa [12].
Based on the calculated melting temperatures from our simulations we can construct a p-T phase diagram. It is depicted in Fig. 5 together with the experimental phase diagram reported by Voronin et al. [42], which we briefly describe below. In the low-pressure regime up to approximately 10 GPa, the predominant phase is the cubic diamond phase, and the melting line exhibits a consistent negative slope of -60 K/GPa. Increasing the pressure above 10 GPa, β-Sn becomes stable. The cubic diamond-β-Sn-liquid triple point is found at 10.5 GPa and 1003 K. Within a relatively narrow region of around 2 GPa, the β-Sn phase occupies a distinct range and is separated from the orthorhombic Imma phase by a phase boundary characterized by a negative slope at approximately 13 GPa. As pressure increases further, the equilibrium structure transitions to a simple hexagonal phase. The β-Sn, Imma and simple hexagonal phases are separated from the liquid phase by a gently positively sloped melting line.
The simulated r2SCAN melting temperatures (see Fig. 5, black dashed and dotted lines) reproduce the experimentally observed trends. The 32-atom simulations show a negatively sloped melting line up to 10 GPa, close to the experimental cubic diamond-β-Sn-liquid triple point. For the higher pressures, the melting line follows the experiment with a slightly decreased slope and a small constant shift to lower temperatures. Our simulations of different system sizes indicate that a small finite-size effect remains even for the 64-atom runs at the lower pressures, while it seems to be almost absent in the higher-pressure domain. We discuss the regions of stability for the r2SCAN calculations in the following section, where we perform a detailed analysis of these NS runs.
The 16-atom PBE simulations (see Fig. 5, grey dashed line) correctly predict the slope of the low-pressure melting line; however, they fail in the prediction of the absolute values, which are shifted to lower temperatures by approximately 300 K. This is in line with previous ab initio molecular dynamics simulations that predict the 0 GPa melting temperatures to be 1687 and 1450 K for the SCAN and the PBE functionals, respectively [43]. For higher pressures, PBE captures neither the correct trend nor the correct magnitude of the experimental melting line, with a continuing negative slope down to a transition temperature of 691 K at 16 GPa. We note that these differences in melting temperatures do not arise from the algorithm finding different phases: a comparison of Table I reveals that overall similar phases are found for PBE and r2SCAN. Instead, we relate this observation to a misprediction of the relative energies of the different silicon phases shown in Fig. 3c. Due to the smaller energetic differences, the observed phase transitions can occur already at lower temperatures. In the following, we restrict our analysis to the more accurate r2SCAN results.

Fig. 4 (caption fragment): see Table I. For better visibility, the 16- and 64-atom heat capacities are scaled by factors of 3 and 0.3, respectively.

Fig. 5 (caption fragment): see Table I. The grey dotted line shows the melting line of a series of 16-atom simulations using a model trained on PBE data. Colored areas show the regions of stability we deduce from our runs (red: Fd3m; blue: P6/mmm; green: I41/amd, Imma and P6/mmm; see the solid-solid phase transition section for a detailed explanation).

Analysis of NS runs

Fig. 6 shows the evolution of the walker live set for the 32-atom NS run (seed = 0) at 10 GPa in the 2D structure representation map (compare Fig. 2). In order to assign each of the walkers to a certain basin of the potential energy surface, we relaxed their atomic positions and determined the corresponding space group. Initially, all walkers reside in the liquid-configuration area of the training database. At this point, the walkers are in highly disordered liquid or even gas-like states characterized by large cells and low coordination numbers. Therefore, even after ionic relaxation, the system remains in a low-symmetry crystalline configuration, and all walkers are assigned to space group P1 at the start of the sampling. As the iteration progresses, the cloud of walkers leaves the liquid area and enters the domain spanned by the β-Sn and the simple hexagonal P6/mmm structures of the training database. Although the majority of walkers remain in the P1 space group, we observe a diverse population of other space groups, notably Imma, I41/amd, and C2/m. In subsequent snapshots, the presence of strongly disordered, liquid-like walkers diminishes. The simulation predominantly focuses on the Imma phase, with the I41/amd and C2/m phases being weakly represented. Towards the end of the simulation, sampling lower enthalpy levels, the population shifts towards the I41/amd phase, which turns out to be the most stable phase at this pressure. This contradicts the depiction in the 2D configuration-space map, where the trajectory ends at the tip of the cubic diamond region, not falling into the actual I41/amd region. We interpret this behavior as an artifact of PCA, which cannot always preserve the full information from the high-dimensional space upon dimensionality reduction. Nevertheless, the visualization in Fig. 6 provides insight into how the NS algorithm explores the configuration space during the simulation. Throughout the process, the walker set encompasses a wide region in configuration space until eventually converging into the most stable basin. A summary of the walker populations for all investigated pressures in the 32-atom, seed = 0 calculation series is presented in Figure 7a. Additional analyses for the other runs can be found in the Supplementary Material.
For the three lowest pressures, a similar pattern emerges, with a predominant population of the cubic diamond Fd3m phase. Although the NS algorithm visits alternative basins such as Imma or I41/amd, these are quickly disregarded due to the exceptional stability of the cubic diamond phase under those conditions. In the intermediate pressure range of 10 to 12 GPa, two competing phases are observed. The Imma phase experiences a substantial initial increase in population alongside a gradual representation of the I41/amd phase. The Imma phase later becomes depopulated towards the end of the simulation, with the I41/amd phase emerging as the most stable. The presence of the P6/mmm phase is also noted, gaining significance between 10 and 12 GPa. Beyond 12 GPa, the walker population is dominated by the P6/mmm phase, which becomes the ground state. The exploration of the I41/amd phase diminishes in importance, while the Imma phase maintains a degree of population throughout the simulation. For the pressures above 10 GPa, all r2SCAN simulations converge to the same phases consistently (see Table I). However, discrepancies arise among the runs at lower pressures. Regardless of the system size, between 0 and 9 GPa multiple runs converge into the hexagonal diamond P63/mmc phase, which has been observed experimentally as a minor phase in indented cubic silicon. [44] We attribute this to the small energetic difference between the actual ground state Fd3m and the P63/mmc phase (see Fig. 3c). Furthermore, we observe a disordered cubic diamond phase for the 4 GPa 64-atom simulation. At 10 GPa, two runs of the 16-atom simulations converge to the P63/mmc phase, while the third run results in the Ia3 space group, known as the body-centered cubic BC8 phase of silicon, which is metastable at ambient pressure [42]. The BC8 phase can be obtained by slowly decompressing the metallic β-Sn phase and remains metastable unless heated above 200 °C [45]. In the 32- and 64-atom cases, we observe one simulation collapsing into a disordered cubic diamond minimum. The remaining 64-atom simulations converge to amorphous structures. We connect these discrepancies with the experimental findings of multiple metastable phases in this pressure range [42], including the BC8 phase Si III and the rhombohedral R8 phase Si XII. Thus, the problem becomes strongly multimodal under these conditions, hampering the sampling. Fig. 6 furthermore gives a visual impression of how the number of walkers determines the granularity of the potential-energy-surface sampling. Smaller numbers increase the likelihood of the walker cloud missing the entry point to a particular basin funnel. As a result, the set of walkers can become trapped in a metastable minimum, since the Galilean Monte Carlo walk cannot traverse large energy barriers. This interpretation is supported by our comprehensive analysis of the walker populations (see Supplementary Material). In cases where a run converges to a metastable phase, the actual most stable phase is never populated, indicating that its entry point has not been found due to its small phase-space volume. The more frequent occurrence of the P63/mmc phase in the 16-atom simulations may also be influenced by the minimum aspect ratio constraint imposed on the cell shape. We speculate that in the 16-atom case this constraint may favor the formation of the P63/mmc phase over the competing Fd3m phase.
Solid-solid phase transition
To facilitate the identification of solid-solid phase transitions, we calculate the ratios of the contributions of the competing phases to the partition function. To achieve this, we perform optimizations on every 10th sample obtained during the NS process and determine the corresponding space group. This enables us to assign specific samples to particular basins of the potential energy surface (PES) and separate the overall partition function into individual contributions from the different phases,

Z_α(β) = Σ_{i∈α} w_i e^(−βY_i),   with   Z(β) = Σ_α Z_α(β),

where α labels the phase to which a sample is assigned. The result is shown as an average over the independent runs for the 32-atom simulations (excluding the two outliers at 0 and 10 GPa discussed above) in Fig. 7b. In all cases the melting transition is clearly visible in the form of a sharp step in the P1 partition-function contribution. For the pressures above 9 GPa, the competition between different phases becomes apparent. The intersections of the I41/amd and the Imma contributions at 10, 11, and 12 GPa point towards a solid-solid phase transition occurring at 337, 213 and 82 K, respectively. Although the Imma phase is significant at 13 GPa, determining a clear phase transition point is challenging in this case. To summarize, in the pressure range from 10 to 13 GPa, three distinct phases, namely I41/amd, Imma, and P6/mmm, interact in a complex manner (see the green shaded area in Fig. 5). At 16 GPa we observe an unambiguous dominance of the P6/mmm phase.
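The per-phase contributions can be evaluated directly from the labeled samples; an illustrative numpy sketch (not the analysis code used for Fig. 7b):

    import numpy as np

    KB = 8.617333262e-5  # eV/K

    def phase_fractions(Y, w, labels, T):
        # Fraction Z_alpha/Z of the NS partition function contributed by each
        # phase, given sample enthalpies Y, NS weights w and space-group labels.
        labels = np.asarray(labels)
        beta = 1.0 / (KB * T)
        boltz = w * np.exp(-beta * (Y - Y.min()))
        Z = boltz.sum()
        return {ph: boltz[labels == ph].sum() / Z for ph in np.unique(labels)}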
CONCLUSION
In the current study, we successfully combined the nested sampling method with a fully automatically differentiable neural-network force field. By employing this powerful methodology, we can achieve ab-initio precision in our predictions, which we demonstrated by accurately simulating the pressure-temperature phase diagram of silicon. Through a comparison of the predicted melting lines from two common exchange-correlation functionals, we have demonstrated that the performance of a machine-learning model is limited by the quality of its corresponding ground-truth data. Moreover, we underscore the importance of NNFF-backed nested-sampling simulations, as they provide comprehensive finite-temperature benchmarks for exchange-correlation functionals.
Despite their great success, machine learning potentials still heavily rely on the quality, size and diversity of the training datasets to deliver accurate and reliable results. This requirement can be demanding and hinder their transferability and widespread applicability. The inherent capability of NNFFs to handle large amounts of data facilitates the adoption of active learning methods. By developing efficient NNFF-backed nested sampling active learning approaches, we may mitigate the necessity for intricate manually curated training databases. This opens up new possibilities for purely data-driven configuration space exploration, enhancing our understanding of complex systems.
CODE AVAILABILITY
A compatible version of NEURALIL, including example scripts for training and evaluation, is available on GitHub [46]. The pymatnest code on which our implementation is based is also available on GitHub [47].
DATA AVAILABILITY
A dataset containing the energy and sample trajectories of all presented nested-sampling calculations, as well as the DFT-evaluated training databases, is available on Zenodo [48].
|
2023-08-23T06:45:44.267Z
|
2023-08-22T00:00:00.000
|
{
"year": 2023,
"sha1": "728caa26249765da7e67fa7b106d2672868a0198",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "728caa26249765da7e67fa7b106d2672868a0198",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
203881561
|
pes2o/s2orc
|
v3-fos-license
|
Metapopulation ecology links antibiotic resistance, consumption and patient transfers in a network of hospital wards
Antimicrobial resistance (AMR) is a global threat. A better understanding of how antibiotic use and between-ward patient transfers (or connectivity) impact hospital AMR can help optimize antibiotic stewardship and infection control strategies. Here, we used metapopulation ecology to explain variations in infection incidences of 17 ESKAPE pathogen variants in a network of 357 hospital wards. Multivariate models identified the strongest influence of ward-level antibiotic use on more resistant variants, and of connectivity on nosocomial species and carbapenem-resistant variants. Pairwise associations between infection incidence and the consumption of specific antibiotics were significantly stronger when such associations represented a priori AMR selection, suggesting that AMR evolves within the network. Piperacillin-tazobactam consumption was the strongest predictor of the cumulative incidence of infections resistant to empirical sepsis therapy. Our data establish that both antibiotic use and connectivity measurably influence hospital AMR and provide a ranking of key antibiotics by their impact on AMR.
Antimicrobial resistance (AMR) of pathogenic bacteria progresses worldwide and imposes a considerable burden of morbidity, mortality and healthcare costs 1,2 . AMR is increasingly recognized to emerge in various settings including agriculture 3 or polluted environments 4,5 . However, hospitals remain major hotspots of AMR selection 6,7 that concentrate strong antibiotic pressure, fragile patients and highly resistant pathogens 8 , while patient transfers between wards and facilities accelerate pathogen dissemination 9 .
The primary hospital-based strategies against AMR are antimicrobial stewardship and infection control 10 , which aim respectively to lower the antibiotic pressure and the transmission of pathogens. The need for such strategies is widely accepted 11 but their implementation details are more debated 12 , especially regarding which antibiotics should be restricted first 13 or the risk-benefit balance of screening-based patient isolation procedures 14,15 . Thus far, designing efficient antibiotic stewardship strategies has been hindered by the paucity of evidence concerning which antibiotics exert the strongest selection pressure. As a result, available rankings of antibiotics for de-escalation and sparing strategies rely on expert consensus with partial agreement 16 , themselves based on conflicting evidence 17,18 .
Linking antibiotic use and AMR prevalence is difficult due to the confounding effects of bacterial transmission and the complexity of the ecological processes underlying AMR (reviewed in 19 ). Observational studies of AMR usually report on the proportion of resistant variants in a limited set of species, which conceals the overall burden of AMR and can make interpretation difficult when, for instance, resistant variants apparently increase in proportion while decreasing in incidence 20 . To alleviate these issues, studies of the impact of antibiotic use on AMR in hospitals could benefit from ecological frameworks able to simultaneously model the incidence of infection with most relevant pathogens while controlling for confounding effects. Metapopulation ecology is such a framework. It was introduced by Levins 21 to explain the persistence of agricultural pests across a set of habitat patches and refined by Hanski to account for the size of patches, the connectivity between them, and habitat quality within them 22,23 . Metapopulation models, beyond their frequent use in wildlife and conservation biology [24][25][26] , have recently provided theoretical grounds for pathogen persistence in the healthcare setting 27 . So far, however, metapopulation models of hospital AMR have been applied on simulated rather than empirical data 27,28 . We used metapopulation modeling to isolate the effect of antibiotic consumption on the incidence of infections with 7 major pathogen species and their resistant variants within a 5,400-bed, 357-ward hospital network, using detailed data over the course of one year. We considered that bacteria in this network approximate a metapopulation of local populations in patches represented by hospital wards, connected by transfers of colonized patients 29 .
Our primary objective was to determine the respective impacts of antibiotic use and connectivity on the incidence of infections with resistant pathogens. The secondary objective was to compare the impacts of specific antibiotics on hospital AMR, by modeling each pathogen variant incidence separately, then by considering all variants resistant to drugs commonly used in empirical sepsis therapy. Our findings highlighted both common patterns and species-specific behaviors of pathogens and identified a major, so far underappreciated, association between the widely used drug piperacillin-tazobactam and resistance to both 3rd-generation cephalosporins and carbapenems.
Results
Distribution of bacterial pathogens and antibiotic use in a hospital network. We analyzed pathogen isolation incidence in clinical samples, antibiotic use and patient transfers in 357 hospital wards from the region of Lyon, France, from October 2016 to September 2017. Ward-level data were aggregated from 14,034 infection episodes, defined as ward admissions with ≥1 clinical sample positive for E. coli or an ESKAPE pathogen (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter cloacae complex), collectively termed ESKAPE2. Pathogens were grouped into species-resistance pattern combinations, namely 3rd-generation cephalosporin-resistant E. coli, E. cloacae complex and K. pneumoniae (3GCREC, -EB and -KP), carbapenem-resistant E. coli, E. cloacae complex, K. pneumoniae, P. aeruginosa and A. baumannii (CREC, -EB, -KP, -PA and -AB), vancomycin-resistant E. faecium (VREF) and methicillin-resistant S. aureus (MRSA). Pathogen variants not falling into these resistance groups were designated by species (EC, EB, KP, PA, AB, EF, SA) and collectively referred to as the less-resistant variants.
Infection episodes most frequently involved the less-resistant variants, mainly EC, SA, PA and KP (Table 1). These variants were also found in the largest number of wards. Resistant variants were consistently less frequent than their less-resistant counterparts in all species and, in enterobacteria, carbapenem-resistant variants were consistently less frequent than 3GC-resistant variants. VREF and CRAB infections were exceptional.
To estimate the degree of concentration of each variant in the network, we calculated concentration indices, defined as the probability that two random occurrences of the same variant originated from the same ward, analogous to the asymptotic Simpson index (see Methods). The concentration of infection episodes was weak (<5%) for all variants, indicating a global lack of clustering (Table 1). Concentration increased with resistance (~2-fold increase from the least to the most resistant variant) in E. coli, K. pneumoniae, P. aeruginosa and A. baumannii, suggesting an adaptation of resistant variants to more specific habitats compared with their less-resistant counterparts. This pattern was not found in E. cloacae complex, E. faecium and S. aureus.
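The concentration index described above, as the probability that two random occurrences come from the same ward, reduces to a sum of squared ward shares; a minimal sketch with illustrative variable names:

    import numpy as np

    def concentration_index(counts_per_ward):
        # Probability that two occurrences drawn at random belong to the
        # same ward (asymptotic Simpson index).
        counts = np.asarray(counts_per_ward, dtype=float)
        shares = counts / counts.sum()
        return float(np.sum(shares ** 2))

    # e.g. a variant with 10 episodes spread over wards as [4, 3, 2, 1]
    # gives an index of 0.30, i.e. 30%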
Antibiotics were prescribed in 86.3% of wards (Table 2), with a total consumption of 125.7 defined daily doses per year per bed (ddd/y/b). Antibiotics usually suspected to select for AMR in the selected ESKAPE2 variants were grouped into 9 classes (Table 2). Antibiotics with comparatively rare use (e.g., rifampicin), a narrow spectrum (e.g., amoxicillin) or most frequently used in combination therapy (aminoglycosides) were excluded. The distribution of antibiotic use in the network was analysed using the concentration index described above, here representing the probability that two random doses were delivered in the same ward. Antibiotic use was diffuse, with concentration indices <4%, ranging from 0.8% for CTX/CRO and FQ to 3.6% for OXA.
NOTE (Table 2). a The concentration index estimates the probability that two antibiotic ddds taken at random were prescribed in the same ward. b Total consumption of systemic-use antibiotics (ATC class J01), including those not considered in the 9 specific drug groups. 3GC, 3rd-generation cephalosporin; ddd, defined daily dose.

Antibiotic use and connectivity predict ESKAPE2 infection incidence. We used generalized linear models (GLMs) within the metapopulation framework to disentangle the influences of antibiotic pressure, connectivity and other ward characteristics on the incidence of infections with ESKAPE2 pathogens and their resistant variants. Connectivity quantifies the incoming flux of each pathogen variant into a downstream ward receiving infected patients from upstream wards. Practically, we estimated the connectivity for each variant and downstream ward as the sum of the transfers from each upstream ward, each multiplied by the variant's prevalence in that ward (see Methods). Wards were characterized by their size (no. of beds) and type, representing patient fragility and coded on an ordinal scale with a score of 2 for intensive care and blood cancer units, 1 for progressive care units and 0 for other wards.
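In matrix form, this connectivity estimate is a product of a prevalence matrix and a transfer matrix; a hedged numpy sketch (array names are illustrative):

    import numpy as np

    # transfers[i, j]: patients transferred from upstream ward i to ward j
    # prevalence[v, i]: prevalence of variant v among patients of ward i
    def connectivity(transfers, prevalence):
        # Estimated influx of patients carrying each variant into each
        # downstream ward: C[v, j] = sum_i prevalence[v, i] * transfers[i, j]
        return prevalence @ transfers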
Importantly, the observed incidence in a ward depends directly on the frequency of sampling and the specimen types (e.g., respiratory vs urinary tract specimens), leading to sampling bias. To correct for this bias, all models included a baseline incidence value as a control covariate, defined as the ward-level incidence predicted by sampling alone, assuming that the probability of pathogen presence in a given sample is constant across wards (see Methods). As expected, the control value was strongly correlated with both antibiotic use and the incidence of infections for all prevalent variants (Supplementary Figs. 1 and 2). The incidence of each pathogen variant was then modelled as a separate Poisson-distributed GLM (Figure 1) adjusted for sampling bias. In unadjusted, bivariate analysis, the incidence of each variant was strongly correlated with both antibiotic use (Supplementary Figure 3) and connectivity (Supplementary Figure 4), with the exception of the very rare CRAB and VREF variants. In the adjusted GLMs, global antibiotic use was significantly associated with infection incidence for 8 pathogen variants, independent of connectivity, ward size and type (Figure 1). The maximum effect size was found in CRKP, with a coefficient of 0.4, which predicted an approximately 50% increase of incidence per 2-fold increase of antibiotic use. Strikingly, the magnitude of the association between antibiotic use and incidence increased with resistance level within all taxa except CREC (Figure 1). This pattern supported a general influence of antibiotic use on within-species AMR. Antibiotic use was significantly associated with a decreased incidence in only one variant, namely SA.
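A minimal statsmodels sketch of one such variant-specific Poisson GLM, assuming one row per ward and hypothetical column names; the log2 transform matches the interpretation of coefficients as changes per doubling of a predictor, and the exact transforms used by the authors may differ.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    wards = pd.read_csv("ward_level_data.csv")  # hypothetical dataset

    model = smf.glm(
        "incidence ~ np.log2(1 + antibiotic_use) + np.log2(1 + connectivity)"
        " + np.log2(n_beds) + ward_type + np.log2(1 + baseline_incidence)",
        data=wards, family=sm.families.Poisson(),
    ).fit()
    print(model.summary())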
The influence of connectivity on infection incidence was more variant-specific than that of antibiotic use. Connectivity had a positive effect for two resistant variants (CRPA and VREF) and four less-resistant variants (EC, PA, SA, EF). The effect unambiguously increased with resistance in P. aeruginosa and E. faecium but not in other species, although connectivity exhibited comparatively larger effects for carbapenem-resistant variants in enterobacteria ( Figure 1). The effect of ward size was weaker and generally insignificant compared to antibiotic use and connectivity. Notably, ward type had a negative, significant effect in four variants (EC, 3GCREC, 3GCREB, and MRSA), suggesting that these infections preferentially occur in general wards with less fragile patients.
Possibly causal associations between antibiotic use and resistance. The metapopulation models illustrated in Figure 1 identified positive associations between total antibiotic use in hospital wards and increased incidences of infections with the more resistant variants of several ESKAPE2 species. Yet, a correlation with AMR does not establish a causal role of antibiotics. For instance, a high incidence of resistant infections in a ward can increase antibiotic use through prolonged or combined therapies 19. Conversely, the prescription of antibiotics always inactive against a variant is unlikely to be motivated by this variant's incidence, and such antibiotics are more likely to provide a direct benefit to the resistant variant. Based on this rationale, we propose stringent criteria to identify possibly causal associations between specific antibiotics and pathogen variants (see Methods). Under the hypothesis that antibiotic use is either a consequence of AMR or spuriously correlated with AMR, the strength of an association between the use of an antibiotic and the incidence of a variant should not depend on whether the association fulfills the criteria for possible causality.

Figure 1. Antibiotic use and connectivity predict the incidence of infection with ESKAPE2 pathogen variants. Shown are the coefficients (points) and 95% confidence intervals (bars) of Poisson regression models of the incidence of each variant in each ward (n=357) for antibiotic use, connectivity (estimated no. of patients colonized with the same variant entering the ward), ward size (no. of beds) and ward type, coded 2 for intensive care and blood cancer units, 1 for progressive care units and 0 for other wards. All models were adjusted for sampling bias. Models involving A. baumannii and E. faecium, which exhibited larger 95% CIs due to the smaller incidence of the resistant variants, are shown with separate scales (panel b) for readability. Coefficients approximately represent the relative change of incidence of infections per doubling of the predictor.
To test this hypothesis, we identified possibly causal associations in our data and examined whether they were equally likely to be positive and significant compared to other associations. We constructed Poisson regression models where the total antibiotic use was replaced with the use of specific antibiotics, along with the incidence control and connectivity covariates (Figure 2a). These 17 variant-specific models, each built using 9 antibiotic groups as predictors, yielded 153 coefficients of which 17 (11.1%) represented possibly causal associations (see Methods). Positive and significant coefficients were found in 6/17 possibly causal associations (35.3%), more frequently so than in other associations (10/136, 7.4%) with an odds-ratio of 6.7 (95% CI, 1.7 to 25.6; p=0.003, Fisher's exact test, two-sided). The median regression coefficient in possibly causal associations was also significantly higher than in other associations (p=0.009, Mann-Whitney U-test, two-sided; Figure 2b). Four of the 6 significant, possibly causal associations involved CTX/CRO, which selected for 3GCREC, 3GCRKP, PA and EF; the other two involved carbapenems selecting for CRPA, and AMC selecting for MRSA. Overall, the enrichment of possibly causal associations among significant model coefficients confirmed that the local, ward-level selection of drug-resistant variants by antibiotics is measurably pervasive throughout our hospital network.
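The enrichment comparison can be illustrated in a few lines; the Python sketch below mirrors the two-sided Fisher's exact and Mann-Whitney U-tests reported above, assuming a hypothetical coefficient table with one row per antibiotic-variant pair and illustrative column names.

```python
# Illustrative sketch of the enrichment test: are possibly causal
# associations more often positive and significant than the others?
import pandas as pd
from scipy.stats import fisher_exact, mannwhitneyu

def enrichment_tests(coefs: pd.DataFrame):
    # coefs columns (assumed): 'possibly_causal' (bool),
    # 'positive_significant' (bool), 'estimate' (regression coefficient).
    table = pd.crosstab(coefs["possibly_causal"], coefs["positive_significant"])
    odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")
    # Compare coefficient magnitudes between the two groups of associations.
    _, p_mw = mannwhitneyu(
        coefs.loc[coefs["possibly_causal"], "estimate"],
        coefs.loc[~coefs["possibly_causal"], "estimate"],
        alternative="two-sided",
    )
    return odds_ratio, p_fisher, p_mw
```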
Quantifying the drivers of cefotaxime/ceftriaxone and carbapenem resistance. From a clinical standpoint, the most immediate consequence of AMR is the failure to control sepsis with empirical antibiotics, mainly carbapenems and non-antipseudomonal 3GCs such as cefotaxime. Because such failure can equally result from acquired or intrinsic resistance, the incidence of intrinsically resistant pathogens such as EF is as clinically important as that of pathogen variants with acquired resistance mechanisms. To examine the impact of antibiotics on both intrinsic and acquired resistance, we modeled the cumulative incidence of infections with 3GC- and/or carbapenem-resistant (3GCR and CR) variants of the ESKAPE2 pathogens (see Methods).
In these models, antibiotic use was the only predictor with a positive effect on both CR and 3GCR incidence, and it was also the strongest predictor (Figure 3a). Connectivity predicted CR, but not 3GCR, incidence, in line with the comparatively greater impact of connectivity on individual CR variants (Figure 1). Ward size had no measurable effect in either model. Ward type, reflecting patient fragility, had no effect on CR incidence but was negatively associated with 3GCR incidence, in line with similar associations found for 3GCREC and 3GCREB (Figure 1). These findings provide an unambiguous link between antibiotic use and global resistance to empirical sepsis therapy that was robust to confounding by sampling bias, connectivity and other ward characteristics.
To determine which antibiotics have the strongest impact on global carbapenem and 3GC resistance, we examined the effect of replacing the total antibiotic use in our models with individual antibiotics, similar to the approach described in the previous section. Antibiotics whose use significantly predicted either 3GCR or CR incidence were CTX/CRO, IPM/MEM, CTZ/FEP and TZP (Table 3). To visualize their respective impacts, we plotted the average ward-level infection incidence predicted by variations of the consumption volumes in the models adjusted for connectivity and sampling bias (Figure 3b). The resulting pattern of association highlighted, again, possibly causal associations: 3GCR incidence was significantly predicted by the consumption of CTX/CRO but not IPM/MEM, while CR incidence was significantly predicted by the consumption of IPM/MEM, but not CTX/CRO. CTZ/FEP had an only marginally significant influence on CR incidence. Strikingly, TZP consumption predicted both 3GCR and CR infection incidences and, in both models, TZP coefficients outweighed other coefficients by a large margin in terms of amplitude and significance. Overall, these results demonstrate a specific effect of CTX/CRO and IPM/MEM consumption on resistance to the same antibiotic group, but not other groups, and identify a major role of TZP consumption in predicting the incidence of both 3GCR and CR infections. To propose a unified ranking of the impact of antibiotics on 3GC and carbapenem resistance, a final model was constructed by pooling all 3GCR and CR variants together (Table 3). In this model, TZP and CTX/CRO had positive and significant coefficients; CTZ/FEP had a positive but non-significant coefficient; FQ and OXA had negative, non-significant coefficients; and VAN/TEC, 1GC/2GC and AMC had negative and significant coefficients.
Discussion
The impact of antibiotic use on resistance has been demonstrated at different scales including hospitals, regions and countries [30][31][32], but a quantitative assessment of this impact within hospitals is still lacking. By applying metapopulation ecology to explain variations of ESKAPE2 infection incidences across a large network of hospital wards, we demonstrate that both antibiotic use and inter-ward patient transfers independently contribute to ward-level AMR in several species. Our study also provides the first quantitative ranking of the impact of several key antibiotics on global 3GC and carbapenem resistance.

Previous theoretical work based on modeling and simulation has predicted how patient transfers contribute to AMR prevalence through pathogen dissemination [33][34][35]. Our study provides an empirical confirmation of these predictions and identifies the species and variants most influenced by connectivity (Figure 1). Understanding the respective impacts of antibiotic use and connectivity on the burden of resistant infections is essential for optimizing interventions against AMR. If the resistant infections in a ward mostly result from the admission of already colonized patients, one would expect antibiotic restrictions to have a limited impact on AMR compared to infection control measures to prevent the further dissemination of the pathogens. Conversely, a weak influence of connectivity suggests that resistant pathogens are either selected locally or introduced from sources outside the network. Consequently, connectivity's influence on incidence should be higher in pathogen variants endemic in the hospital, and lower in community-associated variants or variants whose resistance is selected locally. This model is consistent with our finding that connectivity had the strongest influence in the typical nosocomial pathogens P. aeruginosa and E. faecium and a comparatively lower influence in community-associated variants (3GCREC and 3GCRKP). The low influence of connectivity on the incidence of resistant E. cloacae complex variants can be explained by their local selection. While resistance in E. coli and K. pneumoniae typically requires gene acquisition 34,35, the E. cloacae complex can resist cephalosporins and carbapenems through increased AmpC beta-lactamase and decreased porin expression [36][37][38]. Such resistance emerges through adaptation and de novo mutations that are rapidly selected from the local reservoir of susceptible progenitors 39,40. In contrast, the incidence of global CR infections was strongly influenced by connectivity (Figure 3), with an effect size comparable to that of antibiotic consumption, while connectivity had no measurable influence on global 3GCR infections. This suggests that infection control measures could be particularly effective at preventing the spread of CR pathogens between wards.
The rationale behind hospital-based antibiotic stewardship is based on the assumption that AMR evolves in hospitals. Yet, there is surprisingly limited evidence to support this assumption [36][37][38]. Ecological studies have repeatedly identified associations between the use of antibiotics and AMR prevalence, but such associations do not necessarily reflect AMR selection 19. However, more specific associations at the antibiotic and variant level are not all equally likely to be causal or consequential. Based on medical and biological reasoning, we identified associations representing possible selection and showed that they outweighed other associations in terms of significance and amplitude (Figure 2). Other significant associations could not be classified as possibly causal but might reflect co-selection. The association of CTX/CRO with CRKP but not CREC likely reflected, in our setting, selection for the highly frequent 3GC resistance in carbapenemase-producing K. pneumoniae, contrasting with E. coli OXA-48 producers, which frequently remain 3GC-susceptible but TZP-resistant 39, consistent with the strong association between TZP use and CREC incidence. These findings confirm that the associations between antibiotic use and the incidence of specific resistant variants are preferentially causal in nature and, consequently, that AMR evolves in our hospital network.
Because intrinsic and acquired resistances to an antibiotic equally lead to treatment failure, we modeled the pooled incidences of infections with 3GC- or carbapenem-resistant variants of the ESKAPE2 pathogens, including those with intrinsic resistance. This approach allowed us to rank antibiotics by their measured global impact: the use of TZP and CTX/CRO predicted 3GC resistance, and the use of TZP, IPM/MEM and, to a much lesser extent, CTZ/FEP predicted carbapenem resistance (Figure 3). The strikingly positive effect of TZP on both 3GCR and CR infections deserves further attention. Based on its in vitro efficacy against extended-spectrum beta-lactamase (ESBL)-producing enterobacteria, TZP has been repeatedly considered as an alternative drug of choice in carbapenem-sparing strategies 40,41. The strategy of replacing carbapenems with TZP assumes that: (1) TZP treatment is as effective as carbapenems on TZP-susceptible pathogens; and (2) AMR is less selected under TZP pressure than under carbapenem pressure, as reflected by a recent consensus-based ranking of beta-lactams for de-escalation therapy 16. Yet, in several recent reports including a multicenter randomized clinical trial, TZP treatment of sepsis with ESBL-producing enterobacteria was associated with poorer outcomes compared with carbapenem treatment [42][43][44]. Additionally, studies of the respective associations of TZP and carbapenem use with AMR yielded conflicting results. At the population level, the incidence of CR enterobacteria was negatively associated with TZP use in a 5-year, single-hospital trend analysis study 45. At the patient level, however, TZP exposure was associated with CRPA in a meta-analysis 46 and with CR Gram-negative bacilli in a single-hospital prospective cohort study 47. Our finding that TZP use correlates with both 3GCR and CR infections, more strongly so than carbapenems and CTX/CRO, challenges the rationale of recommending TZP over other drugs for ecological reasons. Notably, the link of TZP use with global 3GCR and CR resistance resulted from the accumulation of small, positive associations with most 3GCR and CR variants (Figure 2). Because of this diffuse effect, links between TZP use and 3GCR or CR infection incidence might go undetected in studies focusing on individual pathogen variants. It should be noted that our observation that TZP use selects for 3GCR and CR variants does not imply selection for acquired resistance through any specific mechanism such as carbapenemase production.
Our study has several limitations. Our inferences focused on infection incidence at the level of the ward and, as such, should not be readily translated to individual patients 19 . Because we modeled infection incidence, our analyses did not consider the contribution of asymptomatic carriers to AMR. We did not consider the movements of healthcare workers in our estimation of connectivity and therefore we could not determine their relative impact on AMR compared to patient transfers. We also modeled our network as a closed system, ignoring the effect of patient admissions from the community. The small sample sizes of VREF and CRAB limited our ability to draw robust inferences regarding these variants of utmost importance in other settings 48,49 . Finally, our findings reflect the AMR ecology of a Western European area, with generally lower prevalences of carbapenemase-producing pathogens and VREF than in other regions of the world.
To conclude, the metapopulation modeling of ESKAPE2 pathogens in hospital wards shows that connectivity has a measurable, variant-specific impact on infection incidence, which supports the need to tailor strategies against AMR to the targeted pathogen. Along with novel hospital-level insights into the driving forces of AMR, our work illustrates the application of the methodological framework of metapopulation ecology to the problem of hospital AMR. Applying this framework to other healthcare settings could help inform local and regional antibiotic stewardship and infection control strategies.
Methods
Data collection and compilation. We obtained data on infection incidence from the information system of the Institut des Agents Infectieux, the clinical microbiology laboratory of the Hospices Civils de Lyon, a group of university hospitals serving the Lyon, France, metropolitan area. The pathogen variants considered are listed in Table 1.
Resistance was based on available results for susceptibility to, where applicable, ceftriaxone, cefotaxime, ceftazidime, cefepime, imipenem, meropenem, oxacillin and vancomycin. Antibiotic use, in defined daily doses (ddd), of all systemic antibacterial drugs (ATC classification term J01), as well as of the specific (groups of) molecules defined in Table 2, was extracted from the pharmacy department information system. For each pair of wards, the number of patient transfers was extracted from the hospital information system along with, for each ward, the number of beds, the type of medical activity and the number of patient admissions. Because of the aggregated nature of the data, informed consent was not sought, in accordance with French regulations. Our main response variable was the number of patients per ward infected with each pathogen variant, expressed as incidence over one year.
Sampling bias control. To correct for bias due to varying microbiological sampling frequencies and locations across wards, we computed for each pathogen variant the expected incidence explained by sampling alone, under the assumption of constant pathogen prevalence across wards. Sampling locations were assigned to 6 location groups, namely, skin and soft tissues, respiratory tract, urinary tract, digestive tract, vascular access devices, and sterile tissues (such as CSF and peripheral blood cultures). For each location group, the probability P(Variant | Location) that a sample is positive for a given pathogen variant was aggregated over all wards in the network. For each ward and location, we computed the average number of samples per sampled patient, N(Location | Ward). For each patient in a ward, the probability of being tested positive at least once for the variant of interest was the complement of the probability that all samples from all locations remained negative,

P(Variant | Ward) = 1 − ∏_{l ∈ Locations} [1 − P(Variant | l)]^{N(l | Ward)}.

Finally, the expected incidence of infections in a ward was computed as the patient-level probability of being tested positive at least once, times the number of tested patients N, written

Incidence(Variant | Ward) = N × P(Variant | Ward).
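As a concrete illustration, the following Python sketch computes this expected incidence from a long-format table of microbiological samples; the table layout and its column names (ward, patient, location, positive) are assumptions for the example, not the study's actual data model.

```python
# Sketch of the sampling-bias control: expected incidence of one variant per
# ward, assuming a samples table with columns ward, patient, location and a
# boolean 'positive' flag for the variant of interest.
import pandas as pd

def expected_incidence(samples: pd.DataFrame) -> pd.Series:
    # Network-wide P(Variant | Location): fraction of positive samples.
    p_loc = samples.groupby("location")["positive"].mean()
    # N(Location | Ward): samples per tested patient, per ward and location.
    n_samples = samples.groupby(["ward", "location"]).size().unstack(fill_value=0)
    n_tested = samples.groupby("ward")["patient"].nunique()
    n_ward_loc = n_samples.div(n_tested, axis=0)
    # P(Variant | Ward) = 1 - prod_l (1 - P(Variant | l)) ** N(l | Ward)
    p_ward = 1.0 - n_ward_loc.rpow(1.0 - p_loc).prod(axis=1)
    # Expected incidence: per-patient probability times tested patients.
    return n_tested * p_ward
```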
Clearly, variations of the incidence control value between wards depend only on the number and locations of microbiological samples taken, thus reflecting the incidence and types of bacterial infections at ward level but not variations of pathogen community structure. This value, in turn, is expected to correlate with both the amount of antibiotics used and the probability of detecting patients, leading to spurious correlations between incidence and antibiotic use in unadjusted models. Indeed, the incidence control exhibited a strong correlation with the observed cumulative incidence of all bacteria (R² = 0.96) and with total antibiotic use (Supplementary Figures 1 and 2). These correlations remained significant for most pathogen variants and specific antibiotics. The incidence control was added as a covariate in all models predicting infection incidence. Such adjusted models, thus, predicted the incidence of infections in excess of what would be expected based on variations in sampling intensity alone.
Connectivity and other ward characteristics. Habitat quality in wards was described using explanatory variables adapted from Hanski's metapopulation models 22,23, namely patch size and connectivity, along with additional variables capturing patient fragility and antibiotic selection pressure. We considered each ward within the hospital system as a distinct habitat patch i and used the number of beds as a measure of "patch area", A_i. The connectivity between wards was implemented as a proxy for the true but unobservable number of introductions of each variant into each ward during the study period 50. To estimate this quantity, we measured the directional, partial connectivity S_{j,i} from ward j to ward i as the number of patients transferred from j to i, times the observed probability that each patient tested positive for the variant. Partial connectivity, thus, was the expected number of positive patients transferred from j to i. Finally, connectivity for ward i was the sum of all directional connectivities, S_i = ∑_j S_{j,i}.
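In code, this estimate reduces to a weighted sum over transfer flows. Below is a minimal Python sketch under assumed table layouts: a transfers table with columns src, dst and n_patients, and a prevalence series giving the observed probability that a patient in each ward tests positive for the variant.

```python
# Sketch of variant-specific connectivity S_i = sum_j S_{j,i}, where
# S_{j,i} = (patients transferred from j to i) x (variant prevalence in j).
import pandas as pd

def connectivity(transfers: pd.DataFrame, prevalence: pd.Series) -> pd.Series:
    # Partial connectivity: expected positive patients moved along each edge.
    partial = transfers["n_patients"] * transfers["src"].map(prevalence)
    # Sum incoming partial connectivities for each destination ward.
    return partial.groupby(transfers["dst"]).sum()
```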
Along with size and connectivity, wards were characterized by patient fragility and antibiotic consumption. Fragility was coded on a 3-level ordinal scale. Lower values were assigned to wards with more robust patients and higher values to wards with more fragile patients: general wards were coded as 0, intermediate (progressive) care units as 1, and intensive care and blood cancer units as 2. Antibiotic use was normalized by dividing by the number of beds in each ward and expressed in ddd/bed/y.

Statistical analysis. The statistical unit was the individual ward (n=357) in all analyses. We used the asymptotic Simpson index 51, also known as the Hunter-Gaston index 52, to determine the probability that two random isolates of a given variant were isolated in the same ward, or that two random doses of a given antibiotic were delivered in the same ward. The index is defined as

λ = ∑_i n_i(n_i − 1) / [N(N − 1)],

where n_i is the number of isolates (or antibiotic doses) isolated (or delivered) in ward i and N = ∑_i n_i is the total number of isolates (or antibiotic doses). While typical applications of the Simpson index in ecology examine the distribution of sampled taxa relative to a sampling location, we examine the distribution of sampling locations relative to the taxa. Hence, we used the term 'concentration index' to avoid confusion with a diversity measure. The iNEXT R package was used to determine bootstrap-based 95% confidence intervals 53. Models of infection incidence were constructed using Poisson regression. All continuous explanatory variables, including the incidence control, ward size, connectivity and antibiotic use, were log2-transformed before further analyses. To avoid negative infinity values from this transformation, all zeroes were first converted to half the minimum non-zero value. This transformation was associated with better model fit (using the model structure of Figure 1), in terms of the Akaike information criterion, compared with: (1) replacing zeroes with the minimum non-zero value before taking logs; (2) adding 1 to zero values before taking logs; or (3) avoiding log transformation. The log2-transformed data were used for all subsequent analyses. Analyses used R software version 3.6.0.
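For illustration, the concentration index and the zero-handling log2 transform described above can each be written in a few lines; the Python below is a sketch (the study used R, with the iNEXT package for confidence intervals), with per-ward counts supplied as a pandas Series.

```python
# Sketch of the 'concentration index' (Hunter-Gaston form of the Simpson
# index) and of the log2 transform with half-minimum replacement of zeroes.
import numpy as np
import pandas as pd

def concentration_index(counts: pd.Series) -> float:
    # Probability that two randomly drawn isolates (or antibiotic doses)
    # come from the same ward; 'counts' holds one count per ward.
    n_total = counts.sum()
    return float((counts * (counts - 1)).sum() / (n_total * (n_total - 1)))

def log2_half_min(x: pd.Series) -> pd.Series:
    # Replace zeroes with half the minimum non-zero value, then take log2.
    half_min = x[x > 0].min() / 2
    return np.log2(x.replace(0, half_min))
```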
Possibly causal associations between antibiotic use and resistance. We examined criteria to identify a priori possibly causal associations between antibiotic use and resistance. The criteria were based on medical and biological considerations, namely, that antibiotics inactive against a variant are unlikely to be prescribed in response to this variant's prevalence; and that antibiotics are most likely to select for a variant when resistance provides a specific advantage, hence, when the variant is not resistant to more potent antibiotics (e.g., CTX/CRO is more likely to select for PA than for CRPA, in which carbapenem resistance provides no additional benefit under CTX/CRO pressure). This rationale led to the following criteria: (1) the variant is always resistant to the antibiotics of interest; (2) the variant is not resistant to antibiotics more potent (in terms of spectrum or efficacy) than the antibiotics of interest; and (3)

Pooled analysis of CTX/CRO- and IPM/MEM-resistant variants. To model the cumulative incidences of 3GCR and CR infections, variants were pooled into resistance categories. When resistance to CTX/CRO or IPM/MEM was not determined by design (such as 3GC resistance in 3GCREC) or by intrinsic resistance (such as 3GC resistance in E. faecium), variants were classified as resistant when the proportion of resistance in our setting was above 80%. The rationale for this choice was that, contrary to the criteria for possible causality, whose stringency was desirable to avoid ambiguity, excluding variants that are mostly resistant to an antibiotic group would bias pooled analyses. Applying the 80% threshold for the proportion of resistance led to classifying CRKP and CREB as 3GCR (91% and 93% 3GC resistance, respectively) but not CREC (61% 3GC resistance); and EF and VREF as CR (84% and 100% carbapenem resistance, respectively, inferred from ampicillin resistance 54). Overall, the 3GCR category included 3GCREC, 3GCRKP, CRKP, 3GCREB, CREB, PA, CRPA, AB, CRAB, EF, VREF and MRSA; and the CR category included CREC, CRKP, CREB, CRPA, CRAB, EF, VREF and MRSA.
Data Availability
All data that support the findings of this study are available from the corresponding author upon reasonable request.
Code Availability
All code that supports the findings of this study is available from the corresponding author upon reasonable request.
Arginine 199 and Leucine 208 Have Key Roles in the Control of Adenosine A2A Receptor Signalling Function
One successful approach to obtaining high-resolution crystal structures of G-protein coupled receptors is the introduction of thermostabilising mutations within the receptor. This technique allows the generation of receptor constructs stabilised in different conformations suitable for structural studies. Previously, we functionally characterised a number of mutants of the adenosine A2A receptor, thermostabilised either in an agonist or antagonist conformation, using a yeast cell growth assay, and demonstrated that there is a correlation between thermostability and loss of constitutive activity. Here we report the functional characterisation of 30 mutants intermediate between Rag23 (an agonist-conformation mutant) and the wild-type receptor using the same yeast signalling assay, with the aim of gaining greater insight into the role individual amino acids have in receptor function. The data showed that R199 and L208 have important roles in receptor function; substituting either of these residues for alanine abolishes constitutive activity. In addition, the R199A mutation markedly reduces receptor potency while L208A reduces receptor efficacy. The A184L and L272A mutations also reduce constitutive activity and potency, although to a lesser extent than R199A and L208A. In contrast, the F79A mutation increases the constitutive activity, potency and efficacy of the receptor. These findings shed new light on the roles individual residues have in the stability of the receptor and also provide some clues as to the regions of the protein responsible for constitutive activity. Furthermore, the available adenosine A2A receptor structures have allowed us to put our findings into a structural context.
Introduction
Previous research has shown that mutating a number of residues can both thermostabilise G-protein coupled receptors and modify receptor pharmacology [1][2][3][4]. Using alanine scanning mutagenesis coupled with ligand binding analysis, it has been possible to generate mutants that are stabilised in different conformations. This approach has led to a number of different high resolution structures of the turkey β1-adrenergic receptor [5,6], the human adenosine A2A receptor (A2AR) [7,8], the neurotensin receptor [9] and the corticotropin-releasing factor receptor 1 [10].
In the case of the A2AR, a thermostabilised construct, A2A-StaR2, modified to include eight thermostabilising mutations, a further point mutation to remove an N-linked glycosylation site and a C-terminal truncation, yielded three different high-resolution crystal structures of the inactive state receptor. Further mutagenesis in the presence of 5′-N-ethylcarboxamidoadenosine (NECA) resulted in an alternative construct, GL31, containing the L48A, A54L, T65A, Q89A and N154A substitutions, in a preferentially agonist-bound conformation. This construct yielded structures of the receptor in complex with both the agonist NECA (PDB accession code: 2YDV) and the natural ligand adenosine (PDB accession code: 2YDO). The conformation of the receptor in these structures is suggested to be intermediate between the active and inactive states [8]. Thus, this mutagenesis approach has the potential to facilitate the determination of high resolution GPCR structures in a number of different conformations.
Recent work in our group has explored the effect that thermostabilisation by mutagenesis has on receptor signalling activity [11]. Using a yeast functional assay which allows distinction between agonist-induced and constitutive receptor activities, we were able to show a correlation between the increased thermostability of the Rant5, Rant21 and Rag23 mutants [2] of the A2AR and the loss of constitutive activity.
Analysis of the mutants intermediate between Rant5, which is in an inactive conformation, and the wild-type (WT) revealed that the effects of the mutations on the pharmacological profile of the receptor were additive [11]. The results of this analysis confirmed earlier findings [12] showing the importance of Threonine 88 for agonist binding and activation, although our study also revealed that the T88A mutant retained the ability, albeit markedly reduced, to bind the natural ligand adenosine [11]. Thus, the thermostable mutants generated for structural studies, together with the yeast functional assay, provide a means of gaining greater insight into the roles of individual amino acids in the mechanism of action of the receptor.
Here we have explored the mutants intermediate between the WT A2AR and Rag23 [2], a thermostabilised mutant in an agonist-binding conformation containing the following five mutations: F79A, A184L, R199A, L208A and L272A. All possible combinations of the mutations were generated, leading to a total of 30 constructs, which were characterised in a yeast signalling assay [13]. The data showed that R199 and L208 have important roles in receptor function; substituting either of these residues for alanine abolishes constitutive activity. In addition, the R199A mutation markedly reduces receptor potency while L208A reduces receptor efficacy. These findings shed new light on the role individual residues have in the stability of the receptor and also provide some clues as to the regions of the protein responsible for constitutive activity. Furthermore, the available adenosine A2A receptor structures have allowed us to put our findings into a structural context.
Materials
Yeast nitrogen base (YNB) and yeast extract were purchased from Difco. Peptone, amino acids and 3-aminotriazole (3AT) were obtained from Sigma-Aldrich and dimethyl sulphoxide (DMSO) from Acros Organics. NECA was obtained from Tocris. The QuikChange Lightning site-directed mutagenesis kit was obtained from Stratagene/Agilent. Fluorescein di-β-D-glucopyranoside was purchased from Invitrogen.
Construct generation and mutagenesis
The Rag23 A2AR mutant was obtained from GeneArt (Regensburg, Germany). The synthetic gene encoded the full-length A2AR gene and contained a FLAG tag at the N terminus. The gene was cloned into the pDDGFP S. cerevisiae expression plasmid [14]. The construct comprises the receptor gene upstream of the gene coding for GFP-His8. The pDDGFP plasmid was then digested using BamHI and HindIII, which excised the complete gene coding for the A2AR + GFP-His8 fusion protein. This gene was then ligated into the integrating p306GPD [13] vector. The intermediate mutants were generated from the Rag23 synthetic gene by site-directed mutagenesis using the QuikChange Lightning site-directed mutagenesis kit and the primers detailed in Table S1 in File S1. A complete list of all the mutants can be found in Table 1.

Expression

All the A2AR constructs were fusions with a C-terminal GFP-His8 tag in the p306GPD vector, transformed using the lithium acetate procedure [15] and chromosomally integrated at the ura3 locus in the MMY24 (MATa fus1::FUS1-HIS3 LEU2::FUS1-lacZ far1 sst2 ste2 gpa1::ADE2 his3 ura3 TRP1::GPA1-Gai3) yeast strain [13]. Estimation of the expression levels for all the constructs using GFP as previously described [14] gave values between 0.4 and 1.4 mg/L (Table S2 in File S1).
Yeast cell growth assay
The yeast cell growth assay was performed as previously described [11]. Briefly, individual colonies of MMY24 cells containing each mutant were inoculated into Synthetic Complete medium lacking uracil ('-URA medium'; 6.7% YNB, 2% D-glucose, ...) containing fluorescein di-β-D-glucopyranoside; the fluorescein released during growth is thus a measure of growth of the culture, which is in turn a measure of receptor activity. Different concentrations of agonist (0.17 pM–0.2 mM) were added. Yeast growth was assessed by fluorescence measurement using a Spectramax M2e plate reader (Molecular Devices) following 23 h incubation at 30 °C. Log10[NECA] against fluorescence curves were plotted and fitted by non-linear regression, providing EC50 values. Data were analyzed using GraphPad Prism 6.0 (GraphPad Software, San Diego, CA, USA). All the data were normalized to WT.
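Although the curve fitting in this study was done in GraphPad Prism, the same analysis can be sketched with SciPy; the four-parameter logistic below and its parameter names are an illustrative reconstruction, not the exact Prism model settings used.

```python
# Illustrative four-parameter logistic fit of fluorescence against
# log10[NECA] to recover pEC50 (potency) and the maximal response (efficacy).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, pec50, hill):
    # Standard dose-response curve with X = log10 concentration and
    # logEC50 = -pEC50.
    return bottom + (top - bottom) / (1.0 + 10 ** ((-pec50 - log_conc) * hill))

def fit_dose_response(log_conc, fluorescence):
    log_conc = np.asarray(log_conc, dtype=float)
    fluorescence = np.asarray(fluorescence, dtype=float)
    # Crude initial guesses: the response span and the mid-range concentration.
    p0 = [fluorescence.min(), fluorescence.max(), -np.median(log_conc), 1.0]
    params, _ = curve_fit(four_pl, log_conc, fluorescence, p0=p0, maxfev=10000)
    bottom, top, pec50, hill = params
    # 'bottom' reflects activity at the lowest agonist concentrations, i.e. a
    # readout related to constitutive activity before normalisation to WT.
    return {"pEC50": pec50, "top": top, "bottom": bottom, "hill": hill}
```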
Functional profiles of the Wild-Type and Rag23 receptors
The activity of each of the receptor constructs is summarised in Table 1. The numerical data for some of the mutants are also shown in Figures 1-5C. The graphical data for all constructs are shown in Figures S1–S4 in File S1. The WT (indicated in pink in Figures 1–5) exhibits two distinct activities: constitutive activity in the absence of agonist and agonist-induced activity in the presence of increasing concentrations of NECA, as observed previously [11]. The constitutive activity makes up almost half of the maximal activity of the WT receptor (Figure 1A, Table 1), in agreement with the high levels of constitutive activity seen for the receptor in mammalian cells [16]. In contrast, the Rag23 construct containing all five of the mutations F79A, A184L, R199A, L208A and L272A exhibits high agonist-induced activity but almost no constitutive activity (Figure 1A), as shown previously [11]. In addition, Rag23 has an increased potency (pEC50) compared with the WT, as observed previously [11].
F79A enhances constitutive activity, potency and efficacy
The effects of the F79A mutation are illustrated by the single mutant Rag23.30. This has increased constitutive activity and potency compared with the WT while retaining similar efficacy (Figure 1A). These effects are also observed in constructs where F79A is combined with A184L and/or L272A (Rag23.13, Rag23.22 and Rag23.25; Figure 1B). All these constructs exhibit WT or higher levels of potency and constitutive activity. In contrast, constructs combining F79A with R199A or L208A (Rag23.23 and Rag23.24; Figure 1B) exhibit markedly lower constitutive activity than the WT. However, the presence of the F79A mutation still increases the constitutive activity of the receptor compared with the equivalent constructs containing R199A or L208A alone (Rag23.27 and Rag23.28; Figure 1A). The same trend is also observed for receptor potency, demonstrating that the effects of F79A on potency and constitutive activity are antagonized by the presence of R199A and L208A. One exception is Rag23.14: the combination of F79A, A184L and L208A produces an overall profile similar to WT. It is not clear from either the data presented here or the structures why this WT-like behaviour is seen for this construct.
Interestingly, the F79A mutation is not thermostabilising but was included in Rag23 because it was preferentially in an agonist-binding conformation [2]. The increased constitutive activity and lack of thermostabilisation of the F79A mutation are consistent with our previous findings demonstrating a correlation between loss of constitutive activity and thermostabilisation [11].
Structural basis of the effects of the F79A mutation

A number of high resolution A2AR structures have been obtained using a variety of techniques and in the presence of different ligands [7,8,[17][18][19]. Given that here we are exploring the roles of mutations in a thermostabilised receptor in the preferentially agonist conformation [2], we have used the structure of a thermostabilised A2AR containing 5 point mutations in complex with the agonist NECA [8] to provide further context to the results of this study (PDB accession code: 2YDV). In the structure, F79 is located on trans-membrane helix 3 (TM3; Figure 6A and 6B), a region of the protein with key roles in GPCR structure and function. The recent analysis of the known GPCR structures by Venkatakrishnan et al. [20] revealed that TM3 interacts with all other TMs except TMs 1 and 7 and is thus central for maintaining the overall GPCR scaffold. Furthermore, residues in TM3 have been shown to form interactions with the ligand in nearly all the receptors for which a high-resolution structure is available [20]. In addition, once the receptor is activated, TM3 forms a critical interaction with the G protein, as exemplified by Arg3.50 interacting with a backbone carbonyl of the C terminus of the G protein [21]. It is therefore not surprising that GPCR activity is very sensitive to mutations in TM3, since these often lead to loss of function or markedly increased constitutive activity [12,20,22]. A comparison of the structure of the A2AR bound to NECA (PDB accession code: 2YDV) with the structure of the A2AR-T4L bound to ZM241385 (PDB accession code: 3EML) reveals a 2 Å upward movement of TM3 in the active-like conformation of the receptor [8]. In the inactive conformation (PDB accession code: 3PWH), F79 forms van der Waals interactions with a number of surrounding residues including F62 and L137. These interactions are lost when F79 is mutated to an alanine, possibly resulting in a less stable inactive conformation. The F79A receptor construct is therefore more likely to adopt an active conformation, with the associated 2 Å upward movement of TM3 explaining the increase in constitutive activity observed for this mutant in our study.
R199A reduces constitutive activity and potency
All constructs containing the R199A mutation exhibited an almost complete lack of constitutive activity and reduced potency (Table 1). The data strongly indicate that R199 has a key role in the constitutive activity of the A2AR. This is further illustrated by Rag23.24, which contains both the F79A and R199A mutations. The profile of this mutant shows that the increase of constitutive activity due to the F79A mutation is almost completely overcome by the R199A mutation (Figure 2) and, as mentioned above, the R199A further reduces the overall constitutive activity of this mutant compared with WT. The R199A mutation also reduces receptor potency compared with WT levels. However, the effects are less dominant than those observed for the constitutive activity. F79A alone increases potency but in combination with R199A produces a construct with WT potency. These findings indicate that R199A cancels out the increase in potency caused by F79A, but does not further affect potency in this construct compared with WT. The Rag23.6 triple mutant (R199A, L208A and L272A) also exhibits no constitutive activity and reduced potency, but this construct also shows a markedly reduced efficacy (Figure 2). One possible reason for this could be reduced expression; however, comparison of the expression levels of the different receptors (Table S2 in File S1) shows that Rag23.6 expresses to the same level (1 mg/L) as the WT.
L208A mutation reduces constitutive activity and efficacy
Almost all the constructs containing the L208A mutation exhibit very low or undetectable constitutive activity and markedly reduced efficacy compared with WT (Figure 3). As for the R199A mutation, the effect of the L208A mutation on the constitutive activity almost completely overcomes the enhancing effect of F79A, except in the presence of A184L (Rag23.14; Figure 3B).
The effects of L208A on efficacy are illustrated by the quadruple mutant Rag23.1, which lacks the L208A substitution and has markedly increased efficacy compared with all the constructs containing L208A. This indicates that the low levels of efficacy observed in Rag23.2, 3, 4 and 5, as well as Rag23, are due to the presence of the L208A mutation (Figure 3A). Although there is variation in the expression level of these mutants, this does not account for the changes in efficacy observed (Table S2 in File S1). Indeed, several of these constructs exhibit similar or higher expression levels compared with the WT receptor. The L208A mutation influences the potency in a similar way to the R199A mutation.
Structural basis of the effects of the R199A and L208A mutations
In the structure of the A2AR, R199 and L208 are both located in the cytoplasmic portion of trans-membrane helix 5 (TM5; Figure 6A and 6C). This region of TM5 is involved in the interaction between the β2-adrenergic receptor and Gαs [21] and is thus likely to be involved in the interaction of the A2AR with the Gpa1–Gαi3 chimera protein used in the yeast cell growth assay.
Both the R199A and L208A mutations affect the constitutive activity dramatically. However, only the L208A mutation affects the efficacy. The structures of the A2AR and the β2AR in complex with Gs provide clues to explain this difference. Residues R199 and L208 are conserved in the β2AR as R221 and L230, respectively. Due to the lack of a structure of the A2AR in an active conformation, we used the structure of the β2AR in complex with the Gs protein to aid interpretation of our data. In this structure, R221 of the β2AR forms a hydrogen bond with a threonine (equivalent to an arginine in the A2AR) on TM3 when the receptor is in complex with the Gs protein. This hydrogen bond no longer exists when R221 is mutated to an alanine. Here we have demonstrated that the R199A mutation completely abolishes constitutive activity of the A2AR without affecting efficacy, suggesting that the interaction between TM5 and TM3 is crucial for constitutive activity but not for agonist-induced activity.
The L230 residue of the β2AR forms a direct interaction with the leucine at the extreme C-terminal end of the G protein.
Mutating the equivalent residue on the A2AR, L208, seems to prevent constitutive G-protein activation but only reduces agonist-induced activation. This suggests that this interaction is crucial for the formation of a constitutively active complex but is of less importance in the formation of the agonist-induced active complex.
The fact that L208 is in direct contact with the G protein, while R199 is involved in making intra-receptor contacts, may explain why the L208A mutation has an effect on both constitutive activity and efficacy while the R199A mutation affects only the constitutive activity.
Our previous study suggested that the agonist-induced and constitutively active conformations of the A2AR are distinct [11]; thus, inhibiting constitutive activity of the receptor does not necessarily have any negative effects on agonist-induced activity.
The effects of A184L and L272A are overcome by the other mutations

As can be seen from the data for the A184L single mutant construct, Rag23.29, this substitution has effects on the three key functional parameters assessed here (Figure 4). It reduces the constitutive activity, potency and efficacy of the receptor (0%, 6.3%, and 63%, respectively; Figure 4B). An almost identical functional profile is seen for Rag23.19 (Figure 4), which combines the A184L and L272A mutations. However, these effects are not observed in those constructs which combine A184L with L208A, R199A and/or F79A. For example, when A184L (Rag23.29) is combined with the strong positive effects of F79A on constitutive activity, potency and efficacy (Rag23.30), this leads to an increase in all three parameters of the resulting mutant (Rag23.25; Figure 4) relative to WT. In contrast, addition of A184L with R199A (Rag23.21; Figure 4) or L208A (Rag23.20; Figure 4) results in an almost complete loss of constitutive activity and reduced potency (with R199A) and efficacy (with L208A).

Figure 6. Position of the F79 (red), R199 and L208 (magenta), and A184 and L272 (blue) residues in the high-resolution crystal structure of the A2AR GL31 thermostable mutant (PDB accession code: 2YDV) (A). F79 is located on TM3 (B), while R199 and L208 are both located at the cytoplasmic end of TM5 (C) and A184 and L272 are located at the extracellular ends of TM5 and TM7, respectively (D). doi:10.1371/journal.pone.0089613.g006
L272A alone has a moderate negative effect on the constitutive activity and potency of the A2AR (Rag23.26; Figure 5). Much like A184L, these effects are markedly influenced by the presence of other, more dominant mutations. For example, when L272A is combined with F79A in Rag23.22 (Figure 5), this mutant still has increased constitutive activity and potency compared with WT, as a result of the strong influence of F79A. In contrast, when L272A is combined with R199A in Rag23.17, this mutant has very low levels of constitutive activity and reduced potency compared with WT due to the influence of R199A.
Structural basis for the effects of the A184L and L272A mutations

A184L and L272A are located on the extracellular ends of TM5 and TM7, respectively (Figure 6A and 6D). Both of these residues are a significant distance from both the ligand-binding pocket and the G protein-binding region. Based on their location in the crystal structure [8], these residues are unlikely to be directly involved in G protein coupling or ligand binding (Figure 6D). The data presented here, together with the available structural data, are not sufficient to explain why A184L and L272A have the observed functional effects in the receptors with these single mutations (i.e. Rag23.29 and Rag23.26). However, this does provide clues as to why F79A, R199A and L208A have more dominant effects on the function of the receptor.
Interestingly, the dominance of the individual mutations is in accordance with their stabilisation effects observed by Magnani and colleagues [2]. One exception is the F79A mutation, which is not thermostabilising [2]. For example, the R199A (Rag23.28) and L208A (Rag23.27) single mutants retained 101% and 108% of the WT binding activity after heating at 30 °C for 30 minutes, respectively, while the A184L (Rag23.29) and L272A (Rag23.26) mutants retained 75% and 79%, respectively. In this analysis, WT binding activity after heating is taken as 50%. The F79A mutant shows similar levels of activity after heating as the WT receptor.
In conclusion, the R199A and L208A mutations inhibit constitutive activity of the A2AR while F79A enhances constitutive activity. In addition, the L208A mutation also affects the efficacy of the receptor. The effects of A184L and L272A are overcome by the more dominant F79A, R199A and L208A mutations. Analysis of the mutations alone and in combination using the yeast assay, together with the known A2AR structures, provides information on the role of individual amino acids in receptor function. However, as with all analyses of this kind, it can be difficult to fully dissect the activity of an individual amino acid from the contributions of all others. A full understanding of the roles of all the amino acid residues will only be revealed through multiple crystal structures in a range of different conformations coupled with detailed dynamics studies.
Supporting Information
File S1 Supporting Figures and Tables. Figure S1, NECA-induced activity of the WT A 2A R, Rag23 and the quadruple mutants intermediate between the WT and Rag23. See Table 1 for the precise details of each mutant. The receptor constructs were expressed in the MMY24 S. cerevisiae strain using the p306GPD vector. The activity of cells containing empty vector is shown as a control. Figure S2, NECA-induced activity of the WT A 2A R and the triple mutants intermediate between the WT and Rag23. See Table 1 for the precise details of each mutant. The receptor constructs were expressed in the MMY24 S. cerevisiae strain using the p306GPD vector. The activity of cells containing empty vector is shown as a control. Figure S3, NECA-induced activity of the WT A 2A R and the double mutants intermediate between the WT and Rag23. See Table 1 for the precise details of each mutant. The receptor constructs were expressed in the MMY24 S. cerevisiae strain using the p306GPD vector. The activity of cells containing empty vector is shown as a control. Figure S4, NECA-induced activity of the WT A 2A R and the single mutants intermediate between the WT and Rag23. See Table 1 for the precise details of each mutant. The receptor constructs were expressed in the MMY24 S. cerevisiae strain using the p306GPD vector. The activity of cells containing empty vector is shown as a control. Table S1, Oligos used to generate the mutant receptor constructs. Table S2, Expression levels of the thirty mutants and the wild-type calculated using the eGFP fluorescence as described by Drew et al.
A Systematic Review of the Role of BIM in Building Sustainability Assessment Methods
Introduction
The scientific community has already proved the relationship between the built environment and environmental problems [1]. Different actions have been taken to reduce buildings' negative impact and fight against environmental issues. Among them are the Building Sustainability Assessment (BSA) methods, which aim at implementing and spreading sustainable principles, evaluating building performance and gathering information to support decision-making [2]. They are usually characterised by assessing several building features and aggregating the results into a sustainability score. Several methods have been developed all over the world by private and public organisations, according to their needs, characteristics and culture [2]. Despite the existence of different BSA methods, the following three are recognised as the basis of all the other approaches [3,4]: Building Research Establishment Environmental Assessment Method (BREEAM), Leadership in Energy and Environmental Design (LEED) and Sustainable Building Tool (SBTool). These will be used in this research and are presented in the next section.
BREEAM
The Building Research Establishment Environmental Assessment Method (BREEAM) was created in 1990 by the Building Research Establishment (BRE) in the United Kingdom. BREEAM was launched as a credit award system for new office buildings but quickly developed systems for other buildings, such as homes, supermarkets or industrial buildings. BREEAM credits are divided over ten categories: Energy, Health and Wellbeing, Innovation, Land Use, Materials, Management, Pollution, Transport, Waste and Water [29]. Each category is subdivided into a set of assessment issues, each with its own aim and benchmarks. Every benchmark needs to be determined by a BREEAM expert assessor before credits can be assigned to the project. Once the assessment is entirely performed, the final score is determined by the sum of the weighted category scores [30]. BREEAM encompasses both mandatory and optional credits and allows credits to be "traded" across different categories, while always setting minimum standards in essential areas.
Nowadays, the BRE has different BREEAM Standards available for Communities, Infrastructures, New Construction, In-use and Refurbishment and Fit-out, and it is recognised in more than 60 countries.
BRE has already started research about the capabilities of BIM. Currently, they have available different BIM-related services, such as certification, consultants, training and some research projects [31]. BRE has also released a BREEAM API to explore and integrate its rating data on thousands of certified building assessments across 50 countries, available for different tools, websites or software [16].
LEED
The first version of the Leadership in Energy and Environmental Design (LEED), developed by the United States Green Building Council (USGBC), dates from 1998. The aim was to provide building owners and operators with a concise framework to identify and implement green building solutions. It is mostly used in the United States of America, and it is recognised in more than 30 countries [9].
LEED is a point-based system, with a balance between known effective practices and emerging concepts, following six major categories: Sustainable Sites, Water Efficiency, Energy and Atmosphere, Materials and Resources, Indoor Environmental Quality and Innovation in Design. Using existing validated technologies, LEED assesses the environmental performance of buildings from an overall point of view during their lifecycle. The number of points that the project earns determines the certification level. In addition to credits, some sections of LEED include prerequisites that also must be satisfied, even though they do not count towards the building's total points [30].
Nowadays, LEED has several rating systems, in four main areas: Building Design and Construction, Operations and Maintenance, Interior Design and Construction and Neighborhood Development. These systems cover different types of buildings, from residential, hospitals, retail, schools and warehouses, among others [32].
BIM applications on LEED are usually initiatives from researchers, private organisations or designers. Several authors have developed their specific applications for LEED, according to their needs. In 2014, the USGBC released some applications for LEED automation: Autodesk apps for LEED, COMNET Energy Modeling Portal, Greengrade LEED Management Software, Green Wizard, IES Tap for LEED, Tracker Plus LEED and Trane [17].
SBTool
The international initiative for a Sustainable Built Environment (iiSBE) developed the Sustainable Building Tool (SBTool). This method is considered one of the most comprehensive of all the BSA methods and has the flexibility to be adjusted to the local conditions of each region [2,3]. This feature allows the sustainability levels of buildings from different countries to be compared.
SBTool has influenced the national rating systems of Austria, Spain, Japan and South Korea. Custom versions have been introduced in Italy, the Czech Republic and Portugal [33]. It can be adapted to assess the sustainability level of different types of buildings, such as houses, offices, schools or medical facilities, and already has versions for urban neighbourhoods.
SBTool has a set of parameters with different weights according to national standards and practices. Each parameter is given a qualitative "score" that results from the comparison between two benchmarks: best practice and conventional practice. After weighting all the parameters, a final sustainability classification is given to the building. The parameter weights and the benchmarks must reflect a country's characteristics and specific factors [2]. The system covers a wide range of sustainable building issues. The scope can be modified to be as narrow or as broad as desired, from more than 100 criteria to half a dozen. Parameter weights can also be adjusted to region-specific and site-specific factors.
Methodology
Given the existing literature gap regarding the application of BIM to evaluate BSA method criteria, the main objective of this study is to understand the actual practical implementation of BIM to evaluate BSA criteria. The goal is to identify which BSA criteria are available (and proven) to be assessed with BIM, as well as the most effective BIM software for such analyses. It is also intended to analyse the topic trend over the past 10 years and the attractiveness of BIM integration in the SBTool method, compared with the two best-known BSA schemes, LEED and BREEAM. Therefore, the following main research questions were established: • What is the actual practical implementation of BIM to assess BSA criteria? • What percentage of BSA criteria can be assessed with BIM? • What BIM software is commonly used to assess BSA criteria? • Which journals are most preferred by researchers on the topic? • Given the current integration of BIM in LEED and BREEAM, would a BIM-automated assessment for SBTool be attractive enough?
To accurately answer the formulated research questions, a systematic review will be carried out, adapted from Tawfik et al.'s [34] guide. Figure 1 summarises all the procedure sequences for this study. After the research question(s) definition, a preliminary search will be performed to identify similar review studies and establish the contribution of this study. Then, the search strategy will be defined in terms of scope and keyword combinations. This review will only focus on publications that directly address the practical assessment of, at least, one criterion from the selected BSA methods-LEED, BREEAM and SBTool. These BSA methods were selected as they provide the basis for all the other existing frameworks [3,4].
The research boundaries were defined by identifying the inclusion and exclusion criteria. The considered period is between 2009 and 2019. There were no restrictions regarding country of origin, BSA method version or applied BIM software, but only English-language publications were considered. Publications for which the full text is unavailable and abstract-only publications were excluded from the analysis.
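These boundaries map directly onto a simple screening predicate. The sketch below encodes them for illustration only; the record fields and sample entries are hypothetical.

```python
# Illustrative encoding of the inclusion/exclusion criteria stated above:
# period 2009-2019, English only, full text available; no restriction on
# country, BSA method version or applied BIM software.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    full_text: bool

def passes_inclusion(r: Record) -> bool:
    return 2009 <= r.year <= 2019 and r.language == "en" and r.full_text

candidates = [
    Record("BIM-based LEED credit evaluation", 2016, "en", True),
    Record("Abstract-only BREEAM note", 2015, "en", False),     # excluded: no full text
    Record("Evaluacion BIM de criterios LEED", 2018, "es", True),  # excluded: not English
]
included = [r for r in candidates if passes_inclusion(r)]
print(len(included))  # 1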
Regarding the database, Web of Science was chosen as the search engine due to its broad citation database, which encompasses records from most of the existing high-impact journals.
By applying all the criteria, publications for consideration will be gathered and exported to a reference citation manager to remove duplicates and for filtering: first by title and abstract reading and then by full-text reading. Finally, after identifying all the key publications for the research, a manual search will be performed to add a couple of publications on the topic that did not appear when using the selected keywords. To analyse all the results, the key publications will be organised in tables covering, among other aspects, the year of publication. This data will be used to carry out a statistical analysis, identifying the following aspects: the percentage of assessed credits from each BSA method, the most assessed categories, the most commonly applied software, the topic trend over the past 10 years and the journals with the most publications on the topic. Based on the current state of implementation of BIM in LEED and BREEAM, the attractiveness of a BIM-based assessment for SBTool will be investigated, as well as the replicability of the applied procedures in those schemes (when applicable).
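For illustration, the tabulation and statistics step could be sketched as follows, assuming each key publication has been reduced to a small record; the field names and sample values are hypothetical.

```python
# Sketch of the tabulation step feeding the statistical analysis: tallies
# per scheme, per year, per journal and per applied software tool.
from collections import Counter

publications = [
    {"year": 2016, "scheme": "LEED",   "software": ["Revit", "Excel"],  "journal": "Automation in Construction"},
    {"year": 2018, "scheme": "BREEAM", "software": ["Revit", "IES-VE"], "journal": "Automation in Construction"},
    {"year": 2019, "scheme": "SBTool", "software": ["Revit"],           "journal": "Automation in Construction"},
]

by_scheme  = Counter(p["scheme"] for p in publications)
by_year    = Counter(p["year"] for p in publications)
by_journal = Counter(p["journal"] for p in publications)
by_tool    = Counter(t for p in publications for t in p["software"])
print(by_scheme, by_tool.most_common(1))
```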
Related Review Studies-Idea Validation
A quick search about the application of BIM in BSA methods shows that different proposals are being used to include BIM in several BSA methods and versions. The results also show an increased interest in the topic, creating a need to identify which criteria have already been assessed with BIM. To date, some systematic reviews have been done on the use of BIM in building sustainability, as presented in Table 1. The most common journal for publishing BIM-based reviews is Automation in Construction with three publications, followed by the Sustainable Cities and Society journal with two publications. Understandably, most of the reviews are focused on the use of BIM in generic sustainability applications, as well as in different project lifecycle stages to improve building sustainability [8,14,20]. A trend concerning reviews of BIM-based Life Cycle Assessments was also verified [35,36]. Common journals and top authors/citations have also already been identified [15].
Table 1. Systematic reviews on the use of BIM in building sustainability.

Journal: Energy and Buildings
Review: Critical review of BIM-based LCA method to buildings [36]
Scope: Review of academic publications centred on BIM-based LCA
Sample: Academic publications
Main findings: The integration of BIM-LCA has mainly been developed for new buildings or projects, and its utility from the early stages of design has been widely recognised. Furthermore, this paper concluded that almost half of the case studies developed an environmental impact assessment based on LCA but focused on the energy lifecycle.

Journal: Automation in Construction
Review: Building Information Modeling (BIM) for green buildings: A critical review and future directions [8]
Scope: Applications of BIM in supporting the design, construction, operation and retrofitting processes of green buildings; the various functions of BIM for green building analyses, such as energy, emissions and ventilation analysis; and the applications of BIM in supporting green building assessments (GBA)
Sample: Over 400 academic publications
Main findings: BIM is an essential tool for the design stage of green buildings and has potential value for the construction, facility and operation management phases. Primary BIM functions to assess the sustainability level of a building include analyses of energy performance, carbon emissions, natural ventilation, solar radiation, natural and artificial lighting, water usage, acoustic performance and thermal comfort. Green BIM applications could bring several benefits for GBA, such as estimating scores, managing application documents and improving process efficiency.

Journal: Automation in Construction
Review: Informetric analysis and review of literature on the role of BIM in sustainable construction [15]
Scope: Current state of the literature on sustainable construction and BIM, including environmental, economic and social dimensions and their search combinations
Sample: Academic publications
Main findings: The number of published scientific works on BIM and sustainable construction registered exponential growth in previous years. There is higher synergy between the environmental and economic dimensions and between the environmental and social dimensions. There is a lack of research that considers all dimensions of sustainability. The top 10 researchers and journals on the subject were identified.
Concerning the application of BIM in BSA methods, Santos et al. [15] identified the potential of BIM to automate the BSA evaluation. Ansah et al. [22] reviewed the application of BIM in several BSA methods, identifying the most-assessed scheme. However, the Ansah et al. review focused on both frameworks and practical applications, with a limited characterisation of the software applied to each assessed criterion and category. Few insights were given about the recent topic trend, the most selected journals and the BSA methods/versions. Thus, the present review's contribution stands out by extending the analysis to more, and only practical, applications of BIM to assess BSA criteria. This review also intends to identify the applied BIM software, the research topic trend over the past 10 years and the journals most selected for publication. Thus, it will be possible to close the knowledge gap about the practical application of BIM in BSA methods and to analyse the percentage of criteria from LEED and BREEAM that can be BIM-automated. This result will provide a basis to analyse the future integration of BIM in the SBTool method against the two most-recognised assessment schemes, both in terms of attractiveness and process replicability.
Publications for Consideration
By applying the keyword combinations of the systematic review (Figure 1), the search in the Principal Collection of Web of Science returned a total of 245 peer-reviewed publications. After applying the reference citation manager filter for duplicated publications, 83 publications were left for consideration. However, after reading the title and abstract of all the remaining publications, only 41 concerned the assessment of BSA criteria from the selected schemes. The final filtration stage was performed by reading the 41 full-text publications (whenever available), resulting in 23 publications addressing the research question(s): 19 for LEED, two for BREEAM, one for both and one for SBTool. Finally, a quick manual search found three additional publications outside the keyword combinations, one for LEED, one for BREEAM and another for both schemes, bringing the final number of publications for consideration to 26. Figure 2 presents the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram with the search phases and number of records. The first exclusion phase represents the title and abstract filtering. Full-text publications that were excluded did not address the practical assessment of one or more BSA criteria.
Initial insights are in line with other review studies, pointing out the research trend of assessing LEED criteria with BIM [26]. LEED has more than 400% of the studies compared with BREEAM. Concerning SBTool, only one publication was found, regarding a framework for a BIM-based assessment, which will be further explored later. All 26 publications will be used to conduct the statistical analysis.
BIM Application in BREEAM
From the performed analysis, five papers regarding the practical application of BIM in the BREEAM method were identified, as listed in Table 2. Between 2013 and 2019, different BREEAM versions were addressed across the five publications. With the data from Table 2, Figure 3 was organised to present the BREEAM categories that were assessed using BIM. The most commonly assessed category is Energy, addressed in 4 out of 5 studies, followed by the Materials category, addressed in three studies. In total, the identified studies assessed 20 different BREEAM criteria, in the categories of Materials, Energy, Land Use and Ecology, Management, Water, Waste, Health and Wellbeing and Pollution (8 categories out of 10). The Innovation and Transport categories have no assessed criteria. To estimate the percentage of credits that can be assessed with BIM, a common BREEAM version would be needed; as all the addressed versions differ, it is hard to define a common percentage for all. Thus, this analysis was only made for BREEAM UK Refurbishment and Fit-out 2014, as it is the publication with the most assessed criteria (eight credits). For this case, a BIM-based procedure could be applied to assess 24% of the scheme version's credits (8 out of 34). Nevertheless, the identified authors were typically able to assess approximately seven credits from each BREEAM version. With regard to software use (Figure 4), Autodesk Revit, IES-VE (IES, Glasgow, United Kingdom) and Visual Studio (Microsoft Corporation, Redmond, WA, USA) were identified as the most commonly applied to assess BREEAM criteria (each used in 2 out of 5 studies). In total, 16 different software tools were used to assess the 20 BREEAM criteria.
Concerning the journals preferred by researchers, Automation in Construction was chosen for the publication of 2 out of 5 studies. Each of the remaining three journals published one study.
BIM Application in LEED
According to the analysis, LEED is the most-used BSA method by researchers regarding the use of BIM. As presented in Table 3, 22 from the 26 identified studies have addressed, at least, one LEED credit, between 2011 and 2019. The most used LEED version by researchers is BD+C: New Construction v3 (2009) is addressed in nine publications, followed by BD+C: New Construction v4 addressed in five papers. Figure 5 presents the different versions applied in the identified publications. Note also for the application of different LEED versions in school buildings, which have happened in three publications. With regard to the software use ( Figure 4), Autodesk Revit, IES-VE (IES, Glasgow, United Kingdom) and Visual Studio (Microsoft Corporation, Redmond, CA, USA) were identified as commonly applied to assess BREAM criteria (all used in 2 out of 5 studies). In total, 16 different software were used to assess 20 BREEAM criteria.
Concerning the preferred journal by researchers, Automation in Construction has been chosen for the publication of 2 out of 5 studies. Only one publication was found for the remaining three journals.
BIM Application in LEED
According to the analysis, LEED is the BSA method most used by researchers with regard to the use of BIM. As presented in Table 3, 22 of the 26 identified studies addressed at least one LEED credit between 2011 and 2019. The LEED version most used by researchers is BD+C: New Construction v3 (2009), addressed in nine publications, followed by BD+C: New Construction v4, addressed in five papers. Figure 5 presents the different versions applied in the identified publications. Note also the application of different LEED versions to school buildings, which occurred in three publications.
All LEED categories have been assessed to some extent in the identified studies, as presented in Figure 6. The most commonly assessed credits are from the Materials and Resources category, addressed in 9 out of 22 publications. The following categories are Energy and Atmosphere (8), Sustainable Sites (7) and Indoor Environmental Quality (6). All the other categories have only been assessed in one publication. In total, the selected articles assessed 84 different credits and 11 prerequisites from the categories of Sustainable Sites, Energy and Atmosphere, Materials and Resources, Indoor Environmental Quality, Innovation in Design Process, Regional Priority, Water Efficiency and Pilot Credits, in different LEED versions. Regarding LEED v3, the most addressed version, a total of five prerequisites and 33 credits were assessed, representing 67% of all the scheme items (excluding Pilot Credits). With regard to the journals most preferred for LEED-related publications, 4 out of 22 papers were published in conference proceedings.
Automation in Construction followed with three publications, while the Journal of Architectural Computing and the Journal of Cleaner Production each contributed two articles.
BIM Application in SBTool
The application of BIM in SBTool is still at an initial stage, with the proposal of conceptual approaches. The only identified study regarding SBTool and BIM dates from 2019 and presented a BIM-based framework for SBTool PT-H, the Portuguese method for assessing the sustainability of residential buildings [5]. This study proposed the creation of an Autodesk Revit API, which can directly and/or indirectly support the evaluation of 24 out of the 25 sustainability criteria. Autodesk Revit was identified as the most useful BIM software in the SBTool PT-H case, with the capability to support the assessment of more than a dozen criteria. This is due to the criteria characteristics, which mainly concern quantitative data from the building model. The authors also identified several common software tools that can be used to assess the remaining criteria, such as Autodesk Green Building Studio (GBS, developed by Autodesk, Inc., San Rafael, CA, USA), Google Maps or Microsoft Excel [5].
A practical application of the proposed framework was already performed for 17 criteria in the categories of Land Use and Biodiversity (5 out of 5), Energy Efficiency (2 out of 2), Materials and Waste Management (5 out of 5) and Occupant's Health and Comfort (5 out of 5). Of these, 12 criteria were assessed by creating shared parameters and using the schedule function of Autodesk Revit (with Microsoft Excel as an interface). The two criteria from the Energy Efficiency category (and one from the Occupant's Health and Comfort category) were assessed by exporting a 3D model to Cypetherm REH (Cype Ingenieros, Alicante, Spain) and GBS to perform the energy analysis. The two remaining criteria from the Occupant's Health and Comfort category were evaluated by exporting the Autodesk Revit model to Cypetherm EPlus and Cypesound RRAE (both from Cype Ingenieros, Alicante, Spain). Currently, seven criteria still require practical validation, namely in the water efficiency category (2), the accessibility category (2), the lifecycle environmental impact (1) and the economic dimension (2). With a BIM-based API, the authors aim to optimise and automate the assessment procedure of SBTool PT-H and support designers during the project phase. This study was published in the Automation in Construction journal.
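To make the schedule-plus-Excel workflow described above more tangible, the sketch below evaluates one hypothetical quantitative criterion from records as they might be exported from a Revit schedule. The field names, values and the criterion itself are illustrative and are not taken from SBTool PT-H.

```python
# Minimal sketch of a quantitative criterion check on data exported from a
# BIM model (e.g., via a Revit schedule into Excel/CSV). All field names and
# values are hypothetical.

model_elements = [
    {"family": "Wall", "volume_m3": 120.0, "recycled_content": 0.30},
    {"family": "Slab", "volume_m3": 200.0, "recycled_content": 0.10},
    {"family": "Beam", "volume_m3": 40.0,  "recycled_content": 0.55},
]

total = sum(e["volume_m3"] for e in model_elements)
recycled = sum(e["volume_m3"] * e["recycled_content"] for e in model_elements)
share = recycled / total
print(f"recycled material share: {share:.1%}")  # input to the criterion's benchmark scoring
```

In such a workflow, the computed share would then be mapped onto the criterion's benchmark scale (conventional versus best practice) by the weighting step of the assessment method.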
Discussion
From the performed analysis, it is possible to validate previous conclusions [22] about the most addressed BSA method. Between 2011 and 2019, 22 papers were published about the practical evaluation of LEED criteria, making it the scheme most preferred by authors. Only five publications were found regarding BREEAM and one about a BIM-based framework for SBTool. Although the research period was set between 2009 and 2019 (the past 10 years), no publications on the topic, including review articles, were found for 2009 and 2010.
A clear publication trend is noticeable in the past years (Figure 8). Until 2015, the subject of BIM integration in BSA methods attracted little general interest, with a couple of publications per year. In 2016, however, an interest peak was reached with the publication of eight related papers. Despite the decrease in publications in the following years, increased interest has been noticeable again since 2018, with a positive forecast for the next years. As BIM platforms and tools are continuously being developed, new approaches and processes are created to support building sustainability assessment.
Furthermore, the global concerns about environmental impacts will also promote research about building sustainability, supporting the positive prediction for the subsequent years. According to the Web of Science database, five articles were published in 2019 about the practical assessment of BSA methods with BIM: three regarding LEED, one about BREEAM and one concerning SBTool. Regarding the preferred journals, Automation in Construction stands out with six publications (Figure 9), three on LEED, two on BREEAM and one on SBTool, representing 23.1% of the authors' choices. As some of the most recognised and cited BIM-related articles (such as [9,13,18]) belong to this journal, new researchers tend to target it for publication. Conference proceedings provided four related articles, representing 15.4%. A significant increase in this type of publication is expected in the following years, as papers that address only one or two criteria are usually insufficient for journal publication. The Journal of Cleaner Production and the Journal of Architectural Computing follow, with two publications each (7.7% each). All the other 12 identified journals had one related publication within the research period (3.8% each).
The most commonly assessed categories are materials- and energy-related ones, both covered by the BREEAM and LEED versions, as presented in Figure 10.
Twelve of the selected articles addressed at least one criterion from those categories. Site-related and indoor environment-related categories are the next ones, approached in 10 and 8 papers, respectively. Overall, these are the categories most commonly assessed with BIM for both schemes. The identified articles also evaluated the design-, water- and region-related criteria for LEED and BREEAM. Operation-related criteria were only assessed for the BREEAM method. This type of result was expected, given the existence and development of several BIM energy analysis tools adapted to region-specific contexts (data for energy and indoor environment-related categories). Material-related categories (quantitative data) are usually assessed through schedules, with the support of Microsoft Excel, both for LEED and BREEAM. Site-related categories (mostly assessed for LEED, in eight publications) can benefit from the use of the LEED Sustainable Sites software, which allows designers to perform a full and concise assessment of the Sustainable Sites category of LEED. In total, 33 different software types were used in the 25 LEED and BREEAM publications. Figure 11 presents all the software that was used in at least two different studies. A clear trend towards the use of Autodesk Revit is noticeable: it was selected in 20 out of 25 publications. Autodesk Revit is mostly used to create and edit BIM models (which are then exported to specific BIM analysis tools), but its capabilities are also used to assess quantitative criteria with the schedule function. Similar conclusions about this usage trend were also reached by [35]. Microsoft Excel was the second most used, applied in six publications. Twenty-four other software types were also used in the identified publications. On average, 2.8 software types are used in each publication, with a minimum of 1 and a maximum of 8.
Concerning the development of a BIM-based assessment for SBTool, only one publication was identified. In this study, a BIM framework is proposed for the assessment of the Portuguese residential version, SBTool PT-H. This can be related to the need to adapt the international SBTool to region-specific factors, whereas LEED and BREEAM have developed international versions that can be applied almost directly worldwide.
Nevertheless, based on Carvalho et al. [5] and on the current work, 24 out of the 25 criteria in the SBTool PT-H framework were theoretically identified as possible to evaluate with the support of BIM. Only one criterion, regarding the building user guide (which is a sort of checklist), cannot benefit at all from the BIM methodology.
By practically implementing the theoretical framework, 17 criteria were already validated with the support of different BIM tools. Autodesk Revit and Microsoft Excel alone can support the evaluation of 12 out of the 25 criteria (site-, material- and indoor environment-related). These mainly concern quantitative data from the site and building, but also specific building conditions. Five other criteria (energy- and indoor environment-related) were also assessed by exporting an Autodesk Revit model to Cype and GBS software.
Of the seven criteria not yet validated, two region-related criteria can be assessed with procedures (Google Maps API) similar to the ones applied by Chen et al. [48,51] for LEED. The water-related category (two criteria) can be assessed by using Autodesk Revit and GBS to forecast water consumption and water-saving measures. Three other criteria (LCA-related and economy-related) require the use of Autodesk Revit and Cype Arquimedes (Cype Ingenieros, Alicante, Spain) in combination with other software, such as GBS, Microsoft Excel or Cypetherm REH.
Overall, to assess 24 out of the 25 criteria from SBTool PT-H with a BIM-based process, a total of eight different software types are required. However, half of the criteria can be evaluated with Autodesk Revit and Microsoft Excel alone. The current practical integration of BIM in SBTool PT-H allows the evaluation of the site-related, energy-related, material-related and indoor environment-related categories. These are the same sustainability assessment categories that can be assessed in LEED and BREEAM. Figure 12 presents a comparison between the criteria that can currently be assessed with BIM for LEED, BREEAM and SBTool. LEED NC v3 and BREEAM UK Refurbishment and Fit-out 2014 were used, as they are the versions with the most assessed criteria. For SBTool, both the theoretical proposal and the practical assessment were used to understand the actual and expected future BIM integration. When comparing all the schemes, it is possible to see that BREEAM UK 2014 has the lowest BIM integration, with only 24% of its criteria assessable with BIM. On the other hand, 67% of LEED v3 criteria (excluding Pilot Credits) and 68% of SBTool PT-H criteria can already be evaluated with BIM. According to the theoretical proposal, SBTool PT-H has the potential to be 96% assessed with the support of BIM (seven more criteria than the actual integration); however, these criteria still require further practical validation. These features make a BIM-based assessment for SBTool PT-H sufficiently attractive. The use of BIM will enable the evaluation of at least the same percentage of criteria as the most assessed scheme, in identical categories and with fewer resources. However, it must be noted that SBTool PT-H is the adaptation of the international scheme to a national context. Some adjustments should be made when replicating the BIM framework for other countries and/or building types.
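The percentages compared in Figure 12 follow directly from the counts reported above, as the short sketch below shows. Note that the LEED item total (about 57) is inferred from the reported 67% figure rather than stated explicitly in the text.

```python
# Recomputing the BIM-coverage shares compared in Figure 12 from the counts
# reported in this review. The LEED total (~57 items) is inferred from the
# reported 67% coverage; the other totals are stated in the text.

schemes = {
    "LEED NC v3 (excl. Pilot Credits)":    (38, 57),  # 5 prerequisites + 33 credits
    "BREEAM UK Refurb. and Fit-out 2014":  (8, 34),
    "SBTool PT-H (validated in practice)": (17, 25),
    "SBTool PT-H (theoretical proposal)":  (24, 25),
}
for name, (assessed, total) in schemes.items():
    print(f"{name}: {assessed}/{total} = {assessed / total:.0%}")
# LEED: 67%, BREEAM: 24%, SBTool: 68% validated / 96% proposed
```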
Conclusions
The construction industry is increasingly embracing BIM as societies' and authorities' concerns about the negative impacts of buildings grow and new approaches to improve building sustainability are sought. The application of BIM for sustainability purposes can reduce the number of required resources, as well as improve the overall quality of a building. Therefore, less energy will be required and fewer emissions will be produced.
BSA methods are also taking advantage of BIM to foster and automate their assessment procedures. The potential of BIM lies in information sharing among the involved stakeholders and in process efficiency, significantly reducing the time necessary to perform a sustainability assessment. BIM also provides designers with detailed information to compare the impacts of different sustainable solutions and to assess the sustainability of their buildings from the early stages of a project.
The analysis made in this study identified that, currently, BIM is mostly used to assess LEED sustainability criteria (22 out of 26 studies). With regard to the BSA categories, globally, energy-related and material-related categories were addressed in 50% of the studies, site-related categories in 42% and indoor environment-related categories in 35%.
Concerning the software, Autodesk Revit was the most commonly used by researchers, adopted in 81% of the identified articles, followed by Microsoft Excel (27%). This is due to Autodesk Revit's capacity to create, edit and export/import BIM models. Autodesk Revit is also frequently used when a specific API is required or to gather quantitative data from the model. Regarding journals, a clear pattern was not identified. Nevertheless, Automation in Construction provided 23% of the papers for this research, conference proceedings provided 15% of the publications, and the Journal of Architectural Computing and the Journal of Cleaner Production each provided 7.7%. The remaining publications came from several different journals.
Overall, at least 67% of the LEED criteria and 24% of the BREEAM criteria can currently be assessed with BIM. According to the analysis, a theoretical proposal aims to reach a 96% assessment of the SBTool criteria using BIM; at the moment, only 68% is already practically validated. Nevertheless, given the current BIM integration in the three schemes, SBTool has great attractiveness potential: it can evaluate the same number of criteria as LEED and BREEAM (or more), in identical categories (energy, materials, site and indoor environment).
Additionally, using only Autodesk Revit and Microsoft Excel, it is possible to support the assessment of 48% of the SBTool criteria. This constitutes a comprehensive basis for the designer's decision-making from the early design stages. Currently, despite the increased use of BIM to assess BSA methods, there is still a knowledge gap between them, and BIM is not yet properly oriented towards sustainable building. As BSA methods are based on multi-disciplinary information, there is still a need to use several different BIM tools. Interoperability problems are also commonly found, requiring time for model checking. Moreover, there is a need to create common procedures and standards to support designers in performing a BSA with BIM. Procedures must be established and validated so that designers can achieve reliable and comparable results.
BSA developers are also aware of this paradigm and are continually developing new strategies to integrate BIM into their systems. All the studied methods already have conceptual or developed frameworks that can be embedded in the BIM workflow to improve and speed up the assessment procedures. Thus, BSA can be more easily articulated with all the other project disciplines, improving information sharing. From the analysis of the current and future applications of BIM in BSA methods, it is expected that the relationship between the two will become more reliable, smoother and faster. It will enable the total integration of BSA in the collaborative process and promote the efficient development of high-performance buildings.
The outcomes of this study reinforce the current knowledge on the topic and establish a basis for future research. The study identified which BSA criteria and categories can already be assessed using BIM and which software is commonly used to implement this process. The attractiveness of a new BIM-automated assessment for SBTool and the replicability of the new approach in the BREEAM and LEED methods were also analysed.
For future research (and based on the limitations of the present study), more databases as well as more keyword combinations should be included in a more comprehensive review. Furthermore, other BSA methods, such as Green Star, DGNB or BEAM, should be included to create a broader basis and knowledge on the topic.
|
2020-07-02T10:29:08.529Z
|
2020-06-28T00:00:00.000
|
{
"year": 2020,
"sha1": "0de2370b3b5387bf16f0b32fa17ff88b97b96cda",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/13/4444/pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ffad659686d8876b564315a4c4264cb584e728c4",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
17641526
|
pes2o/s2orc
|
v3-fos-license
|
Clostridium septicum Gas Gangrene in Colon Cancer: Importance of Early Diagnosis
The Clostridia species are responsible for some of the deadliest diseases including gas gangrene, tetanus, and botulism. Clostridium septicum is a rare subgroup known to cause atraumatic myonecrosis and is associated with colonic malignancy or immunosuppression. It is a Gram-positive, anaerobic, spore-forming bacillus found in the gastrointestinal tract and can lead to direct, spontaneous infections of the bowel and peritoneal cavity. The anaerobic glycolysis of the tumor produces an acidic, hypoxic environment favoring germination of clostridial spores. Tumor-induced mucosal ulceration allows for translocation of sporulated bacteria from the bowel into the bloodstream, leading to fulminant sepsis. C. septicum bacteremia can have a variable presentation and is associated with a mortality rate greater than 60%. The majority of deaths occur within the first 24 hours if diagnosis and appropriate treatment measures are not promptly started. We report a case of abdominal myonecrosis in a patient with newly diagnosed colon cancer. The aim of this study is to stress the importance of maintaining a high suspicion of C. septicum infection in patients with underlying colonic malignancy.
Introduction
The Clostridia species are opportunistic pathogens. Nonetheless, they are responsible for some of the deadliest diseases including gas gangrene, tetanus, and botulism. Clostridial infections were previously known to be a complication of traumatic or surgical wounds causing necrotizing skin or soft tissue infections. Clostridium septicum is a rare subgroup found to cause atraumatic myonecrosis, and in over 80% of cases it is associated with underlying malignancy [1]. It has been reported that the association between C. septicum and malignancy is due to mucosal ulceration, which provides an ideal portal of infection for the organism in patients with colon cancer, acute leukemia, and cyclical neutropenia [2]. C. septicum sepsis is associated with a high mortality rate, with the majority of deaths occurring within the first 24 hours [3]. We report a unique case of newly diagnosed colon cancer and subsequent development of abdominal myonecrosis to emphasize the importance of having a high suspicion for C. septicum in patients with malignancy. This will allow for prompt intervention with broad-spectrum antibiotics and possible surgical debridement.
Case Presentation
A 54-year-old male with a past medical history of hypertension presented with a ten-day history of severe bilateral lower abdominal pain radiating to his back. He reported a twenty-pound weight loss over the past six months. Upon presentation, he was afebrile and his vital signs were stable. On exam, the abdomen was diffusely tender to palpation and bowel sounds were normal with no peritoneal signs. A CT abdomen/pelvis showed multiple hepatic and ascending colonic lesions, with pericolonic fat infiltration and periportal lymphadenopathy. On day two of admission, an ultrasound-guided liver biopsy was performed and pathology showed metastatic adenocarcinoma consistent with primary colonic malignancy. The patient underwent staging with a chest CT, which was negative for metastatic disease. On day three of admission, the patient became hypotensive with a blood pressure (BP) of 105/62 mmHg, which was thought to be secondary to pain medications and responded well to fluid resuscitation. Subsequently, the patient's lab values revealed an elevated creatinine (Cr) of 2.6 mg/dL, which was believed to be due to acute tubular necrosis secondary to hypotensive episodes. On day five of admission, the patient was persistently hypotensive with a BP of 100/60 mmHg, which did not respond to intravenous (IV) fluid resuscitation, and he was transferred to the intensive care unit to initiate therapy with vasopressors. Peripheral blood cultures were drawn, and the patient was empirically started on IV piperacillin/tazobactam 3.375 g every 6 hours and metronidazole 500 mg every 8 hours per Infectious Disease recommendations. His lab values revealed leukocytosis of 16.4 K/µL, creatinine of 4.1 mg/dL, total bilirubin of 3.3 mg/dL, AST of 407 U/L, ALT of 90 U/L, alkaline phosphatase of 259 U/L, and lactic acid of 4.1 mmol/L. Final blood cultures were positive for C. septicum; anaerobic susceptibility testing is not performed at our hospital. The patient was continued on the initial broad-spectrum antibiotic regimen of IV piperacillin/tazobactam and metronidazole for a planned 14-day treatment. Repeat CT of the abdomen/pelvis showed gas collections in the liver and peritoneum (Figures 1 and 2) and in multiple soft tissue and bone areas (Figures 3 and 4), suggestive of clostridial gas gangrene. Lab work indicated worsening liver and kidney functions and the patient developed multiorgan failure. Upon discussion with the patient's family, the decision was made for comfort measures only. The patient expired on hospital day 13.
Discussion
Clostridium septicum was first isolated from the blood of a cow in 1877 by L. Pasteur and J. Joubert. In 1881, R. Koch proved the organism to be responsible for malignant edema, which is defined as acute, rapidly fatal toxaemia usually caused by Clostridium species. C. septicum is a Gram-positive, anaerobic, spore-forming bacillus that normally grows in soil and is a causative agent of atraumatic myonecrosis [4]. C. septicum produces multiple exotoxins, including alpha, beta, gamma, and delta toxins. Of these, the alpha toxin is lethal, hemolytic, and necrotizing; however, unlike the alpha toxin of C. perfringens, the mechanism by which the alpha toxin of C. septicum contributes to pathogenesis is unknown. Nevertheless, it remains an important virulence factor in C. septicum mediated myonecrosis [5]. Although rare, in the setting of malignancy or immunosuppression, it is associated with direct, spontaneous infections of the bowel and peritoneal cavity. The anaerobic glycolysis of the tumor produces an acidic, hypoxic environment favoring germination of clostridial spores [6]. Tumor-induced mucosal ulceration causes disruption of the normal barrier, which allows for translocation of the sporulated bacteria from the bowel into the bloodstream leading to fulminant sepsis. Once the malignancy outgrows its blood supply, the anaerobic environment created is ideal for bacterial growth [7]. Mucosal disruption can also be caused by bowel perforation, surgery, radiation, or a medical procedure such as colonoscopy or barium enema. Impaired host immunity from alcohol abuse, steroids, atherosclerosis, diabetes, or neutropenia is also believed to facilitate translocation. C. septicum is more aerotolerant than C. perfringens; thus it is more likely to infect healthy tissue. The clinical spectrum of C. septicum varies and can present as cellulitis, fasciitis, myonecrosis, abscess, aortitis, or septic shock. However, this bacterium can also present with nonspecific symptoms including abdominal pain, fever, and malaise [8].
Clostridial infections at a single institution were reviewed to determine the impact on mortality. Of the cases reviewed, 281 patients had culture-proven clostridial infection, and C. septicum was found to be the responsible species in 11.4% (n = 32) of cases. There was a 56% mortality rate in C. septicum patients as opposed to a 26% mortality rate for all clostridial infections. An associated malignancy was found in 50% of C. septicum cases, and the remaining 50% of patients had evidence of immunosuppression [9]. In another study, 241 clostridial infections were identified, of which 7.8% were C. septicum. There was a 25% mortality rate for all clostridial infections in comparison to an 80% mortality rate for the C. septicum species alone [10].
Treatment of C. septicum bacteremia consists of early surgical debridement and antibiotic therapy. The empiric antibiotics of choice include IV piperacillin/tazobactam 4.5 g every 6 hours and IV metronidazole 500 mg every 8 hours. For Clostridium species, other appropriate antibiotics include penicillin, clindamycin, cefoxitin, ampicillin/sulbactam, and imipenem/cilastatin. The optimal duration of IV antibiotic treatment has not been defined, although treatment should continue until no further surgical debridement is needed and the patient's hemodynamic status has stabilized [11].
As previously stated, C. septicum is a rare and lethal diagnosis, and therefore early identification and initiation of treatment are crucial to decrease mortality. There should be a high suspicion of C. septicum infection in patients with underlying colonic malignancy who present with signs of sepsis. Blood cultures should be obtained early in order to achieve a timely diagnosis [3]. In patients in whom C. septicum infection is diagnosed without a clear underlying etiology, there should be a strong suspicion for an associated malignancy.
The best-known association between bacterial infections and malignancy is that between Streptococcus bovis and colon carcinoma; however, the connection between C. septicum and large bowel malignancies is well demonstrated in multiple literature reviews. A review of 162 published cases of C. septicum infection demonstrated that 81% of patients had an associated malignancy, of which 34% had an associated colon carcinoma and 40% had an associated hematologic malignancy [2]. Therefore, in the absence of hematological malignancy, colonoscopy is warranted to evaluate for colon carcinoma [6]. The majority of deaths occur within the first 24 hours if diagnosis and appropriate treatment measures are not promptly started.
Conclusion
C. septicum infections are strongly associated with malignancy. In septic patients with hematologic or colorectal cancer, concern for C. septicum bacteremia should remain high. Aerobic and anaerobic cultures should be drawn prior to starting empiric antibiotics. Early diagnosis and aggressive initiation of treatment, including antibiotics and surgical intervention, are crucial in order to improve prognosis and potentially be lifesaving in this deadly infection.
Benign paroxysmal positional vertigo with congenital nystagmus: A case report
BACKGROUND Benign paroxysmal positional vertigo (BPPV) is a form of temporary vertigo induced by moving the head to a specific position. It is a self-limited, peripheral, vestibular disease and can be divided into primary and secondary forms. Congenital nystagmus (CN), an involuntary, rhythmic, binocularly symmetric, conjugate eye movement, is found at birth or within 3 mo of birth. According to the pathogenesis, CN can be divided into sensory-defect nystagmus and motor-defect nystagmus. The coexistence of BPPV and CN is rarely seen in the clinic. CASE SUMMARY A 62-year-old woman presented to our clinic complaining of a 15-d history of recurrent positional vertigo. The vertigo, which lasted less than 1 min, occurred when she turned over and was sometimes accompanied by nausea and vomiting. Both the patient and her father had CN. Her spontaneous nystagmus was horizontal to the right; however, the gaze test revealed variable horizontal nystagmus of the same degree when the eyes moved. The patient's Dix-Hallpike test was normal, except for persistent nystagmus, and the roll test showed severe variable horizontal nystagmus, which lasted for about 20 s in the same direction as her head movement to the right and left, although the right-side nystagmus was stronger than the left-side nystagmus. Since these symptoms were accompanied by nausea, she was diagnosed with BPPV with CN and treated by manual reduction. CONCLUSION Though rare, if BPPV with CN is correctly identified and diagnosed, reduction treatment is as effective as it is for other vertigo types.
INTRODUCTION
Benign paroxysmal positional vertigo (BPPV) is defined as a disorder of the inner ear characterized by repeated episodes of positional vertigo [1]. BPPV is among the common diseases that cause aural vertigo, and 24.1% of patients with dizziness or vertigo are diagnosed with BPPV [2]. Spontaneous nystagmus refers to a continuous, involuntary, and rhythmic movement of the eyeball in the absence of inducing factors and is divided into congenital nystagmus (CN) and acquired nystagmus. The latter is a type of nystagmus commonly seen in the clinic. CN is an ocular motor disorder in which patients are afflicted by periodic involuntary ocular oscillations affecting both eyes [3,4]. It develops during the first 3 to 6 mo of a patient's life and has a prevalence of 14 per 10000 people in the United Kingdom [5]. The etiology of CN is largely unknown, but most patients have lifelong nystagmus, although it can gradually lessen with age in some patients. Some BPPV patients present with spontaneous nystagmus, but BPPV with CN is rare; to date, we have not seen such cases reported. In this report, we present the case of a BPPV patient with CN and her nystagmus findings.
Chief complaints
A 62-year-old woman presented at our clinic, complaining of positional vertigo that had recurred for 15 d.
History of present illness
Fifteen days previously, the patient's symptoms had begun with severe dizziness when she rose, which recurred when she rolled over or lay down. She sometimes experienced nausea and vomiting at the onset of these symptoms.
History of past illness
The patient was physically healthy in the past.
Personal and family history
Both the patient and her father had a history of CN.
Physical examination
The patient's physical examination revealed no abnormal findings.
Laboratory examinations
The patient did not undergo laboratory examinations.
Imaging examinations
The patient did not undergo imaging examinations.
FINAL DIAGNOSIS
The patient was ultimately diagnosed with BPPV with CN.
TREATMENT
The patient was prescribed a barbecue roll maneuver to treat her right, lateral, semicircular canal BPPV of the geotropic type. This treatment required her to lie down with the affected ear facing downward. She then rolled 90 degrees at a time toward the opposite side until she returned to the original position. Her vertigo symptoms disappeared after the therapy.
OUTCOME AND FOLLOW-UP
The patient's follow-up comprised three telephone appointments at 1 wk, 1 mo, and 6 mo after treatment. She was asymptomatic, without any recurrence of vertigo.
DISCUSSION
BPPV is generally categorized as posterior semicircular canal, anterior semicircular canal, and horizontal semicircular canal types. Of these categories, posterior semicircular canal BPPV is the most common (affecting 80%-90% of patients), followed by horizontal semicircular canal BPPV (10%-20%), while anterior semicircular canal BPPV is rare (3%) [6,7]. The Dix-Hallpike maneuver is considered the gold standard test to diagnose posterior canal BPPV, and the supine roll test is considered the gold standard for diagnosing horizontal semicircular canal BPPV [8]. Upbeat-torsional nystagmus is provoked by vertical semicircular canal BPPV. The pathogenesis of BPPV remains unclear; however, risk factors include age, mental stress, osteoporosis, insomnia, and hypertension [9,10]. Currently, the following two theories are widely accepted. First, canalithiasis suggests that when the head is moved relative to gravity, otoliths residing on the macula utriculi migrate into the semicircular canal and are displaced relative to the semicircular canal wall because of gravity, causing endolymph flow and resulting in the deviation of the cupula terminalis and, in turn, corresponding signs and symptoms. When the otolith moves due to gravity to the lowest point in the semicircular canal lumen, the endolymph stops, the cupula terminalis returns to its original position, and signs and symptoms disappear.
Second, cupulolithiasis suggests that the detached otoliths on the macula utriculi adhere to the cupula terminalis, changing the density of the latter relative to the endolymph and making it sensitive to gravity, resulting in the corresponding symptoms and signs [11]. CN usually occurs at birth or within 3 mo of birth. Although this nystagmus persists throughout most patients' lives, some patients' symptoms gradually lessen with age. CN is divided into two categories. The first is congenital motor defect nystagmus, in which eye movement includes fast and slow phases. The second is congenital sensory defect nystagmus, also known as "pendular nystagmus", in which the eye moves at one speed. CN is clinically rare, with an incidence of about 0.005%-0.286% [3]. To our knowledge, BPPV with CN has not been previously reported.
In the United States, about 5.6 million patients per year present clinically with dizziness, and 17%-42% of patients with vertigo are diagnosed with BPPV [12][13][14]. BPPV treatment can be categorized as canalith repositioning maneuvers or vestibular rehabilitation [1]. Diagnosis of horizontal semicircular canal BPPV relies on the supine roll test. During examinations, clinicians should observe whether the direction of the nystagmus is geotropic or apogeotropic and which side of the nystagmus is stronger to enable identification of the patient's affected side. In our case, the Dix-Hallpike test showed signs of horizontal nystagmus without vertigo, and geotropic nystagmus, which was stronger on her right side, was observed during the roll test (Figure 1). Unlike other patients with BPPV, she exhibited persistent horizontal nystagmus to the right after intense nystagmus lasting more than 10 s, which was accompanied by vertigo. The lasting nystagmus is suggestive of CN; it is similar to that observed in patients with spontaneous nystagmus but absent in BPPV patients without spontaneous nystagmus. Spontaneous nystagmus is very common clinically among patients with vertigo.
The following aspects should be used to distinguish CN from other central and peripheral spontaneous forms of nystagmus. The first and most important is the patient's medical history. Nystagmus is always present in CN, and someone in the patient's family will have CN because of the heritability of the disease, while other types of spontaneous nystagmus only appear at the onset of the disease. Second, the direction of peripheral nystagmus is constant, while that of CN is variable. The test can be conducted with Frenzel glasses to observe the nystagmus accurately. The direction of the nystagmus will be seen to remain the same in peripheral nystagmus; however, the direction of the nystagmus is consistent with the eye movement in central nystagmus and variable in CN (Figure 2). Third, a CN patient usually experiences horizontal nystagmus of variable intensity, while other pathologic central nystagmus types may entail vertical and horizontal nystagmus with generally persistent intensity.

Figure 1 Nystagmus in the roll test. A and B: When the head was turned to the right, the highest slow phase velocity (SPV) was 162°/s and then weakened gradually to 8°/s, and vertigo disappeared; C and D: When the head was turned to the left, the intensive nystagmus occurred between 9 s and 30 s with dizziness; the highest SPV was 62°/s and then weakened gradually to 9°/s, and vertigo disappeared.

Figure 2 Spontaneous nystagmus and nystagmus in the gaze test. A: The intensity of spontaneous nystagmus was 20°/s, and it was constant; B: When the eyes moved to the right, the direction of nystagmus changed to the left, and the slow phase velocity (SPV) of nystagmus was 7°/s; C: When the eyes moved to the left, the nystagmus was still to the left, and the SPV of nystagmus was 8°/s.
CONCLUSION
CN is rare in the clinic. If individuals experience spontaneous nystagmus with constant intensity and variable direction, a careful medical history should be taken to identify or exclude CN, which may influence the diagnosis. The treatment for BPPV with CN is the same as that for BPPV. In the case reported here, the patient was diagnosed with BPPV with CN, and the outcome was good.
On the Molecular Driving Force of Protein–Protein Association
The amount of water-accessible-surface-area, WASA, buried upon protein–protein association is a good measure of the non-covalent complex stability in water; however, the dependence of the binding Gibbs free energy change upon buried WASA proves not to be trivial. We assign a precise physicochemical role to buried WASA in the thermodynamics of non-covalent association and perform close scrutiny of the contributions favoring and those contrasting protein–protein association. The analysis indicates that the decrease in solvent-excluded volume, an entropic effect, described by means of buried WASA, is the molecular driving force of non-covalent association in water.
Introduction
Protein–protein association is an archetypal example of molecular recognition whose relevance in governing almost all biological processes is well-established [1][2][3]. In recent years, it emerged that molecular recognition is fundamental for the growth of amyloid-beta fibrils [4], other types of fibrils [5], and the formation of biomolecular condensates through liquid–liquid phase separation [6]. Proteins do not have eyes, but do have very rugged surfaces; nevertheless, they are able to find the right partner to construct the correct non-covalent complex. Notwithstanding the huge number of experimental studies carried out by means of different techniques to determine the structural features of the complexes and the thermodynamics of their formation, a general consensus on the molecular driving force of non-covalent association is still lacking. The present work is not a review of this huge matter, but a study aimed at offering a partial explanation of the important results obtained by Lynne Regan some years ago. She and co-workers [7] tried to make a step forward by performing a detailed analysis of 113 non-covalent heterodimers whose structures have been solved at a resolution better than 3 Å and are present in the Protein Data Bank, PDB [8], and whose binding constants have carefully been determined and are present in the PDBbind v2011 database [9]; they found that the magnitude of the binding constant increases with the amount of buried WASA [10], ∆WASA, but did not find a clear correlation with the chemical nature (i.e., polar or nonpolar) of buried WASA. Note, in this respect, that buried WASA in the 113 heterodimers proved to be, on average, 60% of nonpolar nature; this finding is neither surprising nor unexpected (even though it is generally believed that nonpolar side chains are clustered in the protein interior), because it is in line with the average value determined for the nonpolar fraction of the surface of the native structure of a large set of globular proteins [11,12] (i.e., the buried WASA that drives protein–protein association has nothing special); moreover, Regan and co-workers constructed a plot of the ratio of the binding Gibbs free energy changes, at 298 K, to the buried WASA, ∆G_b/∆WASA, versus the buried WASA for the 113 heterodimers, obtaining the following, somewhat unexpected, results (see Figure 1, which is a reconstruction of Figure 1B of the article by Regan and co-workers): ∆G_b/∆WASA decreases almost linearly for |∆WASA| < 2000 Å², and remains practically constant for |∆WASA| ≥ 2000 Å²; this means that 1 Å² of buried WASA contributes more to the binding constant when |∆WASA| < 2000 Å². A tentative explanation is that when the buried WASA is not large, there are few hot spots that are enough to render the non-covalent complex tight; in contrast, when the buried surface increases, the number of interaction sites is large, and there is no need of hot spots to have a tight association. In fact, for |∆WASA| ≥ 2000 Å², ∆G_b/∆WASA ≈ 17 J mol⁻¹ Å⁻², so that, at 300 K, K_b = 8.3 × 10⁵ M⁻¹ for ∆WASA = −2000 Å², and K_b = 7.6 × 10⁸ M⁻¹ for ∆WASA = −3000 Å². The above K_b estimates are in line with experimentally determined values of the binding constant [7]; note, in this respect, that K_b cannot be too large because non-covalent complexes have to be as stable as necessary to perform their biological function and to dissociate when requested by the cell.
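As a quick arithmetic check (ours, not part of the original analysis), these K_b values follow from the plateau slope via the standard relation ∆G_b = −RT ln K_b; at T = 300 K:

```latex
% back-of-the-envelope check of the quoted binding constants
\Delta G_b \approx 17\ \mathrm{J\,mol^{-1}\,}\text{\AA}^{-2} \times (-2000\ \text{\AA}^2)
           = -34\ \mathrm{kJ\,mol^{-1}}
\;\Longrightarrow\;
K_b = \exp\!\left(\frac{34\,000}{8.314 \times 300}\right) \approx 8.3 \times 10^{5}\ \mathrm{M^{-1}}
```

The same arithmetic with ∆WASA = −3000 Å² gives ∆G_b ≈ −51 kJ mol⁻¹ and K_b ≈ 7.6 × 10⁸ M⁻¹, matching the values quoted above.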
The dependence of ∆G_b/∆WASA versus |∆WASA| should merit special attention because it contains the rules governing the thermodynamics of protein–protein association (even though in a non-transparent fashion); however, to the best of our knowledge, no explanation has been provided up to now for this interesting trend. Buried WASA is a fundamental factor in driving protein–protein association, but other contributions play an important role. To shed light on the matter, it is necessary to devise a theoretical approach able to provide a molecular-level rationalization of the heuristic finding that buried WASA is a fundamental factor; this is exactly the aim of the present study.
The Solvent-Excluded Volume Effect
The presence of a solute molecule, at a fixed position, in a liquid causes a solvent-excluded volume effect because the center of solvent molecules cannot penetrate the solvent-accessible-surface-area of the solute molecule (exactly because the latter occupies that space) [12]. The solvent-accessible-surface-area is the surface generated by a sphere, corresponding to the solvent molecule, that rolls over the van der Waals surface of the solute molecule [10]. In water, it is called water-accessible-surface-area, WASA, and the rolling sphere usually has a radius of 1.4 Å. On inserting a solute molecule in water, keeping constant pressure and temperature, the liquid volume increases by the partial molar volume of the solute, but this fact does not cancel the solvent-excluded volume effect, because the latter is caused by the impossibility of the center of water molecules to penetrate the solute WASA; it corresponds to a decrease in the configurational space accessible to water molecules (i.e., solvent molecules in the general case), and so to a loss in the translational entropy of water molecules (not only the ones that contact the solute van der Waals surface, because the residence time in the hydration shell is very short). The important question is: how is it possible to account for the solvent-excluded volume effect in theoretical treatments of processes and phenomena occurring in water or other liquids? The answer is that the solvent-excluded volume effect has been associated with the theoretical concept of cavity creation in a liquid [13]. The existence of a cavity in water, due to the occurrence of molecular scale density fluctuations at equilibrium, requires that the center of water molecules cannot enter the cavity WASA. Therefore, a loss in translational entropy of water molecules is associated with cavity creation and determines the corresponding Gibbs free energy cost. All this reasoning implies that the ∆G_c magnitude increases, even keeping constant the van der Waals volume of the cavity, with cavity WASA (i.e., WASA of the solute molecule to be hosted), and this expectation is confirmed by calculations [14,15]. Since the solvent-excluded volume effect is ubiquitous in processes occurring in liquids, especially in water, it should not be a surprise that several experimental measurements have shown a strong dependence on solute WASA.
Among them is the experimental determination of the binding constant for the formation of protein–protein heterodimers, as pointed out by Lynne Regan and co-workers [7]. Moreover, the geometric origin of the solvent-excluded volume effect implies that the distinction between nonpolar WASA and polar WASA is irrelevant (the chemical nature does not matter); in fact, no correlation between K_b and the buried nonpolar WASA was detected by Regan and co-workers [7]. The large WASA decrease associated with the formation of a non-covalent complex is schematically shown in Figure 2. WASA burial leads to a re-gain of translational entropy of water molecules (i.e., it corresponds to an increase in the configurational space accessible to water molecules), and provides an always negative Gibbs free energy change driving association.
Theory Section
Several theoretical approaches have been developed to quantitatively describe molecular recognition phenomena [16][17][18]. In particular, the ones devised by Honig and co-workers [19], and by Jackson and Sternberg [20], are similar to that described below; the main difference lies in the idea of and the role assigned to the solvent-excluded volume effect. In order to devise a theoretical framework to describe the formation of a protein–protein non-covalent complex, it is useful to consider the process of bringing two protein molecules from a fixed position at infinite separation to a fixed position at contact distance in water, keeping constant pressure and temperature [21]. In this manner, the translational and rotational entropy loss due to non-covalent complex formation is neglected (but it has to be considered later to make a correct comparison with experimental data that also account for this entropy loss). The binding Gibbs free energy change proves to be:

∆G_b = ∆G(dir) + ∆G(ind) (1)

where ∆G(dir) represents the Gibbs free energy change due to the direct interaction of the two protein molecules in the non-covalent complex, and is independent of water; it consists of both an energetic contribution and an entropic one:

∆G(dir) = E(M-M) − T·∆S(M-M) (2)

where E(M-M) is the energy gain due to direct Monomer–Monomer interactions in the complex, and ∆S(M-M) represents the loss in conformational entropy of the side chains exposed on the surfaces as a consequence of non-covalent complex formation (i.e., the interdigitation of side chains protruding from the surfaces of the two Monomers causes both an energy gain and an entropy loss; this situation is clearly different from that considered for the association of two rigid bodies, such as two large and flat plates, in which the ∆S(M-M) term is neglected [22]). ∆G(ind) represents the indirect part of the Gibbs free energy necessary to carry out the association, and accounts for the features of water, the liquid in which the non-covalent complex (i.e., the Dimer) formation takes place. ∆G(ind) is exactly related to the Ben-Naim standard hydration Gibbs free energies of the Dimer and the two Monomers, respectively [22]:

∆G(ind) = ∆G•(D) − ∆G•(M1) − ∆G•(M2) (3)

where ∆G• is the Gibbs free energy change associated with the transfer of a solute molecule from a fixed position in the ideal gas phase to a fixed position in the water, at constant temperature and pressure. Statistical mechanics allows the exact division of ∆G• in two contributions [13]:

∆G• = ∆G_c + E_a (4)

where ∆G_c is the Gibbs free energy change associated with the creation, at a fixed position in the water, of a cavity suitable to host the solute molecule, and E_a is the energy gain associated with switching on the solute–water attractive interactions. Clearly, solute insertion in water causes a structural reorganization of water–water H-bonds; the latter process is characterized by an almost complete enthalpy–entropy compensation [13]:

∆H_r = T·∆S_r (5)

and so it does not affect the binding Gibbs free energy change. Note that the validity of Equation (5) has directly been verified for the pocket–ligand association by means of molecular dynamics simulations in the TIP4P water model by McCammon and colleagues [23,24].
Inserting Equation (4) into Equation (3), the latter becomes:

∆G(ind) = [∆G_c(D) − ∆G_c(M1) − ∆G_c(M2)] + [E_a(D) − E_a(M1) − E_a(M2)] = ∆∆G_c + ∆E_a (6)

To use this equation, it is necessary: (a) to know the structure of the two Monomers and that of the Dimer; and (b) to calculate ∆G_c and E_a for these structures in water, and to perform the requested differences. This would be a formidable task, especially because the calculation of the reversible work of cavity creation in water is not computationally feasible for large and complex molecules such as globular proteins; however, analyses performed by means of classic scaled particle theory [25], SPT, indicated that ∆G_c scales linearly with cavity WASA for simple cavity shapes [15]; the latter scaling has been confirmed by means of computer simulations in detailed water models [14,26]; moreover, it holds also for the E_a quantity [14,27]. On this basis, Equation (6) can be re-written as:

∆G(ind) = [(∆G_c/WASA) + (E_a/WASA)]·∆WASA (7)

The (∆G_c/WASA) ratio is always a positive quantity, the (E_a/WASA) ratio is always a negative quantity, and ∆WASA is a negative quantity for the formation of a non-covalent complex. If the two protein molecules interact through two large and complementary surfaces, it is feasible to conclude that non-covalent complex formation causes a large WASA burial, with the loss of a large fraction of protein–water attractive interactions and the gain of a lot of protein–protein attractive interactions (i.e., the occurrence of well-designed attractions between the two surfaces is an obligatory necessity to partially balance the energy loss due to the breaking of protein–water attractions, for instance, a large number of protein–water H-bonds). A reliable assumption is that these two contrasting contributions compensate each other to a large extent, so that:

E(M-M) ≈ −f·∆E_a (8)

where f is a number smaller than 1, representing the fraction of the Monomer–water interaction energy that is compensated for by the direct Monomer–Monomer attractions. In this manner, the overall binding Gibbs free energy change is:

∆G_b = ∆∆G_c + (1 − f)·∆E_a − T·∆S(M-M) (9)

Now, it is necessary to assign numerical values to the different contributions present in Equation (9) and to make a comparison with the experimental ∆G_b data of Regan and co-workers [7].
Results and Discussion
Using classic SPT relationships [15,25], it is straightforward to calculate ∆G_c for spherical cavities of increasing radius, and to construct a plot of ∆G_c/WASA_c versus WASA_c. The plot is shown in Figure 3 and refers to 298 K and 1 atm; it is characterized by a linear region for small cavities and a plateau region for large cavities, as is firmly established [14]; this trend seems to be the mirror image of that shown in Figure 1. At the plateau, the ratio ∆G_c/WASA_c ≈ 300 J mol⁻¹ Å⁻², and so it is markedly larger than the plateau of ∆G_b/∆WASA ≈ 17 J mol⁻¹ Å⁻². WASA burial implies a decrease in the solvent-excluded volume effect and so a re-gain of translational entropy of water molecules; the latter provides a favorable contribution to the formation of a non-covalent complex. The above numerical comparison, however, indicates that WASA burial also leads to unfavorable contributions that are able to almost cancel the ∆G_c/WASA_c term. First of all, the two surfaces that produce the non-covalent complex have good energetic attractions between each other, but, in all probability, these attractions are not able to fully compensate for the loss of Monomer–water attractions due to WASA burial. At 298 K, the ratio (E_a/WASA) ≈ −250 J mol⁻¹ Å⁻² for n-alkanes in water [27]; for a protein surface that is 60% nonpolar and 40% polar, the (∆E_a/WASA) ratio can amount to about −350 J mol⁻¹ Å⁻²; it should be reliable to assume that about 2/5 of this value, −150 J mol⁻¹ Å⁻², is not compensated by E(M-M) and must be subtracted from the (∆G_c/WASA_c) contribution. On the other hand, WASA burial and side chain interdigitation lead to a conformational entropy loss whose magnitude rises on increasing |∆WASA| (i.e., the freezing of side chain dihedral angles is more effective in enlarging the interacting surfaces). By taking into account the existing estimates of side chain conformational entropy [28] and the side chain WASA [29], the [T·∆S(M-M)/WASA] ratio should amount to about −100 J mol⁻¹ Å⁻² at 298 K.
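As an aside for readers who want to reproduce the qualitative shape of Figure 3, the following is a minimal sketch (ours, not the authors' code) of the classic SPT cavity-creation calculation; the water hard-sphere diameter, the number density, and the neglect of the 1-atm pressure terms of Pierotti's expressions are assumptions:

```python
import math

# Sketch of the classic scaled particle theory (SPT) estimate of the Gibbs
# free energy of cavity creation in water, to reproduce the qualitative
# trend of Figure 3 (linear rise, then a plateau of dG_c/WASA_c).
R_GAS = 8.314      # J mol^-1 K^-1
T = 298.0          # K
SIGMA = 2.75       # hard-sphere diameter of water, Angstrom (assumed)
RHO = 0.0334       # number density of water, molecules per Angstrom^3
XI = math.pi * RHO * SIGMA**3 / 6.0   # packing fraction, ~0.36

def dG_cavity(r_c):
    """Cavity-creation Gibbs energy (J/mol) for a spherical cavity of
    water-accessible radius r_c (Angstrom); the small 1-atm pressure
    terms are omitted."""
    z = XI / (1.0 - XI)
    red = r_c / SIGMA                      # reduced cavity radius
    k0 = -math.log(1.0 - XI) + 4.5 * z**2
    k1 = -(6.0 * z + 18.0 * z**2)
    k2 = 12.0 * z + 18.0 * z**2
    return R_GAS * T * (k0 + k1 * red + k2 * red**2)

for r_c in (2.0, 4.0, 8.0, 16.0, 32.0):
    wasa_c = 4.0 * math.pi * r_c**2        # cavity WASA, Angstrom^2
    ratio = dG_cavity(r_c) / wasa_c        # J mol^-1 Angstrom^-2
    print(f"r_c = {r_c:5.1f} A   dG_c/WASA_c = {ratio:6.1f} J/mol/A^2")
# The ratio climbs from ~90 toward ~330 J mol^-1 A^-2 at large radii,
# of the same order as the ~300 J mol^-1 A^-2 plateau quoted in the text.
```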
Finally, it is necessary to account for the loss of translational and rotational entropy caused by the formation of a non-covalent complex (i.e., two molecules become a single object), a contribution neglected in the theoretical treatment because it is convenient to assume the two molecules to be in a fixed position. According to Janin and Finkelstein [30], this entropy loss produces a positive Gibbs free energy change of about 60 kJ mol⁻¹ at 298 K. If ∆WASA = −2000 Å², this unfavorable contribution would be −30 J mol⁻¹ Å⁻²; if ∆WASA = −3000 Å², this unfavorable contribution would be −20 J mol⁻¹ Å⁻². Therefore, putting together all our estimates, one obtains ∆G_b/∆WASA ≈ 20 J mol⁻¹ Å⁻² for ∆WASA = −2000 Å², and ∆G_b/∆WASA ≈ 30 J mol⁻¹ Å⁻² for ∆WASA = −3000 Å². These two numbers are surprisingly close to the plateau value in Figure 1, as originally determined by Regan and co-workers [7]. In all probability, such a good agreement is in part the consequence of error cancelation. Nevertheless, the theoretical analysis is correct in singling out the contributions favoring and those contrasting non-covalent association in water; it is interesting that all the terms in Equation (9) depend strongly on buried WASA, confirming the pivotal role played by this geometric measure in shedding light on both structural and thermodynamic features of proteins [10]; this is the physical ground of the several computational procedures devised to quantitatively characterize the energetics of molecular recognition phenomena [31][32][33].
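Collecting the per-unit-area estimates above into a single line (our restatement of the budget; the last term depends on the chosen |∆WASA|):

```latex
\frac{\Delta G_b}{\Delta \mathrm{WASA}} \approx
\underbrace{300}_{\Delta G_c/\mathrm{WASA}}
- \underbrace{150}_{\text{uncompensated } E_a}
- \underbrace{100}_{\text{side-chain entropy}}
- \underbrace{30 \text{ or } 20}_{\text{trans.--rot. entropy}}
= 20 \text{ or } 30 \ \ \mathrm{J\,mol^{-1}\,}\text{\AA}^{-2}
```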
Conclusions
Protein–protein association to form a non-covalent complex is an archetypal example of molecular recognition; it seems to be ruled by the amount of WASA buried in the complex, like other non-covalent association phenomena in water (e.g., micelle formation). Regan and co-workers performed an interesting analysis of the dependence of the ∆G_b/∆WASA ratio upon ∆WASA for 113 non-covalent heterodimers [7]. Since the database is large, the results are robust and merit a tentative rationalization. We have highlighted that the amount of buried WASA is a measure of the decrease in the solvent-excluded volume effect, and so a measure of the gain in translational entropy of water molecules upon complex formation. The latter entropy gain is the molecular driving force of non-covalent association, even though contrasting contributions are operative and produce the unexpected plateau in the trend of ∆G_b/∆WASA versus |∆WASA|.
Author Contributions: Conceptualization, G.G.; methodology, G.G. and R.R.; validation, G.G. and R.R.; writing-original draft preparation, R.R. and G.G.; writing-review and editing, R.R. and G.G.; supervision, G.G.; project administration, G.G.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Gallstone Ileus With Cholecystoenteric Fistula in an Elderly Female: A Case Report
Mechanical small-bowel obstruction can occur due to various reasons, including the impaction of a gallstone in the ileum after it has passed through a cholecystoenteric fistula. Gallstone ileus is an infrequent yet significant cause of this condition. This case report documents an instance of gallstone ileus, which accounts for less than 1% of patients with mechanical small bowel obstruction. We report a 75-year-old female patient who presented with colicky pain in both upper quadrants, hyporexia, and constipation that worsened over a period of nine days and was subsequently accompanied by nausea and bilious vomiting over the next three days. Abdominal CT reported a dilated common bile duct (1.7 cm) with multiple stones inside measuring between 5 and 8 mm, associated with pneumobilia of the intrahepatic bile ducts, and dilatation of small intestinal loops produced by a high-density image of approximately 2.5 cm. Laparoscopic exploration showed an obstructive mass located 15 cm from the ileocecal valve corresponding to a 2.54 x 2.35 cm gallstone, which was removed, and enterorrhaphy was performed. The sine qua non condition for gallstone ileus to occur is the presence of a fistula between the gallbladder and the gastrointestinal tract. The treatment is mainly surgical and should be aimed primarily at the intestinal obstruction and secondarily at the cholecystoenteric fistula. This condition tends to have a high rate of complications and consequently long hospital stays. Making a timely diagnosis provides us with the tools for a surgical approach aimed at the intestinal obstruction and subsequently at the management of the biliary fistula.
Introduction
Mechanical small bowel obstruction can occur due to various reasons, including the impaction of a gallstone in the ileum after it has passed through a cholecystoenteric fistula. Gallstone ileus is defined as a mechanical intestinal obstruction due to impaction within the gastrointestinal tract of one or more gallstones. It occurs in fewer than 0.5% of patients who present with mechanical small bowel obstruction, and it is most frequent in elderly patients and females [1].
Since this pathology is associated with relatively high rates of morbidity and mortality, a reflection of the advanced age of the patients, their deteriorated clinical condition, and the high incidence of concomitant diseases, an early and accurate diagnosis is very important [2].
In this report, we present a case of a 75-year-old female diagnosed with a bowel obstruction due to a gallstone that passed through a cholecystoenteric fistula. The patient was admitted and operated on in an academic hospital.
Case Presentation
A 75-year-old Latin female with a morbid history of arterial hypertension, arthritis, cholelithiasis, and obesity came to the emergency room due to colicky abdominal pain rated 7/10 on a Numeric Rating Scale (NRS), with 0 being no pain and 10 being the worst pain imaginable. Additionally, she reported hyporexia and constipation that worsened over a period of nine days and was subsequently accompanied, over the next three days, by nausea and bilious vomiting every time she ingested any food. During the initial assessment, the patient reported urinating only once all day, indicating that she was experiencing oliguria. The patient reported being compliant with her medications, which included atenolol, chlorthalidone, aspirin, and celecoxib. The patient denied any recent changes to her medications or diet. On physical examination, the patient looked critically ill, with a depressible abdomen, diffuse tenderness on superficial and deep palpation throughout the abdomen, a negative Murphy's sign, no peritoneal irritation, and no masses or visceromegalies. The patient's vital signs were recorded as follows: blood pressure of 130/80 mmHg, heart rate of 78 beats per minute, respiratory rate of 16 breaths per minute, and body temperature of 37.1°C. Oxygen saturation was measured at 100%.
The initial laboratory test results were obtained and are presented in Table 1. An abdominal X-ray was done showing dilation of intestinal loops associated with air-fluid levels, absence of distal intracolonic air, and well-defined radiopacity projected in the right hypochondrium ( Figure 1).
FIGURE 1: Plain abdominal radiograph: (A) standing and (B) sitting
Abdominal CT reported a dilated common bile duct (1.7 cm) with multiple stones inside measuring between 5 and 8 mm, associated with pneumobilia of the intrahepatic bile ducts. Dilatation of small intestinal loops produced by a high-density image of approximately 2.5 cm was identified in the hypogastrium. In addition, a few diverticula in the ascending and sigmoid colon and free fluid in the pelvic cavity were noted (Figures 2, 3).
FIGURE 2: Non-contrast-enhanced computed tomography showing a 2.35 x 2.54 cm object inside intestinal loops (red arrow) FIGURE 3: Non-contrast-enhanced computed tomography showing a cholecystoenteric fistula and multiple gallstones (red circle)
The patient was admitted by the general surgery department with a diagnosis of intestinal obstruction, and surgical treatment was decided. The patient was properly hydrated with 0.9% saline via IV. The surgical approach was by exploratory laparoscopy, finding an obstructive mass 15 cm from the ileocecal valve corresponding to a 2.54 x 2.35 cm gallstone. The surgical technique used in this case involved an initial incision made on the right iliac fossa (RIF), followed by an exploration of the intestinal loops until the obstructive mass was located. Once the mass was identified, a longitudinal incision was made on the segment of the obstruction. The gallstone causing the obstruction was then carefully removed, and the segment was repaired using a technique called enterorrhaphy. This involved suturing the incision made on the bowel segment, thereby restoring the continuity of the intestinal lumen (Figure 4). The primary aim was to resolve the obstruction initially while deferring the management of the fistula to a subsequent surgical intervention, which would entail a cholecystectomy. The patient came out of surgery with a bladder catheter, a nasogastric tube, and a Jackson-Pratt drain with serohematic content. An endoscopic retrograde cholangiopancreatography (ERCP) was performed on the third day after surgery, where choledocholithiasis and an image suggestive of a gallbladder-dependent bilioenteric fistulous tract were observed, and approximately 11 stones were extracted (Figure 5). On the fourth day after surgery, stools were evident on several occasions with peristalsis present, so a liquid diet was started. On the seventh postoperative day, the drain was removed. After tolerating a soft solid diet, the patient was discharged on the eighth postoperative day with a follow-up appointment in seven days. During the subsequent cholecystectomy, the surgeon located the gallbladder, identified the cystic duct and artery, and dissected them meticulously. The gallbladder was detached from the liver bed with caution to avoid any damage to surrounding structures. The site was checked for any signs of bleeding or leakage before closing the right subcostal incision with sutures. The patient was kept under observation in the recovery room before being transferred to a hospital room for postoperative monitoring. No complications were reported.
Discussion
Gallstone ileus is a rare complication of biliary pathology and occurs in 0.3% to 4% of patients with cholelithiasis [3]. According to recent studies, the incidence of small bowel obstruction due to this condition is relatively low in patients under 65 years of age, accounting for less than 4% of cases. However, the risk increases significantly with age, with an incidence rate of approximately 25% in patients aged 65 years or older [4,5]. Its prevalence is higher in women, with a female-to-male ratio of 3.5-3.6:1 [3]. Due to the advanced age of the patients, coexisting medical conditions, delayed hospital admission, and postponed therapeutic intervention, the morbidity rate associated with gallstone ileus can reach 50%, while the mortality rate ranges from 12% to 27% [3][4][5][6].
The sine qua non condition for gallstone ileus to occur is the presence of a fistula between the gallbladder and the gastrointestinal tract [1]. The pathogenesis involves a concurrent episode of acute cholecystitis. The inflammation in the gallbladder leads to adhesion formation between the surrounding structures [1]. The pressure effect of the gallstone leads to necrosis and erosion through the wall of the gallbladder and the fistula formation, with cholecystoduodenal fistula being the most frequent type; however, cholecystocolonic and cholecystogastric fistulas can also result in gallstone ileus. When the gallbladder is free of calculi, it becomes a blind sinus tract and contracts down to a small fibrous remnant [1].
In a 32-year retrospective review of 24 cases, 90% of obstructing stones were greater than 2 cm in diameter, with the majority measuring over 2.5 cm [7]. The minimum diameter of the stone necessary to produce intestinal obstruction is 2.5 cm, unless there is an alteration of the previous intestinal dynamics or some cause of stenosis [8]. In classic gallstone ileus, the stone is most frequently impacted in the distal ileum (70%), followed by the proximal ileum and jejunum (25%), the colon (less than 4.8%), and the duodenum (5%; Bouveret's syndrome) [9]. In the small bowel, the gallstone usually travels to the most distal parts, the terminal ileum and the ileocecal valve, where it may become impacted and cause an obstruction because of their relatively narrow lumen and less active peristalsis. The factors that determine the impaction are the size of the gallstone, the site of fistula formation, and the bowel lumen [10].
Clinical symptoms vary, depending on the site of the obstruction. The most common presenting clinical picture is a mechanical intestinal obstruction with abdominal distention and pain, vomiting, constipation or obstipation, and fluid imbalance. The patient may also have jaundice. In cases of chronic evolution, there will be recurrent episodes of pain caused by the passage of gallstones through the intestine, along with asymptomatic periods, reaching complete obstruction in several stages. During the abdominal exploration, signs include distension and increased bowel sounds. Physical examination and laboratory tests do not point to a particular cause of intestinal obstruction. Laboratory studies may show an elevated white blood cell count, as in this patient, who had a slightly elevated count of 11.5 x 10³/µL, and abnormal liver function tests, which were also present here, with elevated levels of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) of 73 U/L and 75 U/L, respectively [3]. Additionally, it has been reported that an electrolyte imbalance may be present in very few cases [3]. In this case, such an imbalance was not found, consistent with the limited diagnostic significance of this finding.
Plain abdominal radiography frequently demonstrates a nonspecific pattern of intestinal obstruction, and its diagnostic utility in the identification of gallstones is limited. Ultrasonography is most helpful in demonstrating the impacted stone as well as in confirming residual cholelithiasis or choledocholithiasis. A CT scan can confirm intestinal obstruction by identifying the stone and the level of obstruction, with a sensitivity greater than 90% [11]. Rigler's triad is the set of diagnostic imaging features used to diagnose gallstone ileus, which involves the distention of intestinal loops, the presence of radiopaque stones (seen in fewer than 10% of cases), and pneumobilia (known as the Gotta-Mentschler sign) [3]. The diagnosis can be established with the presence of two out of these three signs [3].
The treatment is mainly surgical and should be aimed primarily at the intestinal obstruction and secondarily at the cholecystoenteric fistula; the two may be addressed simultaneously or not, depending on the patient's condition [3,11]. Intestinal obstruction is addressed with an enterolithotomy via laparoscopy or laparotomy. Cholelithiasis and the cholecystoenteric fistula are generally treated in a second surgical stage (or concomitantly with enterolithotomy in low-risk patients) with a combined biliary procedure, which involves cholecystectomy and closure of the fistula [12]. Compared with enterolithotomy alone, the one-stage procedure reduces recurrences of gallstone ileus; prevents malabsorption and weight loss from a persistent cholecystoenteric fistula; and prevents cholecystitis, cholangitis, and gallbladder carcinoma, but with the risk of greater surgical morbidity and mortality [13].
Conclusions
Gallstone ileus is a rare entity that can cause intestinal obstruction mainly in elderly women, with a high rate of complications and consequently long hospital stays. This case report highlights the importance of considering gallstone ileus as a potential diagnosis in patients with a history of cholecystitis and symptoms of bowel obstruction. It also emphasizes the value of utilizing imaging studies to confirm the diagnosis and identify any potential underlying causes. Furthermore, our report underscores the need for timely surgical intervention to prevent complications and improve patient outcomes. Making a timely diagnosis provides us with the tools for a surgical approach aimed at intestinal obstruction and subsequently in the management of the biliary fistula.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Comité de Ética de Investigación (CEI), CEDIMAT issued approval CEI-643. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Plant community structure and possible vegetation changes after drought on a granite catena in the Kruger National Park , South Africa
A preliminary study investigated the associations between vegetation communities along catenary soil gradients in 2015. The severe drought of 2016 in South Africa presented the opportunity to study post-drought savanna vegetation changes. This hillslope transect was surveyed for five successive seasons. The Braun-Blanquet method was used, and the data were analysed by means of the TWINSPAN algorithm, which resulted in the classification of different communities on the crest, sodic site and riparian area. Change in herbaceous and grassy vegetation composition and diversity in the transect is compared between rainfall years, wet and dry seasons, and three different zones (crest, sodic site and riparian areas). Spatial and temporal autocorrelation of the woody component shifted the focus to variance within the graminoid and herbaceous layers. Clear vegetation changes were observed on the crest and the sodic sites, whereas changes in the riparian area were less obvious. In all three habitats, species richness decreased after the drought and did not reach pre-drought levels even after two years. However, plant species diversity was maintained as climax species were replaced by pioneer and sub-climax species. These changes in community structure, which had reverted to systems dominated by climax species by the end of the sampling period, might be an indication of the savanna ecosystem's resilience to drought conditions.

Introduction

The Earth's environment is dominated by three great natural components, namely, climate, vegetation and soil. Climate is considered the most important factor influencing the distribution and composition of vegetation on a micro and sub-continental scale (Campbell et al. 2008; Furley 2010; Scholes 1997; Schulze 1997). Vegetation development is controlled largely by light, temperature and moisture (Bond, Midgley & Woodward 2003; Schulze 1997). Topography and the chemical and physical compositions of the soil also influence vegetation and, in conjunction with climate, are responsible for the intricate interactions that govern the worldwide distribution of vegetation (Campbell et al. 2008; Furley 2010; Scholes 1997). Understanding how these interactions regulate the ecology of plant communities is critical for characterising the impacts of global change on biodiversity at local and regional scales.

The savanna biome is unique because it consists of both woody vegetation and a grass layer. Climate and other regulating factors likely affect these two components differently, resulting in spatio-temporal heterogeneity of tree:grass compositions. Severe droughts, for example, may remove trees, leading to negative effects on woody plant diversity (Swemmer 2016; Walker et al. 1987; Zambatis & Biggs 1995). By reducing tree densities, droughts in savanna provide opportunities for drought-adapted flora to thrive, for instance, by promoting seedling recruitment of fast-growing, palatable shrub species and the re-establishment of a grassy layer (Swemmer et al. 2018; Vetter 2009). In this way, drought can help maintain the balance between trees and grasses (Swemmer 2016). Grasses, on the other hand, can take decades to recover their productive potential or might recover comfortably before the next drought (Swemmer et al. 2018). The herbaceous layer thus also regularly experiences negative responses to drought (Zambatis & Biggs 1995); however, Abbas, Bond and Midgley (2019) indicated that grasses can resprout vigorously after the onset of rainfall events. In fact, this layer usually responds to droughts and other climate changes first, primarily because of the shallow depth of root penetration. Upper soil layers are more susceptible to desiccation than the deeper strata penetrated by many woody plants. Furthermore, the extensive root structures of trees increase their access to subterranean reserves of ground water. Shorter term responses of grassy and herbaceous vegetation were highlighted by Buitenwerf, Swemmer and Peel (2011), who showed that dynamics of this savanna component are mainly controlled by interannual changes in rainfall. The response of the grass layer to climate is of importance for conservation planning and application, because it is an important food source for grazer populations (Staver, Wigley-Coetzee & Botha 2018).
The savanna regions of South Africa are considered semiarid, receiving rainfall mostly during the summer months between October and April (Walker et al. 1987). Fluctuations in annual rainfall, including droughts, are a regular and recurrent feature of the climate.
In more than half of the 80 summer rainfall districts identified by Rouault and Richard (2003), droughts were recorded during 1926, 1933, 1945, 1949, 1952, 1970, 1983 and 1992 (Fauchereau et al. 2003; Gommes & Petrassi 1996). Rouault and Richard (2003) and Staver et al. (2018) indicated that the 1982-1983 drought was the worst drought recorded since 1922; however, Swemmer (2016) indicated that the drought of 2015-2016 was the worst drought that the Lowveld experienced in the past 33 years. In the savanna areas of KwaZulu-Natal, this drought was shown to be the worst in 50 years by Abbas et al. (2019). Research by Hu and Fedorov (2019) indicated that the drought of 2015-2016 was worse than the droughts of 1982 and 1997. These studies show that, since the 1960s, drought is more often associated with El Niño events; notably, however, annual rainfall during wet years has also increased since the 1970s.
South African savannas experienced drought conditions during the rainfall seasons of 2014-2015 and 2015-2016. In the Kruger National Park (KNP), and the surrounding areas of the Lowveld, below average rainfall occurred at annual (255 mm) and monthly scales (Swemmer 2016). This resulted in devastating effects on vegetation, animal and human welfare in certain areas. These years were also marked by unusually high temperatures, resulting in higher evaporation rates, further reducing water availability (Swemmer 2016). The severity of these conditions provided us with the opportunity to study their effects on short-term responses of vegetation, specifically on the grassy and herbaceous component. We conducted a study of seasonal and annual plant community dynamics along a granitic catenal gradient. This catena forms part of a research supersite, where long-term research is needed to establish baselines for monitoring and understanding ecological change (Smit et al. 2013). We describe taxonomic community changes, as well as testing for shifts in diversity, over two wet and two dry seasons through the drought period and compare these with pre-drought conditions (April 2015) described elsewhere (Theron, Van Aardt & Du Preez 2020). We focused only on the herbaceous and grassy components of the vegetation because we were interested in resolving short-term responses in savanna plant resilience to drought.
Study area
The study site is in the southern parts of KNP south of Skukuza (see study area figure in Theron et al. 2020) at 25.111°S and 31.579°E. Kruger National Park falls within the arid 'BSh' (hot semi-arid climate) climate type according to the Köppen-Geiger classification system (Venter, Scholes & Eckhardt 2003). 'BSh' is one of the four climate types within this category. The main features of the 'BSh' climate are distinct seasonal rainfall and temperature variations. Mean annual precipitation (MAP) in KNP is generally in the range of 650 mm (Smit et al. 2013). On a local scale, MAP of the Granite Lowveld varies between 450 mm and 900 mm along the eastern plains and the western escarpment, respectively (eds. Mucina & Rutherford 2006). However, the average annual total rainfall as recorded at the Skukuza Meteorological Station is 553 mm (Zambatis 2006). The mean annual temperature in the vicinity of the study area varies between 21 °C and 22 °C (Khomo et al. 2011; Scholes, Bond & Eckhardt 2003). This area experiences significant seasonal and diurnal temperature variation with extreme periods of inundation and aridity (Kruger, Makamo & Shongwe 2002). The study site is underlain by the Nelspruit Suite geological formation and consists of granite and gneiss mostly occurring in the eastern parts of KNP (Alard 2009; Smit et al. 2013; Van Zijl & Le Roux 2014). Granite gneiss is widespread in the eastern regions of KNP and results in shallow, nutrient-poor soils that vary from grey to red to brown in colour (Venter 1990). Descriptions of the different soil forms found along the catena at the site were provided in Figure 2 of Theron et al. (2020). The vegetation type at the study site is mostly Granite Lowveld (SVl3), characterised by a ground layer of tall grasses.
Data collection
The same hillslope transect was surveyed for five seasons; the first survey (Theron et al. 2020) was conducted prior to the onset of the severe drought conditions of December 2015 to April 2016 (Figure 1). The second and fourth surveys represent the start of the rainy summer season, while the third and fifth surveys reflect the end thereof (Figure 1). Relevés of 10 m² were aligned along a 500 m transect. Cover abundance was recorded per species according to the modified Braun-Blanquet scale (Kent 2012; Kent & Coker 1992; Van der Maarel & Franklin 2013; Theron et al. 2020).
Classification, richness and diversity analysis
The analysis done by Theron et al. (2020)
Classification
VegCap (an unpublished database tool designed by N. Collins) was used to capture vegetation data into a macro-enabled Excel spreadsheet. From there, the data were imported into JUICE© (Tichý & Holt 2006), where a Modified TWINSPAN Classification (Roleček et al. 2009) analysis was carried out. Parameters for this analysis included the following: a pseudospecies cut level of 5; a minimum group size of 3 and a maximum of 54 clusters; and division reached an endpoint if dissimilarity dropped below 0.3 based on average Sørensen dissimilarity. The resulting clusters were then arranged within both JUICE© and Excel to form the final vegetation communities. Although all the species were recorded during the field surveys, woody species were removed from the data in order to examine the change in graminoid and herbaceous species after the drought. This follows, for example, Rouault and Richard (2003), who indicated that trees and other vegetation with extensive root structures have access to subterranean reserves of groundwater and will thus not be immediately affected by drought. The naming of communities and sub-communities was carried out according to the guidelines presented in Brown et al. (2013). In order to obtain diagnostic, constant and dominant species, we made use of the Analysis of Columns of a Synoptic Table function in JUICE©.
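To make the division stopping rule concrete, the following minimal Python sketch computes the average pairwise Sørensen dissimilarity within a cluster of relevés, the quantity compared against the 0.3 threshold above. It is an illustration only, assuming simple presence/absence vectors; it is not the JUICE© implementation of Modified TWINSPAN, and the example matrix is hypothetical.

```python
import numpy as np

def sorensen_dissimilarity(a, b):
    """Sørensen dissimilarity between two presence/absence vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    shared = np.sum(a & b)
    total = np.sum(a) + np.sum(b)
    if total == 0:
        return 0.0
    return 1.0 - (2.0 * shared / total)

def mean_within_cluster_dissimilarity(matrix):
    """Average pairwise Sørensen dissimilarity among relevés in a cluster.

    matrix: rows are relevés, columns are species (presence/absence).
    """
    n = matrix.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 0.0
    return float(np.mean([sorensen_dissimilarity(matrix[i], matrix[j])
                          for i, j in pairs]))

# A cluster is considered homogeneous enough to stop dividing once its
# average within-cluster dissimilarity drops below the 0.3 threshold.
cluster = np.array([[1, 1, 0, 1],
                    [1, 1, 1, 0],
                    [1, 0, 0, 1]])
stop_division = mean_within_cluster_dissimilarity(cluster) < 0.3
```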
Diversity and richness
In addition to descriptions of community composition and how this changed over time, we evaluated changes in diversity and compared these across time for each of the three communities. We compared changes in species richness as well as changes in alpha-diversity. We used the Chao estimator as an indicator of species richness, as this index accounts for the occurrences of singletons and doubletons, and the Shannon index was used to quantify alpha-diversity. For each sample (i.e. per season and per habitat), ordinal abundance data as scored by the Braun-Blanquet system were converted to abundance cover data, rounded to integer values, following Van der Maarel (2007): r = 1; + = 2; 1 = 3; 2a = 8; 2b = 18; 3 = 38; 4 = 63; 5 = 88. Diversity estimates were computed using the iNEXT package (Hsieh, Ma & Chao 2016) for R (R Core Team 2015). The iNEXT function was used for extrapolation and prediction of diversity indices based on rarefaction procedures, with the expected means and standard errors extrapolated from the asymptotes of the fitted accumulation curves (see Figure 2). In all cases, accumulation curves approached or reached an asymptote, and observed data represented between 80% and 100% of extrapolated estimates in the case of species richness, and between 94% and 100% of extrapolated estimates for Shannon diversity, depending on the sample. Thus, sampling effort is considered sufficient for reliable estimations of diversity in these communities.
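For readers without the R toolchain, the sketch below illustrates the raw quantities behind these estimates: the Van der Maarel (2007) ordinal-to-cover conversion, a bias-corrected Chao1 richness estimate, and the Shannon index. This is a simplified stand-in assuming abundance-based singleton and doubleton counts; the study itself used rarefaction and extrapolation via iNEXT in R, which this sketch does not reproduce, and the example relevé is hypothetical.

```python
import math

# Van der Maarel (2007) ordinal-to-cover transformation used in the paper
BB_TO_COVER = {'r': 1, '+': 2, '1': 3, '2a': 8, '2b': 18,
               '3': 38, '4': 63, '5': 88}

def to_abundances(bb_scores):
    """Convert a relevé's Braun-Blanquet scores {species: score} to integers."""
    return {sp: BB_TO_COVER[score] for sp, score in bb_scores.items()}

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from species abundances."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singletons
    f2 = sum(1 for c in counts if c == 2)   # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    """Shannon diversity index H' from species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

releve = {'Themeda triandra': '3', 'Panicum maximum': '2b',
          'Sporobolus nitens': '+', 'Urochloa panicoides': 'r'}
counts = list(to_abundances(releve).values())
print(chao1(counts), shannon(counts))
```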
Ethical considerations
Ethical approval was obtained from the Interfaculty Animal Ethics Committee of the University of the Free State (UFS-AED2019/0121).
Results and discussion
Classification

Different plant communities were classified for each topographical unit as defined by Theron et al. (2020). In this article, the data for 2015 were not included in the classification in order to prevent a repetition of information.
Crest communities (December 2016-April 2018)
These communities, located on the crest zone and upslope beyond the sodic site, occur on the Clovelly, Pinedene, Fernwood, Estcourt, Mispah and Sterkspruit soil forms (Theron et al. 2020). The soil depth varies from 533 mm to 620 mm, with an average pH H₂O of 5.95-6.08. Soil texture is mostly loamy sand to coarse loamy sand (Theron et al. 2020).
Sodic site communities (December 2016-April 2018)
The communities occur between the crest and the riparian area on the mid-slope of the hill, and are also sodic sites. Soils are mostly of the Sterkspruit form; however, there were also instances of Mispah soil forms present. The depth varies between 180 mm and 500 mm with an average pH H₂O of 6.20-6.43. Soil texture is coarse sandy loam. The vegetation classification resulted in two communities and four sub-communities (Online Appendix 2). In terms of vegetation composition, these communities can be compared to the Dactyloctenium aegyptium-Sporobolus nitens community (community 4) of Theron et al. (2020).

The vegetation found in this sub-community mostly represents species from growing season 4 with a single occurrence of season 2. Species from Species Group A (Online Appendix 2) define this sub-community. These species are completely absent or occur with very low cover-abundance values in other communities and sub-communities on the sodic site. Urochloa panicoides, which defines this sub-community, is known as a pioneer annual tufted grass and will thus only be present for one season (Van Oudtshoorn 2018). In this sub-community, this grass co-occurs with Sporobolus nitens, which defined the communities found in 2015 before the drought:

Vegetation in this sub-community is mostly from growing season 5 (April 2018) with a single occurrence of vegetation from growing season 3 (April 2017). Although S. nitens is the diagnostic species for this sub-community, the presence of species from Species Group D defines this sub-community. These species are completely absent from sub-community 2.2. Season 5 marks the return of S. nitens (with high cover abundance) and Dactyloctenium aegyptium (with low cover-abundance values and only in some relevés), which dominated the communities found on the sodic site by Theron et al. (2020) in 2015:
Chloris virgata-Eragrostis cylindriflora-Chloris gayana Sub-community
Sub-community 2.2 is defined by the presence of perennial grasses from Species Group E, which are absent from sub-community 2.1. Although having low cover abundances and not occurring in all relevés, this is the only season in which these grass species were found. All three of these grass species (Chloris gayana, Eragrostis gummiflua and Aristida stipitata) are regarded by Van Oudtshoorn (2018) as sub-climax species, which might indicate that after the third season, the sodic site started to recover from the severe drought of 2015-2016.
Riparian area communities (December 2016-April 2018)
The communities occur between the sodic site on the lower midslope of the hill and the drainage line. Soil forms found in this area include Dundee, Mispah, Bonheim and Sterkspruit. The depth of these soils varies from 100 mm to 600 mm with an average pH H₂O of between 6.21 and 6.73. Soil texture also varies from sandy loam to loam to sandy clay loam. In contrast to the other terrain units depicted along the catena, the riparian area's classification did not result in communities that could depict the different seasons of sampling. The vegetation classification resulted in five communities (Online Appendix 3). The vegetation of the riparian communities can be compared to communities 1 (Panicum maximum-Pupalia lappacea) and 2 (Themeda triandra-Flueggea virosa) of Theron et al. (2020).

Eragrostis cylindriflora (Species Group G) and Urochloa mosambicensis (Species Group H) define this community. Species from Species Group A are mostly present in community 1 and are absent, or occur with low cover-abundance values, in other communities in the riparian areas. This community represents sampling seasons 2, 4 and 5. It is notable that none of the relevés done during season 2 (just at the onset of the rainy season) is present in this community. Community 1 also shares many species from Species Group B with community 2:
Themeda triandra-Panicum maximum Community
Diagnostic species: None

Community 2 is defined by the presence of species from Species Group C, which are mostly restricted to this community, although they occur with low cover-abundance values. Notable in this community is the strong presence of Themeda triandra (Species Group D) and Panicum maximum (Species Group H), which were also present as diagnostic species defining the riparian areas in Theron et al. (2020). It seems as if Themeda triandra is mostly limited to this community, where it has high cover-abundance values. However, Panicum maximum occurs throughout all the communities present in the riparian area throughout all the sampling seasons. This community is also mostly represented by sampling seasons 3 and 5 with some instances of sampling season 4:

Community 3 is the community with the lowest number of species of all the communities found in the riparian area, and there are no species that clearly distinguish this community from the other communities in the riparian area. The cover abundance of species in this community is also low, and species do not occur in all the relevés found in this community. It is only the grass Eragrostis superba (Species Group H), known to grow in disturbed areas (Van Oudtshoorn 2018), that occurs in all three relevés that make up the community. Vegetation in this community mostly represents sampling seasons 2 and 5. The reason for the low number of species might be that the vegetation still needed to recover after the drought.
Eragrostis rigidior-Urochloa mosambicensis Community
Diagnostic species: None

Vegetation in this community is dominated by species from Species Group E, which are mostly absent from the other communities in the riparian area. Furthermore, Urochloa mosambicensis (Species Group H) also occurs more frequently and with a higher cover abundance in this community. According to Van Oudtshoorn (2018), U. mosambicensis grows in disturbed or overgrazed and trampled areas. The high occurrence of this species in the riparian area might indicate that animals were seeking shade in order to evade the heat of the day during the 2015-2016 drought. Van Oudtshoorn (2018) further indicated that Eragrostis rigidior is known to occur in disturbed soil. It is also important to note that most of the relevés present in this community represent sampling season 2, which was just after the 2015-2016 drought:

This is the only community that is solely represented by vegetation sampled during sampling season 4. The vegetation is mostly dominated by the presence of species from Species Group F, which are mostly absent or occur with low cover-abundance values in other communities of the riparian area. The grasses Bothriochloa radicans and Eragrostis trichophora are known to occur in areas with additional moisture or where water collects (Van Oudtshoorn 2018). A possible explanation for this might be that, after rains, water can remain close to the surface in the vicinity of the riparian area, which contributes to the additional moisture that is favourable for these grasses.
Bothriochloa radicans-Eragrostis superba Community
Although no distinction could be made between the sampling seasons in the riparian area of the study site, there are differences in the vegetation composition over the study period. When comparing the vegetation of the riparian area with communities 1 and 2 (Theron et al. 2020), it is clear that Panicum maximum, Urochloa mosambicensis and Themeda triandra remained an important part of the vegetation composition over all the different sampling seasons.
Richness and diversity of plant communities
From Figure 3a, it seems as if the species richness decreased at all the sites during the drought and subsequently increased more or less progressively through time as the communities recovered from the drought between 2015 and the onset of the current sampling period. However, pre- versus post-drought richness estimates are only significantly different for the sodic and riparian habitats (non-overlapping 95% confidence intervals between groups); variance in estimates for the crest communities is high and overlaps with the pre-drought estimate. Interestingly, however, the recovery in species richness in sodic and riparian habitats appeared to slow or even reverse by the end of the study period (April 2018), although this could be because the final sample was taken in the dry season. Overall, species richness in crest habitats was greater than in both sodic and riparian habitats. Figure 3b represents the changes that took place in diversity over the different sampling seasons. In contrast to richness, species diversity did not differ between pre- and post-drought periods. However, a more cyclic seasonal shift is apparent, in that diversity was often highest in the wet seasons (December samples) compared with the dry season samples (April). The sodic and riparian habitats are an exception to this trend, because diversity in these areas was low in December 2016, perhaps because of a lag in recovery from the drought. As with species richness, diversity was also consistently greater in the crest compared with the other two habitats.
While these indices of diversity provide some indication about changes in the studied communities, their overall function might be better represented in terms of changes in plant functional groups. Indeed, in all three habitats, the proportional representation of plant functional groups differed between 2015 and 2016, with climax and sub-climax species being replaced by pioneers, perennials, annuals and, in some cases (especially in the sodic habitat), bare soil (Figure 4). By the end of the sampling period, however, the frequency distribution of functional groups at each habitat was qualitatively similar to pre-drought conditions.
General discussion
With this study, we aimed to determine how savanna plant communities along a catenal gradient changed over time following a severe drought. The catenal gradient studied could be divided into three plant communities: the crest and midslope with the highest diversity, the sodic site, and the riparian areas. The crest and sodic sites further showed a definite change in species composition among the different sampling seasons. There was also an association between April sampling seasons for the crest, as well as associations between the December and April sampling seasons for the sodic site. Vegetation in the riparian section of the study revealed no clear distinction between different sampling seasons or any correlation between April and December. Scholes (1985) investigated the drought of 1981-1983 and found that the grasses were more adversely affected by the drought than the trees. Although we excluded data for woody plants from this study, it is clear that vegetation changes took place in the ground layer (graminoids, forbs, herbs and geophytes), especially in the crest and sodic site communities (see Janecke 2020).
Previous studies have indicated that the physical and chemical properties of soils affect grass mortality rates during drought conditions (Khomo & Rogers 2005; Khomo et al. 2011; Scholes 1985). Referring specifically to the characteristics of the study site and its catenary properties, it is expected that grasses inhabiting the sandy crest and valley bottoms would have a higher mortality rate than those inhabiting the clay-rich sodic sites and downslopes. The physical properties of sandy soils would compound the effects of droughts because they retain less water than clay soils do, and also through exacerbating water infiltration and percolation of any available surface water. The effect of soil properties was shown to also affect this catena complex (Theron et al. 2020). This is comparable to the present study because most of the grass species dominating the climax community (sampling season 1; 2015) returned to the vegetation composition of communities during sampling season 3 (April 2017). We furthermore found that richness and diversity declined and that recovery was not complete two years after the drought, especially in the sodic and riparian habitats, which maintained a low level of species richness throughout the sampling period. These shifts coincided with changes in functional group representation following the drought.
Conclusion
Definite changes in plant community composition were seen in the crest, midslope and sodic sites during the different sampling seasons. Shifts were also seen in terms of species composition at certain times of the year. This was not always clear in terms of richness and diversity of plant species. We would, however, be cautious about extrapolating these findings to all vegetation successions along a catena.
In the riparian area, no distinctions were clear between the different sampling seasons and no cyclic correspondence was observed between April and December. This phenomenon might be ascribed to water movement through the process of hydraulic lift from deeper soil layers, which lessens the impact of drought on the vegetation.
We recommend that future studies following droughts should be done over more sampling seasons than reported here to better relate seasons to plant assemblages. Lastly, the recovery of the plant growth forms from 2015 to 2018 might be an indication of the resilience of the savanna ecosystem, in spite of the recovery not being complete.
Identification of Novel Membrane Structures in Plasmodium falciparum Infected Erythrocytes
Little is known about the molecular mechanisms underlying the release of merozoites from malaria infected erythrocytes. In this study membranous structures present in the culture medium at the time of merozoite release have been characterized. Biochemical and ultrastructural evidence indicate that membranous structures consist of the infected erythrocyte membrane, the parasitophorous vacuolar membrane and a residual body containing electron dense material. These are subcellular compartments expected in a structure that arises as a consequence of merozoite release from the infected cell. Ultrastructural studies show that a novel structure extends from the former parasite compartment to the surface membrane. Since these membrane modifications are detected only after merozoites have been released from the infected erythrocyte, it is proposed that they might play a role in the release of merozoites from the host cell.
Carlos A Clavijo, Carlos A Mora, Enrique Winograd*/+
Laboratorio de Biología Celular, Instituto Nacional de Salud, Avenida El Dorado con Carrera 50, Bogotá, Colombia
*Departamento de Química, Universidad Nacional de Colombia, Bogotá, Colombia
Key words: Plasmodium falciparum -parasite release -membrane fusion -parasitophorous vacuolar membrane
The erythrocytic life cycle of the human malarial parasite Plasmodium falciparum is responsible for most of the pathology and mortality associated with this disease (Miller et al. 1994). The cycle is initiated by entry of a merozoite into the host red blood cell by invagination of the erythrocyte plasma membrane (Ward et al. 1993, Dluzewski et al. 1995). During the next 48 hr, the intracellular parasite develops surrounded by two membranes: the erythrocyte plasma membrane and the invaginated membrane closely apposed to the parasite itself, the parasitophorous vacuolar membrane. During the first 20 hr of development, the young feeding parasite (trophozoite) is observed as a "ring form". After this time, an increase in various metabolic activities, including the degradation of hemoglobin in digestive vacuoles, takes place. Digestion of hemoglobin results in the production of amino acids and an insoluble pigment called hemozoin, which accumulates in and is characteristic of the mature trophozoite stage.

Little is known about the molecular mechanisms underlying the release of merozoites from malaria-infected erythrocytes. Video-microscopy studies have shown that the actual rupture of a schizont-infected erythrocyte is preceded by swelling and vesiculation of the host cell membrane, and then merozoites are released with explosive suddenness (Dvorak et al. 1975, Hermentin & Enders 1984). Malarial proteases have been implicated in merozoite release, since rupture of infected erythrocytes can be prevented through the use of protease inhibitors (Banyal et al. 1981, Hadley et al. 1983, Lyon & Haynes 1986). Furthermore, plasmodial proteases with activities against known red cell membrane skeletal proteins have been isolated and characterized (Deguercy et al. 1990); however, it has not been determined whether these proteases actually participate in the rupture of the red cell membrane during merozoite escape. At the time when schizogony is nearly complete, the energetic charge of the infected erythrocyte falls (Yamada & Sherman 1980). Such a reduction in ATP levels could disturb the osmotic gradient across the erythrocytic membrane that is required for maintaining cellular volume. Possibly, pore-forming proteins, which have been reported in other parasitic protozoa (Noronha et al. 1994, Andrews 1994), could function in the escape of merozoites from the infected cell, but these are yet to be identified in the case of malarial parasites.
In this study, a preliminary characterization of membranous structures found in the culture medium at the time when merozoites are released from infected erythrocytes has been carried out. It is shown that membranous structures are composed of a surface membrane, a parasitophorous vacuolar membrane, a residual body containing electron-dense material, and a novel structure which extends from the former parasite compartment to the surface membrane. Our results suggest a possible function of these novel membrane modifications in the process of parasite escape from the host cell.
MATERIALS AND METHODS
Isolation of membranous structures - P. falciparum cultures (Trager & Jensen 1976) were synchronized by gelatin flotation (Jensen 1978) followed by sorbitol lysis (Lambros & Vandenberg 1979) of mature infected erythrocytes. Parasite maturation was monitored every 8 hr by Giemsa staining of thin blood smears. When parasites reached the ring stage, the culture was centrifuged at 230 x g for 5 min. The supernatant containing membranous structures was centrifuged at 900 x g for 10 min, and the resulting pellet was resuspended in a small volume of 10 mM sodium phosphate buffer pH 7.4 containing 0.145 M NaCl (PBS). The suspension of membranous structures was overlaid on top of a discontinuous gradient composed of 5% and 13% Percoll cushions (vol/vol in PBS), and centrifugation at 900 x g for 15 min was carried out. Membranous structures were isolated from the 5-13% Percoll interphase and washed three times in PBS.
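As an aside for readers adapting this protocol, centrifugal forces given in x g translate to rotor speed through the standard relation RCF = 1.118 x 10^-5 x r x rpm², with r the rotor radius in centimetres. The short Python sketch below applies this to the spins above; the 15-cm radius is a hypothetical example, not a rotor specified by the authors.

```python
# Standard RCF-to-rpm conversion: RCF = 1.118e-5 * r_cm * rpm^2.
def rpm_for_rcf(rcf_g, rotor_radius_cm):
    """Rotor speed (rpm) needed to reach a given relative centrifugal force."""
    return (rcf_g / (1.118e-5 * rotor_radius_cm)) ** 0.5

# Example with a hypothetical 15-cm rotor radius:
for rcf in (230, 900):
    print(f"{rcf} x g -> {rpm_for_rcf(rcf, 15):.0f} rpm")
```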
Fluorescence microscopy - Freshly isolated membranous structures were processed for immunofluorescence microscopy as previously described (Winograd & Sherman 1989). Rabbit anti-erythroid spectrin was kindly provided by Dr M Wasserman, and monoclonal antibody 8E7/55 against the parasitophorous vacuolar antigen QF116 was a kind gift from Dr AR Hibbs. Staining with merocyanine 540 was essentially carried out as previously reported (Kara et al. 1988). Membranous structures were labeled with quinacrine by incubation in a 0.1% (w/v) quinacrine solution in Hepes-buffered saline (HBS; 10 mM Hepes buffer pH 7.40 containing 0.15 M NaCl) for 10 min at 37°C. After washing in HBS, cells were examined by fluorescence microscopy. As a control, live infected erythrocytes were labeled with quinacrine at different stages of development. In every instance, infected erythrocytes presented a strong fluorescence signal. Erythrocytes were surface labeled with biotin (Simpson et al. 1981) and infected with P. falciparum as described above. Membranous structures were isolated when parasites reached the ring stage of development. Membrane-bound biotin was detected by fluorescence microscopy after incubation in streptavidin-fluorescein.
Electron microscopy - Membranous structures were purified by discontinuous Percoll gradients as described above, and immediately fixed for 1 hr in 2% glutaraldehyde in PBS at room temperature. Membranous structures were then processed either for transmission (Winograd & Sherman 1989) or scanning electron microscopy (SEM) (Gruenberg et al. 1983), and observations were carried out either on a Philips transmission electron microscope Model CM10 or a Philips scanning electron microscope Model 515.
RESULTS
To learn more about the process of merozoite release from malaria-infected erythrocytes, we carried out a characterization of the membranous structures found in the culture medium of P. falciparum. The number of membranous structures in the culture medium increased every 48 hr (not shown), coincident with the time of merozoite release (i.e. the temporal interphase between the schizont and ring stages), suggesting that these structures are implicated in the process of merozoite release from the infected erythrocyte.
Observations by phase-contrast microscopy showed that membranous structures have a morphology and dimensions similar to those of an infected erythrocyte, including the presence of a residual body containing dense material (Fig. 1A, C, E).
By immunofluorescence microscopy, the outermost membrane reacted with antibodies directed against erythroid spectrin (Fig. 1B). Furthermore, when the erythrocyte membrane was labeled with biotin prior to infection, streptavidin-conjugated fluorescein bound only to the outermost membrane of the structures (results not shown). These results suggest that the external membrane of the membranous structures originates from the infected erythrocyte.
Membranous structures could be labeled with merocyanine 540 (Fig. 1D), a reagent previously shown to interact with the parasitophorous vacuolar membrane (Elford et al. 1985). In addition, membranous structures reacted with the monoclonal antibody 8E7/55 (Fig. 1F), whose specificity has been demonstrated to be against the parasitophorous vacuolar antigen QF116 (Kara et al. 1988). These results indicate that the parasitophorous vacuolar membrane is also an important component of membranous structures.

When membranous structures were incubated with quinacrine, no fluorescent signal was detected, suggesting that membranous structures are devoid of DNA.

Based on the evidence presented above, membranous structures contain subcellular compartments expected to be present in a structure that arises as a consequence of merozoite release from the infected cell, i.e. they are composed of compartments not directly involved in the differentiation of merozoites (the red cell membrane, the parasitophorous vacuolar membrane and a residual body containing dense material).

Observations using transmission electron microscopy showed that membranous structures are composed of a surface membrane with associated knobs or excrescences and a residual body containing electron-dense material (Fig. 2A). Vesicles or tubules, similar to membrane-bound structures described in schizont-infected erythrocytes (Langreth et al. 1978, Atkinson & Aikawa 1990), were found within the space defined by the outermost membrane and the residual body (Fig. 2A). In addition, larger membrane-bound structures (0.7 to 1.4 µm), appearing to derive from the surface membrane, contain electron-dense material similar to that present in the residual body (Fig. 2A). In other microscopic sections, the red cell membrane and the membrane surrounding the residual body appear to be continuous (Fig. 2B), and occasionally a well-defined structure extending from the erythrocyte membrane to the residual body was observed (Fig. 2C). These results indicate that the former parasite compartment and the extracellular environment communicate through a novel duct-like structure.
Scanning electron microscopic observations of the membranous structures showed the presence of an opening which caves in, appearing to lead to a duct-like structure (Fig. 2D, E, F).
DISCUSSION
In this study, membranous structures were identified in the culture medium of P. falciparum-infected erythrocytes. The time of appearance of these membranous structures was found to be correlated with the event when merozoites are released from infected erythrocytes. Membranous structures were found to be composed primarily of the erythrocyte membrane, the parasitophorous vacuolar membrane, and a residual body containing dense material. Since these subcellular compartments do not participate in the cellular differentiation of merozoites, membranous structures could correspond to cellular structures arising periodically in the culture medium as a consequence of merozoite release from infected erythrocytes.
Ultrastructural studies demonstrated the presence of a membrane extending from the former parasite compartment to the surface membrane, forming an apparent duct-like structure. The fact that these structures are not detectable on schizont-infected erythrocytes either by transmission electron microscopy (Langreth et al. 1978, Atkinson & Aikawa 1990) or SEM (Gruenberg et al. 1983) suggests that the formation of such membrane modifications might be functionally related to the release of merozoites from infected erythrocytes.
In a previous study, a parasitophorous duct on erythrocytes infected with mature stages of the parasite, through which macromolecules present in the external culture medium could reach the intraerythrocytic parasite, was proposed (Pouvelle et al. 1991). More recently, however, evidence for a functional duct was not found (Hibbs et al. 1997).
Although the exact role of the parasitophorous duct is not currently understood, we would like to suggest that the duct identified in this study might play a role in the release of merozoites from infected erythrocytes (Fig. 3). The model of parasite release assumes that the parasite compartment within the infected cell becomes topologically continuous with the extracellular environment as a result of a membrane fusion event.
It is proposed that membrane fusion occurs at sites where the parasitophorous vacuolar membrane and the erythrocyte plasma membrane are in close proximity. Such places could be regions where vesicles and large tubules derived from the parasitophorous vacuolar membrane extend to the erythrocyte membrane in the form of a membranous network (Helmendorf & Haldar 1993, Elford & Ferguson 1993). Since protease inhibitors are known to interfere with the release of parasites (Banyal et al. 1981, Hadley et al. 1983, Lyon & Haynes 1986), the fusion event might be preceded by cleavage of selected red cell membrane-skeleton components. The fusion of the two membranes would result in the formation of a duct-like structure whose dimensions permit merozoites to move out of the host cell by random diffusion.
This model of merozoite release in P. falciparum-infected erythrocytes would implicate an elaborate cellular machinery, where merozoite release and the formation of a duct-like structure could be highly regulated and coordinated with specific events of the parasite cell cycle. Studying the molecular components of this machinery could define new strategies for controlling infections in individuals afflicted with malaria. These findings might also provide a conceptual framework to investigate whether evolutionarily related organisms have evolved similar mechanisms for parasite escape.
Fig. 1: Immunofluorescence microscopy of membranous structures. (A), (C) and (E) are phase-contrast microscopy images; (B), (D) and (F) are the corresponding fluorescence microscopy photographs. In (B) the reaction with anti-erythroid spectrin is shown. In (D) merocyanine 540 staining and in (F) monoclonal antibody 8E7/55 are used to stain the parasitophorous vacuolar membrane. Note that the staining pattern generated either with merocyanine 540 or monoclonal antibody 8E7/55 is different from that with anti-erythroid antibodies: the former is represented by a more internal reaction compatible with the distribution of the parasitophorous vacuolar membrane, while the latter appears to correspond with the surface membrane. Bars = 2 µm.
Fig. 2: Electron microscopy of membranous structures. In (A), (B) and (C) transmission electron microscope images are shown; (D), (E) and (F) were obtained using a scanning electron microscope. In (A) small vesicles or tubules are shown by short arrows, and a larger membrane-bound structure (longer arrow), containing electron-dense material similar to that in the residual body (star), is also observed. In another section, the red cell membrane and the membrane surrounding the internal body converge at a common point (arrow) (B). In (C) the surface membrane and the membrane surrounding the residual body are continuous. Different views of membranous structures under the scanning electron microscope show a large cavity (D), (E) and (F). Bars = 0.5 µm.
Note
NITROGEN-15 LABELING OF Crotalaria juncea GREEN MANURE
Most studies dealing with the utilization of 15N-labeled plant material do not present details about the labeling technique. This is especially relevant for legume species, since biological nitrogen fixation makes plant enrichment difficult. A technique was developed for labeling leguminous plant tissue with 15N to obtain labeled material for nitrogen dynamics studies. Sun hemp (Crotalaria juncea L.) was grown on a Paleudalf under field conditions. An amount of 58.32 g of urea with 70.57 ± 0.04 atom % 15N was sprayed three times on plants grown on eight 6-m² plots. The labeled material presented 2.412 atom % 15N in a total dry matter equivalent to 9 Mg ha⁻¹. This degree of enrichment enables the use of the green manure in pot or field experiments requiring 15N-labeled material.
INTRODUCTION
The use of the stable isotope 15N can help to identify nitrogen sources and is important for research on nitrogen dynamics in the soil-plant system. The 15N labeling of green manures allows the determination of the amount of the nutrient in the soil and in the subsequent crop derived from the green manure, which is only feasible with the use of isotopic methods. A high degree of labeling of legumes with 15N is complicated, since these plants usually obtain a significant part of their N from the air, through biological N fixation from either soil or inoculated bacteria. Most papers on 15N do not explain how the leguminous plant material was labeled. Ambrosano et al. (1997) established a 15N labeling technique for legumes growing in a greenhouse, and obtained a dried material with 3.177 and 4.337 atom % 15N, for velvet bean and sun hemp, respectively. Ambrosano (1995), using the techniques later described by Ambrosano et al. (1997) for velvet bean and sun hemp, determined that 60 to 80% of plant nitrogen remained in the soil, 20 to 30% was absorbed by corn plants, and 5 to 15% was lost from the soil-plant system. Azam et al. (1985) investigated the incorporation of Sesbania aculeata residues labeled with 0.617 atom % 15N excess, and determined that only 5% of the N from the legumes was absorbed by the corn plants. In the balance calculated by the authors, losses were around 5% when only Sesbania was applied. However, the low labeling levels used by Azam et al. (1985) affected the results. The accuracy of isotope assays is directly proportional to the labeling level, i.e., the lower the labeling level, the lower the accuracy. For more accurate and reliable results (Bartholomew, 1965), a labeling level of at least 2 atom % 15N excess is needed.
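For orientation, the Python snippet below shows the standard isotope-dilution bookkeeping behind figures such as "2 atom % 15N excess": excess is enrichment above natural 15N abundance (taken here as the conventional 0.3663 atom %), and the fraction of plant N derived from a labeled fertilizer (%Ndff) follows from the ratio of excesses. This is the textbook calculation, not one reported by the authors; the example values are taken from this paper.

```python
# Natural abundance of 15N (atom %); standard value used in tracer studies.
NATURAL_ABUNDANCE = 0.3663

def atom_percent_excess(atom_percent):
    """Atom % 15N excess over natural abundance."""
    return atom_percent - NATURAL_ABUNDANCE

def ndff(plant_atom_percent, fertilizer_atom_percent):
    """% of plant N derived from the labeled fertilizer (isotope dilution)."""
    return 100.0 * atom_percent_excess(plant_atom_percent) / \
        atom_percent_excess(fertilizer_atom_percent)

# Example with this paper's values: shoots at 2.412 atom % 15N,
# urea at 70.57 atom % 15N.
print(round(atom_percent_excess(2.412), 3))  # ~2.046 atom % excess
print(round(ndff(2.412, 70.57), 2))          # ~2.91 % of plant N from urea
```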
Little attention has been given to the effectiveness of green manures in supplying nutrients to crops (Muraoka, 1984). The 15N labeling technique provides more precise information on nitrogen dynamics in the soil-plant system. Once the green manure is labeled, the fate of the nitrogen released from the legume can be traced.
Crotalaria juncea is widely distributed over the tropics. It grows as a shrub, with a straight trunk, and its fibers have high-quality cellulose, adequate for paper and other uses. Crotalaria grows fast and can reach 3.0 to 3.5 m in height, with an average yield ranging from 10 to 15 Mg ha⁻¹ of dry material when sown in the summer. Since it is considered a bad host for gall- and cyst-forming nematodes, it is highly recommended as a green manure. The crop cycle can last 180 days but, when grown as green manure, cutting is suggested at about 120 days, during the peak of flowering (Salgado et al. 1987).
Legumes are important in crop rotations including sugar cane. Crotalaria is usually chosen because of its high biomass production, high biological nitrogen fixation, and capacity for controlling nematode infestation (Mascarenhas et al., 1994). However, field experiments require large amounts of 15N-labeled material, and there is a lack of information on ways of producing 15N-labeled legumes under field conditions. Therefore, the objective of the present study was to establish procedures for the 15N labeling of Crotalaria juncea grown in the field.
MATERIAL AND METHODS
The IAC 1-2 variety of Crotalaria juncea L. was used in this study and was grown with no fertilization in a Paleudalf in Piracicaba, SP, Brazil. Crotalaria was sown (25 seeds per meter) on December 4, 2000 and emerged nine days later. Seeding was delayed to mid-summer to avoid high growth rates, which could lead to higher 15N isotopic dilution.
The experimental site consisted of 12 plots containing 6 rows of Crotalaria, 2 m long, spaced 0.5 m apart. The 6-m² Crotalaria plots were placed in the middle of 140-m² plots which were planned to be subsequently cropped with sugar cane. In an adjacent area, Crotalaria was cultivated in the same manner so that dry matter yields could be periodically assessed without affecting the experiment.
Eight plots were used for labeling the Crotalaria plants and four were left as controls (T1). The eight labeled plots were divided into two groups of four plots, which were meant for different treatments in the sugar cane trial that would follow. Although they received the same amounts of 15N-urea, these plots were referred to as T2 and T3. Initially, 5 applications of 15N-labeled urea were planned, but only 3 applications were made because of the fast growth rate of Crotalaria, whose very fast isotopic dilution could negatively affect the 15N enrichment.
Urea (58.32 g), with 70.57 ± 0.04 atom % 15N, was used to label the 8 Crotalaria plots. For the first application, 11.66 g of urea were diluted in 1000 mL of water and exactly 125 mL of this solution were sprayed on each plot. For the second and third applications, 2000 mL of solution, with the same urea concentration, were prepared, using 250 mL per plot. The 15N-urea solutions were sprayed 29, 59 and 74 days after plant emergence. The first application was made with a small spray bottle (Dompel brand, 350-mL capacity) because the 29-day-old plants (0.45 m high) presented a relatively small leaf area. For the other two applications, a garden sprayer (Bruden brand, 4-L capacity) was used. During the foliar spray of urea, the soil and borders of the plots were covered with plastic sheets to avoid contamination of the soil and the surrounding plants. The dates of urea application as well as the height and plant mass in each period are shown in Table 1.
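As a quick consistency check on these amounts, the sketch below totals the urea sprayed per plot over the three applications and the corresponding N dose, assuming urea's standard N mass fraction of about 46.6% (an assumption; the paper does not state it). The computed total across the eight plots recovers the 58.32 g of labeled urea reported above.

```python
# Solution concentration stated above: 11.66 g urea per 1000 mL of water.
UREA_G_PER_L = 11.66
UREA_N_FRACTION = 0.466  # g N per g urea, standard value for CO(NH2)2

ml_per_plot = [125, 250, 250]  # volumes sprayed per plot, applications 1-3
urea_per_plot = sum(UREA_G_PER_L * ml / 1000 for ml in ml_per_plot)
n_per_plot = urea_per_plot * UREA_N_FRACTION

print(f"urea per plot:  {urea_per_plot:.2f} g")     # ~7.29 g
print(f"N per plot:     {n_per_plot:.2f} g")        # ~3.40 g
print(f"total urea (8): {8 * urea_per_plot:.1f} g") # ~58.3 g, cf. 58.32 g
```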
Crotalaria biomass production was evaluated in 1 m² of an adjacent area grown with Crotalaria with no urea spray. One week after 15N-urea was sprayed, two plants per plot of the experiment were sampled, separated into shoot and root, and analyzed for total N and 15N concentration. The Crotalaria plants were harvested at the flowering stage, 79 days after emergence. Shoot and root were analyzed separately. Roots were washed, dried under shade, and weighed. Plant shoot and root, as well as dead leaves collected from the ground, were oven-dried (60°C) for the determination of dry mass, N content and 15N abundance. Nitrogen content was determined by the micro-Kjeldahl digestion-distillation method (Bremner & Mulvaney, 1982) and 15N by mass spectrometry, using the sample preparation described by Trivelin et al. (1973). Plant chemical analysis for determination of macro- and micronutrients was performed according to Bataglia et al. (1983).
After harvesting the Crotalaria plants, soil was sampled at the 0-20 cm and 20-40 cm depths for fertility analysis. Two composite samples were assembled with the soil of the 8 plots treated with 15N-urea and the 4 control plots, respectively.
RESULTS AND DISCUSSION
The degree of labeling of the Crotalaria plants increased with time due to successive 15N-urea applications. The plant samples collected 7 days after the first urea spray had 15N concentrations of 0.657 atom % for shoot and 0.875 atom % for root. The corresponding values for the plants at flowering, after three urea applications, were 2.412 and 1.644 atom % 15N, with dry matter yields of about 9.1 and 1.0 Mg ha⁻¹ for shoot and root, respectively (Table 1). The results indicate that the 15N enrichment of the Crotalaria plants was efficient, and even the roots reached a reasonable 15N labeling. Therefore, this field labeling technique seems to be adequate for studies on nitrogen dynamics using plant material that will be added to the soil and undergo isotopic dilution until it is analyzed in the subsequent crop.
Samples of Crotalaria taken 7 days after the first and the second urea applications were thoroughly rinsed with distilled water before 15N determinations. The results of 15N concentration (data not shown) were similar to those obtained with plants that were only oven-dried, indicating that the 15N was rapidly incorporated into the plants and could not be removed by the cleaning procedures used for plant analysis.
A lower labeling level (<2 atom % 15N) can negatively affect accurate determination in isotopic analyses. The dry plant material contained more than 190 kg ha⁻¹ of N. These values can vary greatly: Muraoka et al. (2002) observed a variation from 149 to 362 kg ha⁻¹ N, affected by the dry matter production but not by the N contents, which were similar.
Table 2 shows the data on yield and N concentration of the Crotalaria at the flowering stage, when it was cut to be used as green manure. The dry matter yields of shoot and root, as well as the N concentrations, of the plants sprayed with 15N-urea were similar to those of the control plots, suggesting that the labeling technique was adequate for labeling the Crotalaria plants without changing the N content of the plant. Dead leaves that fell from the plants along the growing cycle represented a small part of the total dry matter produced, but were highly enriched with 15N (Table 2).
The nutrient contents shown in Table 3 are similar to those found in field experiments with Crotalaria by Tanaka et al. (1992) and Muraoka et al. (2002), although the Mn and Fe values are a little higher because the parent material of the Paleudalf presents high contents of those nutrients. The results also indicate that the foliar spray of small amounts of 15N-labeled urea did not change the macro- and micronutrient contents of the plants.
Table 2 - Fresh and dry mass and N content of different Crotalaria plant parts at harvesting. Results are means of four replicates. ¹Dead leaves collected on the ground. ²T1 = control (Crotalaria with no 15N); T2 and T3 = treatments with 15N-labeled Crotalaria.
Table 1 - Application dates of 15N-containing urea, and characterization of Crotalaria shoot and root performed seven days after fertilizer application. ¹Only leaves were analyzed for labeling control. ²Harvest date.
Table 3 - Macro- and micronutrient contents and C/N ratio in different plant parts of Crotalaria sampled at harvesting. ¹Dead leaves collected on the ground. ²T1 = control (Crotalaria with no 15N); T2 and T3 = treatments with 15N-labeled Crotalaria.
Histone deacetylase HDAC4 participates in the pathological process of myocardial ischemia-reperfusion injury via MEKK1/JNK pathway by binding to miR-206
Histone deacetylases (HDACs) and microRNAs (miRs) have been reported to exert pivotal roles in the pathogenesis of myocardial ischemia-reperfusion injury (MIRI). Therefore, the present study was performed to define the underlying role of HDAC4 and miR-206 in the pathological process of MIRI. An IRI rat model was established. The interaction between HDAC4 and the promoter region of miR-206 was determined using ChIP, and that between miR-206 and mitogen-activated protein kinase kinase kinase 1 (MEKK1) was determined using a dual-luciferase reporter gene assay. After loss- or gain-of-function assays in cardiomyocytes, western blot analysis, RT-qPCR, TUNEL, and ELISA assays were performed to define the roles of HDAC4, miR-206, and MEKK1. Up-regulation of HDAC4 and down-regulation of miR-206 occurred in rat myocardial tissues and cardiomyocytes in MIRI. HDAC4 down-regulation or miR-206 up-regulation contributed to reduced cell apoptosis and reduced levels of tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), and malondialdehyde (MDA), while elevating the superoxide dismutase (SOD) and glutathione (GSH) contents. Meanwhile, HDAC4 silencing promoted the expression of miR-206, which targeted and negatively regulated MEKK1; the ensuing inhibition of JNK phosphorylation reduced cardiomyocyte apoptosis to alleviate MIRI. Collectively, HDAC4 silencing could up-regulate the expression of miR-206 to reduce cardiomyocyte apoptosis and inhibit oxidative stress, thereby exerting a protective effect on MIRI via the MEKK1/JNK pathway.
INTRODUCTION
Myocardial ischemia-reperfusion injury (MIRI), a manifestation of cardiomyocyte apoptosis induced by ischemia-reperfusion (IR), is a crucial cause of myocardial damage and subsequent heart failure, resulting in high morbidity and mortality worldwide [1]. The innate immune system and the following inflammatory responses are of vital significance during the process of myocardial damage extension [2]. Currently, the clinically adopted management protocols for MIRI include ischemic pre-conditioning, pharmacological intervention, and physical interventions like hypothermia or electrical stimulation [3]. However, the advent of MIRI is inevitable because reperfusion remains the only established therapeutic modality for acute myocardial infarction to date, which calls for the development of other effective strategies to reduce IRI [4]. Hence, the current study was devised to explore a novel therapeutic target to alleviate MIRI at a molecular level.
Histone deacetylases (HDACs) serve as critical modulators for mediating myocardial protection and the survival of cardiomyocytes [5]. More notably, the inhibition of histone deacetylase 4 (HDAC4) has been demonstrated to confer significant cardioprotective effects against hypoxic injury [6]. Also, HDAC4 down-regulation functions as a critical stimulant for myocardial repair [7]. On the other hand, a study has demonstrated that HDAC4 is a target of microRNA-206 (miR-206) in the regulation of myogenic differentiation [8]. Moreover, the specific genetic silencing of HDAC4 leads to up-regulation of miR-206 in rhabdomyosarcoma, an aggressive soft-tissue cancer characterized by disturbed myogenic differentiation [9], indicating that miR-206 is specifically regulated by HDAC4. Meanwhile, HDAC4 accumulation prevents the hypertrophy of myogenic cells triggered by miR-206 inhibition [10]. Inherently, miRNAs are defined as a large group of post-transcriptional regulators targeting approximately 30% of human protein-coding genes [11], which essentially function in IRI by altering crucial elements of multiple pathways detrimental to the fate of IRI [12-14]. Furthermore, miR-206 overexpression has also been shown to possess the ability to attenuate MIRI in rats [15]. In addition, miR-206 is implicated in the regulation of skeletal muscle differentiation via suppression of multiple factors of the c-Jun N-terminal kinase (JNK)/mitogen-activated protein kinase (MAPK) pathway, such as mitogen-activated protein kinase kinase kinase 1 (MEKK1) and MAP kinase kinase 7 [16], which is highly suggestive of a novel regulatory mechanism involving HDAC4, miR-206, and the MEKK1/JNK pathway. Meanwhile, MEKK1 is a protein kinase activated by mitogens and has been demonstrated to be implicated in cardiac remodeling [17]. JNK is a protein kinase that can be activated by stress or mitogens, and involvement of the mitochondrial JNK pathway has been indicated in ischemic myocardial dysfunction [18]. Of note, JNK activation has also been implicated in cardiac IRI by previous studies [19]. Currently available JNK inhibitors hold great therapeutic potential for the treatment of cerebral and myocardial IRI considering their cardioprotective and neuroprotective properties [20]. Meanwhile, the suppression of MEKK1 and JNK can also confer protective effects against cardiac hypertrophy and heart failure [21]. Conjointly, the existing evidence suggests that repression of HDAC4 can exert crucial cardioprotective effects via regulation of the miR-206-mediated MEKK1/JNK pathway during the process of MIRI. Consequently, the current study was conducted to confirm the aforementioned hypothesis and define the underlying molecular mechanisms of the HDAC4/miR-206/MEKK1/JNK axis in the pathological process of MIRI.
RESULTS
HDAC4 was up-regulated in cardiomyocytes after MIRI and HDAC4 silencing could alleviate myocardial injury in vivo

To explore the role of HDAC4 in IRI, we first established an IRI rat model. Analysis of the myocardial tissues with 2,3,5-triphenyltetrazolium chloride (TTC) staining (Fig. 1A) demonstrated that the infarct area within the area at risk (AAR) was much larger in the IRI rats than in the sham-operated rats, while the percentage of infarct area in the AAR was reduced in short hairpin RNA (sh)-HDAC4-treated IRI rats compared to IRI rats. In addition, hematoxylin-eosin (H&E) staining illustrated that the sham-operated rats presented with neatly and compactly arranged cardiomyocytes, evenly stained cytoplasm, uniform nucleus size, and neat myocardial fibers without obvious fracture. However, in IRI rats the cardiomyocytes were disorderly arranged with hypertrophy, myocardial fibers showed rupture and dissolution, and interstitial collagen accumulation was accompanied by myocardial fibrosis and necrosis, which indicated that myocardial ischemia induced these myocardial morphological changes. Obvious improvements in myocardial injury were noted in IRI rats treated with sh-HDAC4 (Fig. 1B).
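As context for how such TTC results are typically quantified, the minimal sketch below expresses infarct size as the unstained area divided by the area at risk, using hypothetical boolean masks from image segmentation. The paper does not describe its own quantification pipeline, so this is an illustration of the common convention only.

```python
import numpy as np

def infarct_percent(infarct_mask: np.ndarray, aar_mask: np.ndarray) -> float:
    """Infarct size as a percentage of the area at risk (AAR)."""
    return 100.0 * infarct_mask.sum() / aar_mask.sum()

# Hypothetical segmentation masks: AAR region and pale (unstained) infarct.
aar = np.zeros((100, 100), bool); aar[20:80, 20:80] = True
infarct = np.zeros_like(aar); infarct[30:60, 30:60] = True
print(f"infarct / AAR = {infarct_percent(infarct, aar):.1f}%")  # 25.0%
```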
The results of the terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) assay showed that, compared with the sham-operated rats, a high proportion of apoptotic cardiomyocytes was observed in IRI rats, while silencing HDAC4 reduced cell apoptosis in IRI rats (Fig. 1C). It has been well documented that cytokines, including tumor necrosis factor-alpha (TNF-α) and interleukin-6 (IL-6), are associated with the pathophysiology of cellular dysfunction in IRI [22]. Accordingly, enzyme-linked immunosorbent assay (ELISA) was conducted to investigate the levels of IL-6 and TNF-α in the serum samples of rats from each group. The results revealed significantly elevated serum levels of IL-6 and TNF-α in IRI rats, while HDAC4 silencing notably reduced the serum levels of these proinflammatory cytokines in IRI rats (Fig. 1D). Research has also suggested an association between oxidative stress subsequent to IRI and cardiomyocyte death during IRI [23].
In order to investigate whether HDAC4 silencing affects oxidative stress, we evaluated the expression levels of antioxidant enzymes and the content of malondialdehyde (MDA). The contents of superoxide dismutase (SOD) and glutathione (GSH) were drastically decreased, whereas the MDA content was increased, in the IRI rats compared with the sham-operated rats. Additionally, HDAC4 silencing reversed these findings in rats with IRI (p < 0.05) (Fig. 1E). Western blot analysis revealed down-regulated expression of HDAC4 in sham-operated rats treated with sh-HDAC4 as compared with sham-operated rats and sham-operated rats treated with sh-negative control (NC). The expression of HDAC4 was up-regulated in IRI rats relative to sham-operated rats (p < 0.05) and was reduced in IRI rats treated with sh-HDAC4 (p < 0.05; Fig. 1F). Moreover, up-regulated pro-apoptotic factor (B-cell lymphoma-2-associated protein X (Bax) and cleaved Caspase-3) expression and down-regulated anti-apoptotic factor B-cell lymphoma-2 (Bcl-2) expression were observed in rats with IRI. HDAC4 silencing reduced Bax and cleaved Caspase-3 expression and elevated Bcl-2 expression in the IRI rats (p < 0.05; Fig. 1G). Taken together, these data supported that HDAC4 was robustly induced in IRI and that its silencing essentially mitigated myocardial injury.

Fig. 1 HDAC4 is up-regulated in rat myocardial tissues after MIRI and its silencing alleviates myocardial injury in vivo. A Analysis of the myocardial injury area of rats by TTC staining (n = 5 rats per group). B Representative images of the myocardial injuries evaluated by H&E staining (n = 10; scale bar = 50 μm). C Apoptosis of cardiomyocytes detected by TUNEL assay (n = 10; scale bar = 50 μm). D Serum levels of IL-6 and TNF-α in peripheral blood of rats measured by ELISA (n = 10). E Determination of SOD, GSH, and MDA contents in rat myocardial tissues (n = 10). F Protein expression of HDAC4 in rat myocardial tissues detected by western blot analysis, normalized to GAPDH (n = 10). G Protein expression of Bax, Bcl-2, and cleaved Caspase-3 in rat myocardial tissues detected by western blot analysis, normalized to GAPDH (n = 10). * p < 0.05 vs. the sham-operated rats; # p < 0.05 vs. the rats with MIRI. Data are expressed as mean ± standard deviation; comparisons among multiple groups were analyzed by one-way ANOVA followed by Tukey's post hoc test.
HDAC4 silencing alleviates injury of cardiomyocytes from MIRI rats in vitro

In order to further verify the aforementioned in vivo findings, we isolated cardiomyocytes from rats with MIRI. HDAC4 was silenced in the isolated cardiomyocytes, and the silencing efficiency was evaluated using western blot analysis, which revealed a substantially reduced expression of HDAC4 in cardiomyocytes transfected with sh-HDAC4, while the expression of the related class IIa HDACs (HDAC5, HDAC7, and HDAC9) was unchanged (p < 0.05; Fig. 2A). Subsequently, we determined the levels of proinflammatory cytokines and the contents of SOD, GSH, and MDA. In accordance with our in vivo results, HDAC4 silencing reduced the levels of IL-6 and TNF-α (p < 0.05; Fig. 2B), increased the contents of SOD and GSH, and reduced the MDA content in cardiomyocytes (p < 0.05; Fig. 2C). Moreover, flow cytometry demonstrated that silencing HDAC4 considerably diminished cell apoptosis (p < 0.05; Fig. 2D). Western blot analysis also illustrated a reduction in the expression levels of Bax and cleaved Caspase-3 and an increase in the expression level of Bcl-2 in the absence of HDAC4 (p < 0.05; Fig. 2E). These data supported the potential of HDAC4 silencing to mitigate myocardial injury in vitro.
miR-206 is poorly expressed in the cardiomyocytes of rats with MIRI and its up-regulation alleviates injury of cardiomyocytes in vitro

The expression of miR-206 was measured in the cardiomyocytes of rats with MIRI by reverse transcription quantitative polymerase chain reaction (RT-qPCR), which demonstrated a significantly reduced miR-206 expression in IRI rats compared with the sham-operated rats (p < 0.05; Fig. 3A). In order to further define the role of miR-206 in the pathogenesis of MIRI, a gain-of-function study was performed in the cardiomyocytes. Subsequent findings revealed that miR-206 expression was evidently increased in miR-206 mimic-transfected cells, as revealed by RT-qPCR (p < 0.05; Fig. 3A).
Subsequently, ELISA results showed that miR-206 overexpression inhibited the levels of IL-6 and TNF-α (p < 0.05; Fig. 3B), elevated the contents of SOD and GSH, and reduced the content of MDA in the cardiomyocytes (p < 0.05; Fig. 3C). Moreover, cells overexpressing miR-206 were markedly less susceptible to apoptosis compared with the cells expressing NC mimic (p < 0.05; Fig. 3D). In addition, up-regulated miR-206 also brought about decreased Bax and cleaved Caspase-3 expression and up-regulated Bcl-2 expression in the cardiomyocytes (p < 0.05; Fig. 3E). Collectively, miR-206 was poorly expressed in cardiomyocytes after MIRI, and overexpression of miR-206 could alleviate myocardial injury in vitro.
HDAC4 silencing up-regulates miR-206 expression and inhibits activation of the MEKK1/JNK pathway in the cardiomyocytes of rats with MIRI

The results of chromatin immunoprecipitation (ChIP) assay in cardiomyocytes from sham-operated rats and rats with MIRI demonstrated that, owing to the high expression of HDAC4 after MIRI, significantly more HDAC4 was recruited to the promoter region of miR-206 in cardiomyocytes from rats with MIRI than in cardiomyocytes from sham-operated rats (Fig. 4A). miR-206 expression detected by RT-qPCR was increased in cardiomyocytes transduced with sh-HDAC4 (p < 0.05; Fig. 4B). In addition, western blot analysis showed that the protein expression of MEKK1 and the extent of JNK phosphorylation were reduced in cardiomyocytes transfected with sh-HDAC4 (p < 0.05; Fig. 4C).
The interaction between miR-206 and MEKK1 was explored by means of a dual luciferase reporter gene assay, which revealed a decrease in luciferase activity in cardiomyocytes co-transfected with miR-206 mimic and wild-type (wt) MEKK1 (p < 0.05), while no significant change was observed in the luciferase activity of cardiomyocytes co-transfected with miR-206 mimic and mutant (mut) MEKK1 (p > 0.05; Fig. 4D), indicating that miR-206 could target and negatively regulate MEKK1.
Moreover, western blot analysis showed a reduced protein expression of MEKK1 and extent of JNK phosphorylation in response to miR-206 mimic (p < 0.05; Fig. 4E). In addition, miR-206 expression was down-regulated, while the protein expression of MEKK1 and the extent of JNK phosphorylation were elevated, in cardiomyocytes transfected with both sh-HDAC4 and miR-206 inhibitor (p < 0.05; Fig. 4F, G). Taken together, these data suggested that depletion of HDAC4 up-regulated miR-206 expression and obstructed the activation of the MEKK1/JNK pathway in cardiomyocytes from MIRI rats.

Fig. 2 Silencing of HDAC4 protects cardiomyocytes from IRI in vitro. A HDAC4 silencing efficiency and the expression of HDAC5, HDAC7, and HDAC9 in cardiomyocytes evaluated by western blot analysis, normalized to GAPDH. B The levels of inflammatory factors (IL-6 and TNF-α) in cardiomyocytes determined by ELISA. C The levels of SOD, GSH, and MDA in the cardiomyocytes. D Apoptosis of cardiomyocytes detected by flow cytometry. E The expression of Bax, Bcl-2, and cleaved Caspase-3 protein in cardiomyocytes determined by western blot analysis, normalized to GAPDH. * p < 0.05 vs. the cardiomyocytes transfected with sh-NC. Data are expressed as mean ± standard deviation; comparisons between two groups were analyzed by the unpaired t-test. The experiment was conducted three times independently.
HDAC4 silencing restrains cell apoptosis via miR-206-mediated MEKK1/JNK disruption in cardiomyocytes from MIRI rats

To investigate the function of the HDAC4-dependent miR-206/MEKK1/JNK axis, RT-qPCR was performed, which showed that up-regulation of HDAC4 brought about a down-regulated miR-206 expression in the cardiomyocytes, and that miR-206 mimic resulted in elevated miR-206 expression, which was abolished by combined treatment with miR-206 mimic + overexpression (oe)-HDAC4. In the presence of miR-206 mimic, oe-MEKK1 did not affect miR-206 expression (Fig. 5A).
Western blot analysis revealed that, in the presence of oe-HDAC4, miR-206 mimic did not affect HDAC4 expression but reduced MEKK1, Bax, and cleaved Caspase-3 expression and the phosphorylation level of JNK, and elevated Bcl-2 expression in cardiomyocytes.
In addition, in cardiomyocytes overexpressing HDAC4, miR-206 mimic augmented the contents of SOD and GSH and diminished the content of MDA, whereas in cardiomyocytes overexpressing miR-206, oe-MEKK1 triggered decreased contents of SOD and GSH and an increased content of MDA (Fig. 5D). Furthermore, as shown in Fig. 5E, combination treatment with oe-HDAC4 and miR-206 mimic reversed the accelerated cell apoptosis induced by HDAC4 up-regulation, while combination treatment with miR-206 mimic and oe-MEKK1 increased cell apoptosis. These findings suggested that, during IRI, up-regulated HDAC4 reduced miR-206 expression, which consequently activated the MEKK1/JNK pathway and ultimately drove cardiomyocytes from MIRI rats to apoptosis.

Fig. 3 (caption fragment) E Protein expression of Bax, cleaved Caspase-3, and Bcl-2 in cardiomyocytes determined by western blot analysis, normalized to GAPDH. * p < 0.05 vs. the normal cardiomyocytes or the cardiomyocytes transfected with NC mimic. Data are expressed as mean ± standard deviation; comparisons between two groups were analyzed by the unpaired t-test. The experiment was conducted three times independently.
DISCUSSION
MIRI is a severe cardiovascular condition with a highly intricate pathogenic mechanism that can be affected by multiple factors such as cytokines, chemokines, growth factors, free radical damage, and calcium overload [24-26]. Existing research has revealed the potential cardioprotective benefits of HDAC inhibitors in MIRI [27,28], thereby providing promising therapeutic approaches for this cardiovascular disease. The current study set out to investigate the explicit function and molecular mechanism of HDAC4 in MIRI, and the obtained results suggested that silencing of HDAC4 up-regulated the expression of miR-206, which inhibited the MEKK1/JNK pathway, thereby inhibiting the apoptosis of cardiomyocytes and alleviating MIRI.
Firstly, the findings obtained in the current study revealed that HDAC4 was highly expressed in both myocardial tissues and cardiomyocytes following MIRI. Consistently, up-regulated HDAC4 levels have been documented in oligodendrocyte progenitor cells in studies based on rat models of ischemic stroke [29,30]. HDACs are also known to function as critical modulators of myocardial protection and cardiomyocyte survival [31]. Adding to this knowledge of HDACs, our study demonstrated that HDAC4 silencing facilitates the improvement of IRI-induced infarction and the resultant myocardial injury both in vivo and in vitro. In line with this, myocyte-specific overexpression of activated HDAC4 in mice aggravates MIRI, substantiated by a reduction in ventricular functional recovery and an increase in infarct size following IRI [32]. On the other hand, HDAC4 inhibition has been confirmed as an imperative stimulator of regeneration and restoration of cardiac function [6]. Inhibition of HDAC4 can improve cardiac function and reduce myocardial infarction in mice suffering from ischemia-induced heart failure [33]. Additionally, HDAC4 deficiency can attenuate Ang II-induced cardiac hypertrophy in cardiomyocytes [34], as well as reduce cardiac fibrosis in juvenile rats with overload-induced ventricular hypertrophy [35].
Furthermore, our findings highlighted that miR-206 was down-regulated in the cardiomyocytes of rats with MIRI, which is consistent with the findings of other groups [15]. We further explored the function of miR-206 in MIRI and found that up-regulation of miR-206 confers protection against myocardial injury, evidenced by decreased levels of IL-6, TNF-α, and MDA and increased contents of SOD and GSH. miR-206 overexpression protects against cardiomyocyte apoptosis in vitro and in vivo in rodent models of myocardial infarction and IRI [36,37]. IL-6 and TNF-α are widely acknowledged as inflammatory indicators in rats suffering from cardiovascular disorders [38]. An existing study has documented that miR-133b attenuates TNF-α and IL-6 release, thus alleviating myocardial injuries [39]. Moreover, another research group identified the down-regulation of IL-6 and TNF-α as a sign of amelioration of heart disease [24]. Overexpression of miR-206 can counteract the hypoxia-induced myocardial injury aggravated by the long noncoding RNA RMRP [36]. MDA is one of the most widespread biomarkers of oxidative stress and has undergone extensive investigation [40,41]. SOD, a ubiquitous enzyme, principally protects tissues against oxidative stress via the breakdown of superoxide radicals [42]. GSH also functions critically in antioxidant defense and serves as a vital regulator of pathways essential for maintaining body homeostasis [43]. More importantly, decreased levels of TNF-α, IL-6, and MDA, along with increased SOD and GSH levels, have previously been documented following alleviation of MIRI by oleuropein treatment [44]. Collectively, these findings support the inhibitory role of miR-206 overexpression in MIRI.

Fig. 4 (caption fragment) … inhibitor measured by western blot analysis, normalized to GAPDH. * p < 0.05 vs. the cardiomyocytes transfected with sh-NC, NC mimic, NC inhibitor, or sh-HDAC4 + NC inhibitor. Data are expressed as mean ± standard deviation; comparisons between two groups were analyzed by the unpaired t-test, and comparisons among multiple groups by one-way ANOVA followed by Tukey's post hoc test. The experiment was conducted three times independently.
An existing study reported that HDAC inhibition can bring about rapid alterations in miRNA levels [45], and HDAC4 has been documented to negatively regulate the expression of miR-200a [46]. Furthermore, HDAC4 is recruited to the miR-206 promoter to repress miR-206 transcription; HDAC4 thereby stimulates the expression of the miR-206 target gene MRTF-A and facilitates fibrogenesis in hepatic stellate cells in a miR-206-dependent manner [47]. Although the inhibition of miR-206 ameliorates ischemia-reperfusion arrhythmia by targeting Cx43 [48,49], we focused in the current study on the effect of the HDAC4/miR-206 axis on IRI, which has not been previously reported. We uncovered a similar targeting relationship between miR-206 and MEKK1. Interestingly, MEKK1 and JNK are radically repressed by miR-206, implicating them in skeletal muscle development [16], and miR-206 regulates cell movements during zebrafish gastrulation through the JNK pathway [47]. Meanwhile, the MEKK1/JNK pathway has been suggested as a contributor to IRI and myocyte apoptosis [50]. In addition, both miR-140-5p overexpression and HDAC4 silencing have been revealed to reduce the apoptosis of cardiomyocytes, thus exerting cardioprotective effects against diabetic cardiomyopathy [51]. Inhibition of HDAC4 also attenuates neuronal apoptosis via reduction of JNK/c-Jun activity during early brain injury following subarachnoid hemorrhage [52]. Together, our findings, in conjunction with existing evidence, indicate that HDAC4 down-regulation can augment miR-206 expression and subsequently inactivate the MEKK1/JNK pathway, leading to repression of cardiomyocyte apoptosis and further attenuation of IRI.
In conclusion, the findings obtained in the current study demonstrated that HDAC4 silencing could up-regulate the expression of miR-206 and inhibit the activation of the MEKK1/JNK pathway, thereby reducing cardiomyocyte apoptosis, inhibiting oxidative stress, and exerting a protective effect against MIRI (Fig. 6). Our study validates the cardioprotective effect of miR-206 up-regulation, which may be a promising viable target for MIRI treatment. However, since the silencing of HDAC4 was only ~50% relative to the non-targeting controls, analyses with superior silencing efficiency, and of other HDAC isoforms, should be conducted in the future. Also, further studies are warranted to determine whether the HDAC4/miR-206/MEKK1/JNK axis is involved in other types of cell death and to confirm the clinical applicability of this axis in treating MIRI.
MATERIALS AND METHODS

Ethics statement
The current study was conducted with the approval of the animal ethics committee of the Zhengzhou University People's Hospital, Henan Provincial People's Hospital, Central China Fuwai Hospital (protocol No. 20201230c0400510[376]) and performed in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. Extensive measures were undertaken to minimize the number and suffering of the included animals.
Establishment of rat models of IRI
A total of 105 specific pathogen-free male Sprague-Dawley (SD) rats (age: 8-10 weeks; weight: 310-400 g) were included in the current study, among which 45 rats were subjected to sham operation and the remaining 60 to establishment of MIRI models. The heart function of all rats was evaluated using small animal ultrasound (Vevo). The IRI model was established as previously described [53]. Briefly, rats were anesthetized with intraperitoneal injections of 20% urethane and fixed in a supine position, followed by measurement of body temperature, blood pressure, heart rate, and mean arterial pressure. Next, mechanical ventilation was performed after tracheal intubation. An incision was then made about 0.5 cm from the left edge of the sternum between the third and fourth intercostal spaces on the left side of the ribcage. After dissection and retraction of the pectoral muscle, the third intercostal space was exposed, and thoracotomy was performed to expose the heart. Afterwards, the left anterior descending (LAD) coronary artery was identified and ligated using a 5-0 silk suture. Pallor of the myocardium followed abruptly by cyanosis, together with ST-segment elevation on the electrocardiograph, was regarded as an indicator of successful myocardial ischemia. In sham-operated rats, the 5-0 silk suture was passed beneath the LAD coronary artery without ligation. After 30 min of ligation, the suture was released and reperfusion was monitored for 2 h; the reperfusion model was considered successfully established when the ST segment descended or the QRS wave progressively reverted to normal. The tissues were harvested immediately after 24 h of reperfusion. Nine rats died during the modeling process, and modeling failed in three rats; the success rate of modeling was calculated to be 76.67%, and 45 rats were chosen for subsequent experimentation, with 5 rats in each group used for TTC staining. Heart tissues of rats in each group were extracted and sliced into eight transverse sections (2 mm each), which were then stained with 0.75% TTC solution. The infarct size was determined by experienced, blinded researchers using the SigmaScan Pro5 software for planimetric measurement. The planimetrically determined infarct size of each section was normalized to section weight, and the values were summed across sections and averaged.
Isolation of primary cardiomyocytes
Normal rats and MIRI-modeled rats were intraperitoneally injected with heparin (5000 U/kg). After 15-20 min, the rats were intraperitoneally anesthetized with 1% pentobarbital sodium. Next, the heart along with the aortic arch was isolated, immersed in ice-cold (4 °C) calcium-free solution, and perfused with the Langendorff system as per a previously reported method [54]. After the heart became soft, the aortic arch was removed and the heart was transferred to collagenase type 1 solution (SCR103, Sigma-Aldrich, St Louis, MO, USA). The ventricular tissues were dissected into small sections (1 mm × 1 mm × 1 mm) and resuspended using a pipette with a blunt-end tip. The tissues were then filtered through a cell strainer (75 μm mesh) and maintained at room temperature for 10-15 min. The precipitates, considered to be cardiomyocytes, were resuspended at room temperature for 10 min. The aforementioned steps were repeated three times to isolate enriched cardiomyocytes. Afterwards, the cells were plated in culture dishes pre-coated with mouse laminin at a density of 0.5-1 × 10⁴ cells/cm².
Cell treatment and in vivo injection
The plasmids of miR-206 mimic, miR-206 inhibitor, overexpression (oe)-MEKK1, and oe-HDAC4 were provided by Shanghai Genechem Co., Ltd. (Shanghai, China). The lentivirus harboring shRNA against HDAC4 (sh-HDAC4) was constructed by Suzhou GenePharma Co., Ltd. (Suzhou, Jiangsu, China). The cardiomyocytes of MIRI rats or normal rats were transferred to a 6-well plate for 24 h and then transfected with the corresponding plasmids. Next, 250 μL Dulbecco's modified Eagle's medium (DMEM; Gibco, Carlsbad, CA, USA) was used to dilute 8 μL of the HDAC4 interference sequence and 6 μL of Lipofectamine 2000; the mixture was allowed to stand for 20 min and then added to the culture well. After gentle mixing, culture was continued in a 5% CO₂ incubator at 37 °C, and the medium was replaced with DMEM containing 10% fetal bovine serum (FBS) and penicillin/streptomycin after 8 h. The cells were collected 36 h after transfection. For in vivo injection, the recombinant lentivirus specifically targeting cardiomyocytes and harboring sh-HDAC4 or sh-NC (10⁸ pfu/mL per rat, with normal saline as solvent) was intramyocardially injected into the rats. The tissues were obtained immediately after 24 h of reperfusion.
H&E staining
Heart tissues were dissected into small sections and fixed with 10% neutral formalin. Next, the fixed tissues were paraffin-embedded and sectioned. The tissue sections were subjected to H&E staining for histological analysis as previously described [55]. Each section was observed under an optical microscope (XP-330, Shanghai Bingyu Optical Instrument Co., Ltd., Shanghai, China) in a double-blind manner. Three random visual fields were selected to evaluate myocardial congestion, hemorrhage, fibrosis, necrosis, and degeneration. The scoring criteria were as follows: 0 indicated no lesion; 0-1 indicated lesions in less than 1/4 of the designated area; 1-2 indicated lesions in approximately 1/4-1/2 of the designated area; 2-3 indicated lesions in approximately 1/2-3/4 of the designated area; and 3-4 indicated lesions in more than 3/4 of the designated area.
TTC staining
The 5-mm myocardial sections from heart samples of five rats in each group were placed in 1% TTC solution (AMRESCO, USA) and incubated at 37 °C in the dark for 10 min. Afterwards, the sections were fixed with 10% formalin for 2 h and observed under a stereoscopic microscope (Zeiss, Germany). The white part was indicative of the infarct area, while the red part was indicative of the non-infarct area. Image-Pro Plus 6.0 software (Media Cybernetics) was used to calculate the infarct area (%) = infarct area / transverse section area × 100%.
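The infarct-area arithmetic above is straightforward to script. The following Python sketch is illustrative only: the function names and example numbers are hypothetical, and the weight-normalization variant reflects one possible reading of the planimetric step described in the model-establishment section.

```python
# Illustrative sketch of the infarct-size calculation described above.
# Inputs are hypothetical per-section planimetric measurements for one
# heart (areas in arbitrary planimetric units, weights in mg).

def infarct_percentage(infarct_areas, section_areas):
    """Infarct area (%) = infarct area / transverse section area x 100,
    pooled over all sections of one heart."""
    return sum(infarct_areas) / sum(section_areas) * 100.0

def weight_normalized_infarct(infarct_areas, section_areas, weights):
    """Per-section infarct fraction normalized to section weight, one
    possible reading of the TTC planimetry step described earlier."""
    fractions = [i / s for i, s in zip(infarct_areas, section_areas)]
    weighted = sum(f * w for f, w in zip(fractions, weights))
    return weighted / sum(weights) * 100.0

# Example with three hypothetical sections of one heart
print(infarct_percentage([4.1, 6.3, 2.2], [18.0, 20.5, 17.2]))  # ~22.6%
```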
TUNEL assay
Frozen sections were fixed with 4% paraformaldehyde for 1 h, blocked with blocking solution for 10 min, and treated dropwise with permeabilizing solution on ice for 5 min. Cell apoptosis was determined using a TUNEL Apoptosis Kit (Roche, Basel, Switzerland). The nuclei were stained with 4′,6-diamidino-2-phenylindole (Sigma-Aldrich). Immunofluorescence was observed under a fluorescence microscope (Carl Zeiss, Jena, Germany), and apoptotic nuclei and total nuclei were counted at ×200 magnification.
ELISA
Blood samples extracted from the orbital sinus were centrifuged at 3500×g, and the serum was collected and stored at −80 °C. Serum levels of IL-6 and TNF-α were measured using murine IL-6 and TNF-α ELISA kits (MSKbio Co., Ltd., Wuhan, Hubei, China) in strict accordance with the manufacturer's instructions.

For the cell experiments, the culture medium was collected after 24 h of culture and centrifuged at 1000×g at room temperature for 10 min, followed by supernatant collection. Subsequently, ELISA was performed in compliance with the provided instructions (MSKbio Co., Ltd., Wuhan, Hubei, China) to determine the levels of TNF-α (69-22452) and IL-6 (69-30490).
Determination of SOD activity, reduced GSH and MDA
Myocardial tissues (125 mm³) from MIRI or normal rats were collected from each experimental group, homogenized in 1 mL PBS, and centrifuged at 10,000×g and 4 °C for 10 min, after which the supernatant was harvested. The protein concentration was measured using a bicinchoninic acid kit (P0011, Beyotime, Shanghai, China). The contents of MDA, SOD, and GSH in the myocardial tissues were determined using MDA (A003-1-2), SOD (A001-3-2), and GSH (A006-2-1) assay kits (Nanjing JianCheng Bioengineering Institute, Nanjing, China), respectively.
Dual luciferase reporter gene assay
Reporter plasmids containing wild-type (wt) MEKK1 or mutant (mut) MEKK1 (NM_005921.2) were provided by Shanghai GenePharma Co., Ltd. (Shanghai, China). Each reporter plasmid was co-transfected with the NC mimic or miR-206 mimic into 293T cells, and the cells were harvested after 48 h of culture.
The luciferase activity was subsequently measured in accordance with the protocols provided with GeneCopoeia's Dual Luciferase Assay Kit (D0010, Beijing Solarbio Science & Technology Co., Ltd., Beijing, China). The fluorescence intensity was then measured using a Promega Glomax 20/20 luminometer (E5311, Zhongmei Biotechnology Co., Ltd., Shaanxi, China). The luminescent signal reflecting activation of the target reporter gene was compared on the basis of the ratio of firefly relative light units (RLU) to Renilla RLU.
ChIP assay
The cardiomyocytes transduced with sh-NC and sh-HDAC4 were collected. When the cell density reached 1 × 10⁶ cells per 10-cm culture dish, the original culture medium was discarded, and the cells were incubated with 1% formaldehyde at 37 °C for 10 min and then with 2.5 mM glycine on ice for 5 min to stop the crosslinking, followed by digestion and centrifugation to obtain the cell pellet. The cells were resuspended in 200 μL sodium dodecyl sulfate (SDS) lysis buffer and placed on ice for 10 min, and the chromatin DNA was fragmented by ultrasound. The lysates were centrifuged at 14,000 rpm and 4 °C for 10 min, after which the supernatant was obtained. After dilution with ChIP dilution buffer containing protease inhibitors, the supernatant was incubated with blocking solution at 4 °C for 30 min. After centrifugation at 1000 rpm and 4 °C for 1 min, a small amount of supernatant served as Input, and the remaining supernatant was incubated overnight at 4 °C with HDAC4 antibody (ab12171, 1:1000, rabbit, Abcam, Cambridge, UK) or NC immunoglobulin G (IgG) (ab172730, 1:1000, rabbit, Abcam). The supernatant was then incubated with cross-linked agarose beads at 4 °C for 1 h to collect the antibody/transcription factor complexes, followed by centrifugation at 1000 rpm and 4 °C for 1 min. After the supernatant was discarded, the complex was eluted. The eluted supernatant and Input DNA were mixed with 20 μL of 5 mol/L NaCl and incubated in a 65 °C water bath for 4 h for de-crosslinking. The DNA was purified and recovered after Proteinase K digestion for protein removal. With the recovered DNA as a template, RT-qPCR was performed to detect the enrichment of DNA binding to the miR-206 promoter. The specific primers for the ChIP assay of the miR-206 promoter were: F: 5′-CTACTTATGCAGCTAGAGATACAAG-3′ and R: 5′-ACTTCCAATAAGTCTTTGACCCATG-3′.
RT-qPCR
Total RNA was extracted using the TRIzol reagent. A polyA tailing detection kit (B532451, Sangon, Shanghai, China; containing universal PCR primer R) was used to obtain the complementary DNA (cDNA) of the miRNA with the polyA tail, in strict accordance with the provided instructions. The primers for miR-206 were designed and synthesized by Takara (Tokyo, Japan) (Table S1). RT-qPCR was performed in triplicate using the SYBR® Premix Ex Taq™ II Kit (RR820A, Takara, Tokyo, Japan) on the ABI 7500 instrument (Applied Biosystems, Foster City, CA, USA). The transcriptional level of miR-206 was estimated by relative quantification (the 2^−ΔΔCt method) and normalized to U6.
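The relative-quantification step reduces to a short computation once mean Ct values are in hand. The sketch below is a minimal illustration of the 2^−ΔΔCt method, assuming triplicate Ct values for miR-206 and the U6 reference in a MIRI sample and a sham control; all numbers are invented.

```python
# Minimal sketch of 2^-ddCt relative quantification (miR-206 vs. U6).
from statistics import mean

def fold_change(target_ct, ref_ct, target_ct_ctrl, ref_ct_ctrl):
    d_ct_sample = mean(target_ct) - mean(ref_ct)              # dCt, sample
    d_ct_control = mean(target_ct_ctrl) - mean(ref_ct_ctrl)   # dCt, control
    dd_ct = d_ct_sample - d_ct_control                        # ddCt
    return 2 ** (-dd_ct)                                      # fold change

# Hypothetical triplicates: MIRI sample vs. sham control
fold = fold_change([28.1, 28.3, 28.0], [19.9, 20.1, 20.0],
                   [26.5, 26.6, 26.4], [20.0, 20.2, 19.9])
print(f"miR-206 fold change vs. control: {fold:.2f}")  # < 1, down-regulated
```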
Apoptosis assay by flow cytometry

Cells were treated with 0.25% trypsin 36 h after transduction. Afterwards, the cells were collected, rinsed twice with PBS, and resuspended in 200 μL of binding buffer. Next, the cells were supplemented with 10 μL of Annexin V-fluorescein isothiocyanate (FITC; ab14085, Abcam) and 5 μL of propidium iodide (PI), gently mixed, and incubated in the dark at room temperature for 15 min. Then, 300 μL of binding buffer was added to the mixture. Cell apoptosis was subsequently evaluated using a flow cytometer (BD FACSCanto II, Image Trading Co., Ltd., Beijing, China) at an excitation wavelength of 488 nm.
Statistical analysis
Statistical analyses were performed using the SPSS 21.0 statistical software (IBM Corp, Armonk, NY, USA). Measurement data were expressed as mean ± standard deviation. Data conforming to a normal distribution and homogeneity of variance were compared between two groups using the unpaired t-test, while comparisons among multiple groups were performed using one-way analysis of variance (ANOVA) with Tukey's post hoc test. A value of p < 0.05 was considered statistically significant.
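For readers who prefer scripted analyses, the plan above maps directly onto scipy/statsmodels. The sketch below is a hedged equivalent, not the authors' actual SPSS workflow, and the group data are invented.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
sham = rng.normal(1.0, 0.1, 10)      # hypothetical HDAC4 levels per group
iri = rng.normal(2.0, 0.2, 10)
iri_sh = rng.normal(1.3, 0.2, 10)

# Two groups: unpaired t-test
t, p = stats.ttest_ind(sham, iri)
print(f"sham vs IRI: t = {t:.2f}, p = {p:.3g}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f, p = stats.f_oneway(sham, iri, iri_sh)
print(f"ANOVA: F = {f:.2f}, p = {p:.3g}")
values = np.concatenate([sham, iri, iri_sh])
labels = ["sham"] * 10 + ["IRI"] * 10 + ["IRI+sh-HDAC4"] * 10
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```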
DATA AVAILABILITY STATEMENT
The datasets generated and/or analyzed during the current study are available from the corresponding authors on reasonable request.
Wrist function recovery course in patients with scaphoid nonunion treated with combined volar bone grafting and a dorsal antegrade headless screw
Background Surgical treatment is necessary for scaphoid nonunion. Open surgery with a combined volar and dorsal approach is thought to have poor functional outcomes and a prolonged recovery course. However, the detailed recovery course for this approach is rarely reported. The aim of this study was to investigate the recovery course and radiographic outcome for patients with scaphoid nonunion who underwent a combined volar bone grafting and dorsal antegrade headless screw approach. Material and methods Eighteen patients with scaphoid nonunion who underwent combined volar bone grafting and dorsal antegrade headless screw fixation were enrolled in this retrospective study. Preoperative and serial postoperative wrist functional and radiographic outcomes were collected and analysed. Results All 18 patients achieved bone union at a mean time of 14.3 weeks. Compared to the preoperative status, the grip strength, wrist motion arc, and Mayo Wrist score were improved significantly 6 months after surgery, whilst the Disabilities of the Arm, Shoulder, and Hand (DASH) score did not recover until 12 months after surgery. Significant improvements were found in all scaphoid radiographic parameters. Conclusion The surgical outcomes for scaphoid nonunion treated with a combined volar bone grafting and dorsal antegrade headless screw achieved a high union rate, with great wrist functional and radiographic outcomes. The earliest recovered wrist functional parameters were grip strength, motion arc, Mayo Wrist score and finally the DASH score at postoperative 6 months and 12 months, respectively.
Introduction
Scaphoid fractures account for 60% of all carpal fractures and are the second most common fracture around the wrist [1]. At 15.5%, the scaphoid has one of the highest nonunion rates amongst all bones of the body [2]. Untreated scaphoid nonunion may progress to scaphoid nonunion advanced collapse, dorsal intercalated segment instability (DISI) deformity, and generalised wrist arthritis. Surgical procedures including bone grafting and screw fixation are the gold standard treatments for scaphoid nonunion [3,4]. In comparison with percutaneous screw fixation and arthroscopic bone grafting, a combined volar and dorsal approach for bone grafting and screw fixation is thought to have inferior functional outcomes and a prolonged recovery course because of the risks of a disrupted blood supply and scar formation [5]. However, the detailed recovery course and the functional and radiographic outcomes of this approach are rarely reported in the literature. A better understanding of the recovery course of combined volar and dorsal approaches may fill the gap between clinical science and clinical practice [6-8]. The aim of this study was to investigate the recovery course and the functional and radiographic outcomes of patients with scaphoid nonunion who were treated with combined volar bone grafting and dorsal antegrade headless screw fixation.
Study population
The trial was approved by the Research Ethics Committee of the China Medical University Hospital, Taichung, Taiwan (Protocol ID: CMUH109-REC1-093), and was conducted in accordance with the ethical principles of the Helsinki Declaration. Clinical data of 18 patients with scaphoid nonunion who underwent volar bone grafting combined with a dorsal antegrade headless screw from January 2016 to June 2019 were collected. The inclusion criteria were scaphoid waist fracture with no sign of bone union for more than 3 months, and Herbert classification type D1 (nonunion > 6 weeks, fibrous union with no deformity), D2 (nonunion > 6 weeks, pseudarthrosis with early deformity), or D3 (nonunion > 6 weeks, sclerotic pseudarthrosis with advanced deformity) [9]. The exclusion criteria were radioscaphoid arthritis, scaphoid nonunion with avascular necrosis (AVN), and previous scaphoid surgery.
Surgical technique
The procedure was performed under general anaesthesia. The patient was positioned supine with the upper limb placed on a radiolucent table. A tourniquet was placed on the upper arm and inflated to 250 mmHg during surgery. A 4-cm curved incision was made along the radial border of the flexor carpi radialis (FCR) tendon, extending proximally from the wrist crease and distally to the scaphoid tubercle. The radial artery and its dorsal branch were carefully protected. Capsulotomy was performed above the radioscaphoid joint with a vertical incision, and the nonunion site was exposed. Interposed fibrous tissue and sclerotic bone occupying the nonunion site were removed thoroughly. The tourniquet was released to confirm bleeding from, and thus the viability of, the fracture fragments. Two 1.6-mm Kirschner wires were inserted perpendicularly into the central portions of the proximal and distal fragments of the fractured scaphoid. The nonunion gap was opened with the aid of a pin distractor. Morselised cancellous bone harvested from the iliac crest was impacted into the wedge-shaped fracture gap. The scaphoid length, the humpback deformity of the scaphoid, and the DISI deformity of the carpus were corrected with this method and verified with intraoperative fluoroscopy. The dorsal approach was then adopted for screw fixation. A 3-cm longitudinal incision was made over the ulnar border of Lister's tubercle. The extensor retinaculum was incised along the extensor pollicis longus (EPL) tendon, and the dorsal radiocarpal joint capsule was exposed between the third and fourth extensor compartments. A vertical capsulotomy was made to expose the scapholunate ligament. Care was taken not to disrupt the blood vessels entering the mid-portion of the scaphoid and to protect the integrity of the scapholunate ligament. The wrist joint was positioned at 30° of flexion and 10° of ulnar deviation to expose the screw entry point. A 1.0-mm antegrade Kirschner wire serving as a guidewire was inserted along the central axis of the scaphoid under intraoperative fluoroscopy. After measurement of the appropriate screw length, a 3.0 Dartfire screw (Wright, Memphis, TN, USA) was inserted into the scaphoid along the guidewire. The stability of the fracture and bone graft was verified with fluoroscopy after headless screw fixation. The wound was then closed in layers, dressed with gauze, and protected with a short arm thumb spica cast immediately after surgery.
Postoperative protocol
A short arm thumb spica cast was applied for 6 weeks and was then replaced with a wrist brace for an additional 6 weeks. Clinical and radiographic follow-up was arranged every 4 weeks for the first 3 months. After the short arm thumb spica cast was removed, the patients began a rehabilitation programme in which a well-trained physical therapist applied passive motion training. At 12 weeks after surgery, low-impact exercises with muscle strengthening were allowed, and the patients were allowed to return to full sports activity 6 months after surgery. Wrist flexion-extension arcs, grip strength, the Visual Analogue Scale (VAS), Mayo Wrist score, and DASH score were recorded at postoperative 3, 6, 9 and 12 months.
Radiographic examination
Our protocol was based on the routine scaphoid series recommended by the American College of Radiology, as described by Shenoy et al. [10]. The scaphoid series was taken in four views: posterior-anterior, lateral, semi-pronated oblique, and posterior-anterior with ulnar deviation.
The measurements of these five parameters were performed by two experienced hand surgeons. If there was a discrepancy in a measurement value or in the bone union time, the final value was determined after re-measurement and discussion between the two surgeons.
Clinical evaluations
Grip strength and flexion-extension arcs were measured preoperatively and at postoperative 3, 6, 9 and 12 months by a blinded observer who was not aware of the surgical plan, and the radiographic findings were measured at the same time points. Hand grip strength was measured with a Jamar Hydraulic Hand Dynamometer (Jamar Technologies, USA) using the Southampton protocol [12]: patients were seated with back support and the hips flexed as close to 90° as could be tolerated. The patients rested their forearms on the armrests with their wrists in a neutral position. The measurer supported the weight of the device by resting it on his or her palm. Measurements were performed three times for each hand to give six readings in total, and the best of the six grip strength measurements was used in the statistical analyses. The operated hand was expressed as a percentage of the normal side. To account for whether the dominant or non-dominant hand was injured, we employed the 10% rule for data correction [13-15]. The active wrist flexion-extension arcs of the operated and non-operated hands were measured with a manual universal goniometer.
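As a worked illustration of the normalisation just described, the sketch below applies the best-of-six rule and one common reading of the 10% rule (the dominant hand is assumed roughly 10% stronger, so the contralateral reference is corrected before the ratio is taken). Function names and readings are hypothetical.

```python
# Hedged sketch of grip-strength normalisation with the 10% rule.

def grip_percent(operated_readings, normal_readings, operated_is_dominant):
    best_op = max(operated_readings)    # best of 3 readings, operated hand
    best_norm = max(normal_readings)    # best of 3 readings, normal hand
    # 10% rule (assumed form): correct the contralateral reference by
    # +/-10% depending on which hand is dominant.
    expected = best_norm * 1.10 if operated_is_dominant else best_norm * 0.90
    return best_op / expected * 100.0

# Example: dominant hand operated, readings in kg
print(grip_percent([28, 30, 29], [34, 33, 35], operated_is_dominant=True))
```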
The functional outcomes of the 18 patients were evaluated with the Visual Analogue Scale (VAS), DASH score, and Mayo Wrist score questionnaires preoperatively and at 3, 6, 9 and 12 months postoperatively. Patient satisfaction was classified into four grades according to the Mayo Wrist score: excellent, 90 to 100; good, 80 to 90; satisfactory, 60 to 80; and poor, below 60.
Statistical analysis
Data analysis was performed using the SPSS software (Version 20.0; Chicago, Illinois). Univariate analysis was performed using frequencies for descriptive statistics. The Kruskal-Wallis test was used in the analysis of categorical variables. Post hoc analysis with the Wilcoxon rank-sum test was performed to evaluate the significant differences between preoperative and postoperative measures at 3, 6, 9 and 12 months. Differences were considered significant if p values were less than 0.05 (two-sided).
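A scripted equivalent of this nonparametric plan, shown here as a hedged scipy sketch with invented grip-strength data rather than the authors' SPSS analysis:

```python
from scipy import stats

# Hypothetical grip strength (% of normal side) at three time points
preop = [55, 60, 48, 62, 58, 50]
m3 = [40, 45, 38, 50, 44, 41]
m12 = [85, 90, 88, 92, 86, 89]

# Omnibus comparison across time points
h, p = stats.kruskal(preop, m3, m12)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3g}")

# Post hoc pairwise comparison: preoperative vs 12 months
z, p = stats.ranksums(preop, m12)
print(f"Wilcoxon rank-sum: z = {z:.2f}, p = {p:.3g}")
```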
Results
From January 2016 to June 2019, 25 patients received surgical treatment for scaphoid nonunion in our hospital. Three were excluded because of avascular necrosis, three because of loss to follow-up, and one because of previous scaphoid surgery. Eighteen patients were finally included for analysis (Table 1). All patients received volar bone grafting and a dorsal antegrade headless screw. The patients' average age was 32.7 (range, 20 to 59) years. The mean time from initial injury to operation was 20.8 (range, 3 to 144) months, and the mean time to union was 14.3 (range, 8.9 to 20.9) weeks. Thirteen (72%) patients were men and five (28%) were women. Four patients (22%) were smokers. The injury mechanisms comprised ten traffic accidents (56%) and eight falls (44%). Scaphoid nonunion occurred on the right side in 9 patients (50%) and on the left side in 9 (50%), and the dominant hand of all patients was the right (100%). The fracture site in all patients was the scaphoid mid-portion. According to the Herbert classification, four cases were D1 (22%), five were D2 (28%), and nine were D3 (50%).
The recovery courses of the grip strength, arc of motion, DASH score and Mayo Wrist score from the preoperative period to 12 months postoperatively are shown in Table 2. The wrist function recovery course after the operation was divided into three phases (Fig. 1). A downswing phase was noted from the preoperative period to 3 months postoperatively, with the lowest wrist function status found at postoperative 3 months. The upswing phase extended from 3 to 6 months postoperatively; a prominent improvement in wrist function occurred in this period, during which low-impact exercises and light activity were allowed under the assistance of an experienced therapist. The rate of improvement in wrist function slowed during the steady growth phase (6 to 12 months postoperatively).
At postoperative 6 months, the grip strength, Mayo Wrist score (Fig. 2) and motion arcs had improved significantly compared with the preoperative status (Table 2). Finally, the DASH score was significantly improved at postoperative 12 months (p < 0.05) (Table 2).
Discussion
The purpose of this study was to investigate the recovery course and the functional and radiographic outcomes of patients with scaphoid nonunion who were treated with combined volar bone grafting and dorsal antegrade headless screw fixation. The main findings of our study are as follows: (1) scaphoid nonunion treated with combined volar bone grafting and a dorsal antegrade headless screw achieved a high union rate with satisfactory wrist functional and radiographic outcomes; (2) the wrist functional recovery course after this double-approach surgery was divided into three phases: a downswing phase from the operation to postoperative 3 months, an upswing phase from postoperative 3 to 6 months, and a slower progressing phase from 6 to 12 months postoperatively; (3) compared with the preoperative status, the grip strength, Mayo Wrist score and motion arcs were the earliest recovered wrist function parameters, with significant improvements at 6 months postoperatively, followed by the DASH score at 12 months postoperatively.
Arthroscopic surgery has the advantages of direct visualisation, facilitated debridement of the scaphoid nonunion site, and minimal violation of the scaphoid vascularity [17]. High union rates of approximately 84 to 100% have been reported with this method [5,21-23]. Though this minimally invasive technique has yielded good results with minimal morbidity, its use is still limited to scaphoid nonunion without a large bone defect [24]. In addition to filling large bone defects, the open volar approach is advantageous for correcting the humpback deformity, scaphoid length, and DISI [12,25] (Table 3). A literature review of the wrist functional recovery course after scaphoid nonunion surgery is presented in Table 4 [25-35]. The union rates ranged from 84.6 to 97.1% for open surgeries and 86 to 100% for arthroscopic surgeries. Our patients achieved a 100% union rate, which was higher than that of the open surgery groups in previous studies. The high union rate in our study suggests the importance of deformity correction and fixation stability, which were determined by the bone graft quality and the screw position. In addition to the compacted wedge-shaped bone graft, the centrally placed screw is the keystone of this procedure. Biomechanically, centrally placed screws have superior stiffness, support a greater load at failure, allow a longer screw length, and achieve a shorter healing time than eccentrically placed screws [36-38]. A centrally placed screw can also be achieved from the volar side by levering the trapezium away or by drilling through a portion of it; however, many surgeons prefer the dorsal approach because of the ease of access and the ability to place a screw closer to the central axis [39,40]. In addition, a study of scaphoid intraosseous vascular anatomy has shown that central-axis, antegrade dorsal screw fixation causes less disruption of the scaphoid internal blood supply than a retrograde volar screw [41]. In our study, the headless screws were inserted through a dorsal mini-open approach instead of a purely dorsal percutaneous technique. A mini-open dorsal approach has been shown to be safer than the purely percutaneous method when approaching from the dorsal mid-portion; Weinberg et al. have shown that there is a 13% chance of tendon injury with a purely percutaneous technique [42]. We believe that the mini-open exposure of the dorsal mid-portion provides adequate protection of vessels and ligaments and offers an excellent insertion site that allows easy placement of a central-axis screw. Table 4 shows the final postoperative motion arcs, grip strength and function scores of previous studies. In our study, we found that the earliest recovered parameters were the grip strength, the Mayo Wrist score and the motion arcs, which had recovered significantly at 6 months postoperatively, followed by the DASH score at 12 months. In our protocol, wrist immobilisation with a short arm thumb spica cast and a wrist brace was applied for the first three postoperative months. Thereafter, low-impact exercises and light activity were allowed under the assistance of an experienced therapist. An average bone union time of 14.3 weeks was observed in this period. In our observation of the functional recovery course of the wrist, the grip strength recovered quickly after bone union and could serve as an early objective predictor of union.
The motion arcs improved quickly after removal of the wrist brace and could be a reliable parameter for evaluating the intensity and frequency of rehabilitation. The Mayo Wrist score is a questionnaire comprising pain, satisfaction, range of motion, and grip strength; the objective improvements in grip strength and motion arcs in the early postoperative period therefore yielded early significant improvements in the Mayo Wrist score. The DASH score contains 38 questions covering different kinds of daily activities and highly strenuous, technically demanding tasks; it requires a longer period of physiotherapy and depends mainly on the patient's subjective feedback. In our opinion, the DASH score is more suitable for evaluating functional recovery 12 months after scaphoid nonunion surgery. This study has several limitations. First, it had no control group; however, our results were comparable to those of Oh et al., in which the open group had better carpal alignment but similar wrist function compared with the arthroscopic group [25]. Second, the 12-month follow-up was relatively short; late complications such as arthritis and screw migration may not be detected within this limited time. Finally, the case number was relatively small. The findings of our study should be confirmed in a future study with a larger population.
Conclusion
The surgical outcomes for scaphoid nonunion treated with combined volar bone grafting and a dorsal antegrade headless screw achieved a high union rate and great wrist functional and radiographic outcomes. The earliest recovered wrist functional parameters were grip strength, Mayo Wrist score and motion arc at postoperative 6 months, followed by the DASH score at postoperative 12 months. We believe that our findings are informative for clinical hand surgeons in predicting the postoperative functional recovery course. Our findings also provide a reference for common functional scores at different evaluation times; however, these findings should be confirmed in a future study with a larger population and longer follow-up time.
Patterns of comorbidity and disease characteristics among patients with ankylosing spondylitis—a cross-sectional study
The knowledge of the development of comorbidities in patients with ankylosing spondylitis (AS) is limited. The aim of this study was to analyse associations between AS disease characteristics and comorbidity and to evaluate patterns of comorbidities in patients with AS. Patients with AS, fulfilling the modified New York Criteria, were identified (n = 346, mean age 56 ± 15 years, 75% men, 99% HLA B27 positive). Through a review of the patient records, data on disease activity parameters, laboratory results, disease manifestations, and diagnoses of any clinically significant comorbidity were obtained. Four categories of comorbidities of interest were identified: A. arrhythmias, conduction disorders, and valvular heart disease; B. atherosclerosis and atherosclerotic CVD; C. spinal and non-spinal fractures; and D. obstructive sleep apnoea syndrome. Associations between AS disease characteristics and comorbidities in these categories were assessed in logistic regression models. Differences in proportions of comorbidities were analysed using two-sided chi-square tests. Age was associated with all four categories of comorbidities, and male sex with arrhythmias, conduction disorders, valvular heart disease, and obstructive sleep apnoea syndrome. Early disease onset and long disease duration, respectively, were associated with arrhythmias, conduction disorders, and valvular heart disease. Obstructive sleep apnoea syndrome was associated with features of the metabolic syndrome. Patients with atherosclerotic cardiovascular disease had an increased risk of most other comorbidities, similar to, but more pronounced than, patients with arrhythmias, conduction disorders and valvular heart disease. Comorbid conditions motivate clinical awareness among patients with AS. Longitudinal studies are needed to establish preventive measures. Electronic supplementary material The online version of this article (10.1007/s10067-017-3894-0) contains supplementary material, which is available to authorized users.
Introduction
Ankylosing spondylitis (AS) is a chronic inflammatory disease, primarily involving the axial skeleton and entheses [1]. The disease usually becomes clinically manifest in the third decade of life, and men are overrepresented among patients with a diagnosis of AS [1]. The prevalence of AS mirrors the prevalence of the genetic factor, the MHC class I molecule HLA B27, in the population. In Sweden, this is illustrated by a south-north gradient, with an observed prevalence of AS of around 0.15% in the south and 0.25% in the northern parts of the country [2]. The clinical characteristics of AS, such as the risk of progressive spinal stiffness, are well known, but the total burden of the disease, taking also comorbidity into account, has been less studied. Some comorbidities, like anterior uveitis, inflammatory bowel disease, and psoriasis, might be considered phenotypic features of the AS disease as such [3,4]. This might also be the case for aortic regurgitation and cardiac conduction system disorders, and possibly also osteoporosis and spinal fractures, which are all related to the disease [5-10]. Other comorbidities observed to be associated with AS include hypertension, dyslipidaemia, diabetes, and sleep apnoea syndrome [3,11-13]. An increased mortality risk, mainly attributed to cardiovascular disease (CVD), infections, and cervical spinal fractures, has been reported in AS [14-16]. In order to improve the clinical care of patients with this potentially debilitating and life-threatening disease, it is important to identify the risk factors for the development of the comorbid conditions as well as the relationships between comorbidities and the disease characteristics of the patients. In addition, it is also of interest to reveal whether the comorbidities are related to each other.
The aim of this study was to evaluate comorbidities in a clinical population of patients with AS according to the modified New York Criteria [17], taking the phenotype of the AS disease into consideration. Firstly, the report will focus on four categories of comorbidities: A. arrhythmias, conduction disorders, and valvular heart disease; B. atherosclerosis and atherosclerotic CVD; C. spinal and non-spinal fractures; and D. obstructive sleep apnoea syndrome (OSAS), in relation to characteristics of the AS patients. Secondly, the presence of these particular comorbidities will be related to other comorbidities in order to reveal patterns of comorbidities in the AS patients.
Methods
Västerbotten County in northern Sweden, with a population of 265,000 inhabitants, has one public clinic of rheumatology, located at the university hospital with units in two other cities, and no private practitioners. Patients with AS treated with conventional synthetic or biologic disease-modifying antirheumatic drugs (csDMARDs or bDMARDs) are followed by rheumatologists. Other patients with AS are offered yearly follow-up by physiotherapists, although for less severe disease the intervals between visits can be increased. Through the digital system of patient records, covering all three rheumatology units in Västerbotten County, all individuals with a diagnosis of AS (ICD-10 M45.9) recorded at a visit to the clinic of rheumatology between May 2002 (when the patient records were digitized) and November 2015 (n = 523) were identified. Individuals not fulfilling the modified New York Criteria [17] were excluded, leaving 346 patient files for thorough evaluation according to a pre-set form. Demographic data (age, sex, educational level, smoking habits); disease-specific data (age at symptom onset, extra-articular disease manifestations, C-reactive protein (CRP) levels (g/L), mobility measurements, HLA-B27 status, treatment with csDMARDs or bDMARDs, glucocorticoids, or nonsteroidal anti-inflammatory drugs (NSAIDs)); and the occurrence of comorbidity (Supplementary Table 1) were collected from all visits at the clinic of rheumatology. The evaluation period covered May 2002 to the end of December 2015. Data on comorbidities and events from other medical specialities, including primary care, were also collected through overviews of diagnoses from digital patient records covering all in- and out-patient care in the county, except outpatient care at approximately ten primary care units. The data on events from the record overviews were complemented with data from the patient records, radiology reports, and other laboratory data when considered relevant and available.
A comorbidity or an event was considered prevalent for a patient if it ever occurred during the period May 2002-December 2015 or was noted in any patient record as occurring at a time point antedating May 2002. The same consideration was made regarding smoking and disease characteristics such as a description of arthritis, enthesitis, anterior uveitis, inflammatory bowel disease, or psoriasis in the patient record. Treatment with DMARDs, glucocorticoids, NSAIDs, or tumour necrosis factor inhibitors (TNFi) was registered if usage was noted in the patient record during or before the evaluation period. Mobility measurements from the latest recorded visit were used for calculation of an index for spinal mobility, based on the categories of mobility for each of the five measurements defined in the Bath Ankylosing Spondylitis Metrology Index (BASMI) [18]. All five spinal mobility measurements were available for only one patient. Seventy-five individuals (21.7%) lacked all measurements, 146 (42.2%) had four available variables, and the remaining 124 (35.9%) patients had 1-3 variables. The mean score from all available measurements for each individual was included and is referred to as spinal mobility.
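As an illustration of this index, a minimal R sketch follows; the data frame, column names, and scores below are hypothetical, invented only to show the mean-of-available-measurements rule, and are not study data:

    # Hypothetical BASMI category scores (0-2) per patient; NA = not measured
    basmi <- data.frame(
      tragus_to_wall          = c(1, 2, NA),
      lumbar_flexion          = c(0, 2, 1),
      cervical_rotation       = c(1, NA, NA),
      lumbar_side_flexion     = c(0, 1, NA),
      intermalleolar_distance = c(NA, 2, 2)
    )

    # Spinal mobility: mean of all available measurements for each individual
    spinal_mobility <- rowMeans(basmi, na.rm = TRUE)

    # Individuals lacking all measurements yield NaN and are excluded from models
    spinal_mobility[is.nan(spinal_mobility)] <- NA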
All patients but 18 (5.2%) had at least one analysed CRP value during the evaluation period. For patients with more than one CRP measurement, the mean of all available values was used. Missing data were also observed regarding the age at or date of disease onset (10 individuals, 2.9%), HLA-B27 status (93 individuals, 26.9%), and smoking habits (132 individuals, 38.2%).
In this report, we chose to focus on four categories of comorbidities of interest in AS, namely: A. valvular heart disease and/or arrhythmia; B. atherosclerotic CVD; C. spinal and non-spinal fractures; and D. OSAS. Group A included any valvular heart disease, atrial fibrillation, atrial flutter, and other arrhythmias or conduction disorders (Supplementary Table 2). Group B included ischemic heart disease, cerebrovascular disease, peripheral vascular disease, and congestive heart disease (Supplementary Table 2).
The study was performed in accordance with the Helsinki Declaration and was approved by the Ethical Review Board at Umeå University, Umeå, Sweden. The Ethics Review Board waived the requirement for individual consent for this retrospective, observational study.
Statistical methods
Simple logistic regression models, adjusted for age at the end of the evaluation period (or the age at death) and sex (if appropriate), were used to evaluate the associations between AS disease characteristics and the four main categories of comorbidities of interest (A-D), respectively, as dependent variables. The results are presented as odds ratios (ORs) with 95% confidence intervals (CIs) and p values. For dichotomous variables, presence/history of a characteristic was coded 1 and no presence/history of a characteristic was coded 0. Male sex was coded 1 and female sex 2. Patients with missing data regarding age at symptom onset, mean CRP, or spinal mobility were not included in the respective regression model. Patterns of comorbidities were presented graphically, and the statistical significance of differences in proportions of comorbidities between patients with and without the specific comorbidity of interest (A-D) was analysed using two-sided chi-square tests. The level of significance was set at p < 0.05. All analyses were performed using IBM SPSS Statistics Version 24 for Mac.
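For illustration, a minimal R sketch of these analyses follows (the published analyses were run in SPSS; the simulated data and variable names below are hypothetical): an age- and sex-adjusted logistic regression yielding ORs with Wald 95% CIs, plus a two-sided chi-square test of a difference in proportions.

    set.seed(42)

    # Simulated, hypothetical analysis data: one row per patient
    df <- data.frame(
      arrhythmia_vhd = rbinom(346, 1, 0.15),            # comorbidity A (1 = present)
      ever_nsaid     = rbinom(346, 1, 0.80),            # 1 = presence/history
      age_end        = rnorm(346, mean = 60, sd = 13),  # age at end of evaluation/death
      sex            = sample(1:2, 346, replace = TRUE) # 1 = male, 2 = female
    )

    # Simple logistic regression of one characteristic, adjusted for age and sex
    fit <- glm(arrhythmia_vhd ~ ever_nsaid + age_end + sex,
               family = binomial, data = df)

    # Odds ratios with Wald 95% confidence intervals and p values
    round(exp(cbind(OR = coef(fit), confint.default(fit))), 2)
    round(summary(fit)$coefficients[, "Pr(>|z|)"], 3)

    # Two-sided chi-square test comparing comorbidity proportions between groups
    chisq.test(table(df$ever_nsaid, df$arrhythmia_vhd))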
Results
Among the 346 included patients with AS, one in four was female (Table 1). The reported disease onset occurred at a mean of 25 (standard deviation; SD 9) years of age, most patients were HLA-B27 positive, and a majority of the patients were or had been smokers (Table 1). The mean (SD) disease duration at the end of the evaluation period (or death) was 31 (14) years, ranging from 1 to 60 years. Twenty-three patients died during the evaluation period, at a mean (SD) age of 73 (7) years. Disease characteristics of the included patients with AS are presented in Table 1.
Frequencies of the main comorbidities of interest
Arrhythmias were registered for 34 patients (9.8%) and mainly comprised atrial fibrillation or flutter (n = 31). Other arrhythmias or conduction disorders (pacemaker, n = 1; pacemaker and second-degree atrioventricular block, n = 1; paroxysmal supraventricular tachycardia, n = 2; long QT syndrome, n = 1; left bundle branch block and first-degree atrioventricular block, n = 1; unspecified arrhythmia or conduction disorder, n = 7) were present in 13 individuals, of whom 4 also had atrial fibrillation. Valvular heart disease was found in 16 individuals, of whom 8 had aortic insufficiency, 3 aortic stenosis, 2 both aortic stenosis and insufficiency, and 3 mitral insufficiency (Supplementary Table 1). Any arrhythmia and/or valvular heart disease was present in 51 individuals: 7 women and 44 men.
Among patients with atherosclerotic CVD (n = 57) half (n = 29) had more than one of the included atherosclerotic CVD conditions. The most frequent diagnoses were myocardial infarction (n = 25), congestive heart disease (n = 20), angina pectoris (n = 17), and unstable angina (n = 10) (Supplementary Table 1). Eleven individuals had suffered from a cerebrovascular disease, stroke or transient ischemic attack. Among the patients with congestive heart disease, 9 had no registered ischemic heart disease or cerebrovascular disease, but 5 had a diagnosis of hypertension. Cardiovascular disease was observed as the underlying cause of death for 8 of the 23 patients who had died during the evaluation period.
One or several fractures were registered in 85 patients with AS, comprising 26 spinal and 70 non-spinal fractures. Non-spinal fractures most frequently affected the forearm (n = 17), the lower leg or ankle (n = 13), the hip (n = 12), the ribs (n = 9), the clavicle (n = 8), or fingers or toes (n = 6). Most spinal fractures were stable compression fractures of the thoracic or lumbar spine (n = 15), but one patient had an unstable thoracic fracture requiring surgery. Nine patients suffered cervical fractures, which for one patient was the underlying cause of death.
A diagnosis of OSAS was observed for 30 patients, mostly men (n = 28) (Supplementary Table 1).
Associations between disease characteristics and the main comorbidities of interest
Higher age increased the probability of all four groups of comorbidities of interest (Table 2). Male sex was associated with a 5-fold increased risk of OSAS and a 3-fold increased risk of arrhythmia and/or valvular heart disease (Table 2). Onset of AS at an earlier age and longer disease duration, respectively, were associated with a higher prevalence of arrhythmia and/or valvular heart disease (p = 0.036, Table 2). Ever treatment with NSAIDs was associated with a lower prevalence of arrhythmia and/or valvular heart disease (Table 2). No statistically significant associations were noted between disease characteristics and atherosclerotic CVD or spinal or non-spinal fracture, respectively (Table 2).
Patterns of comorbidities in AS patients
Hypertension was the most frequent comorbidity observed in the AS population, affecting 156 (45.1%) individuals (Fig. 1, Supplementary Table 1). More than 10% of the included patients had diabetes, malignancy, asthma, ischemic heart disease, urogenital disease, dyslipidaemia, or non-spinal fracture, respectively (Fig. 1, Supplementary Table 1). Patients suffering from arrhythmia and/or valvular heart disease had a higher proportion of most other CVD, including hypertension, as well as stomach ulcer, urogenital disease, and dyslipidaemia, compared with individuals without arrhythmia and/or valvular heart disease (Fig. 1). Most registered comorbidities, including osteoporosis and hospitalisation due to infections, were more frequent among patients with atherosclerotic CVD than among patients with AS not suffering from atherosclerotic CVD (Fig. 1). The prevalence of spinal or non-spinal fracture was related to the presence of chronic obstructive pulmonary disease, osteoporosis, congestive heart disease, diabetes, and urogenital disease (Fig. 1). Patients with OSAS had a higher proportion of features of the metabolic syndrome (diabetes, dyslipidaemia, and hypertension) (Fig. 1).
Discussion
In this cross-sectional study, exploring patterns of comorbidity in patients with established AS, we observed CVD, arrhythmias and valvular heart disease, fractures and OSAS to be associated with age, and arrhythmias and valvular heart disease to be associated with long disease duration, and low age at AS disease onset. Atherosclerotic CVD was linked to numerous other comorbidities, and arrhythmia and/or valvular heart disease was associated mainly with other CVD and CVD-risk factors. Obstructive sleep apnoea syndrome was linked to features of the metabolic syndrome, but no clear pattern was observed for the group of spinal and non-spinal fractures.
Progressive fibrotic changes in the aortic root leading to impaired aortic valve function and conduction abnormalities are frequently observed in patients with AS and can be seen as an extra-articular disease manifestation [6]. The background of CV comorbidity in patients with AS is complex. Disease-specific changes, such as fibrosis, as well as systemic inflammation and traditional CV risk factors might affect the risk of valvular or congestive heart disease, arrhythmias, and coronary disease [5,19,20]. In the present study, we observed a relationship between arrhythmias and/or valvular heart disease and atherosclerotic CVD, which we take to reflect this complexity. As expected, increasing age was associated with all four groups but had the largest impact on the group with atherosclerotic CVD. This is also likely to explain why this group was associated with the largest number of other comorbid conditions. The observed associations between age and disease duration, respectively, and arrhythmias and/or valvular heart disease are in line with previous studies on conduction disorders or aortic valve disease [5,7,21,22]. In contrast to the present study, some earlier studies have shown associations with measurements of disease severity or AS disease phenotype [7,21]. None of the DMARD, TNFi, or glucocorticoid treatments (which may be suggested as proxies for a more severe disease), nor the mean CRP or spinal mobility, showed any associations with the comorbidities of interest. A proper evaluation of any impact of disease activity on the development of comorbidities would, however, require a prospective study design, and preferably an inception cohort. The negative association between NSAID treatment and arrhythmias and/or valvular heart disease is likely due to confounding by contraindication: among the individuals who had never used NSAIDs, one in three used or had used anticoagulants, which constitutes a contraindication for NSAID use. Despite the high prevalence of fractures in the population, only 3.5% of the patients overall and 7% of the patients with fractures had a diagnosis of osteoporosis. This could reflect insufficient awareness of the risk of osteoporosis in AS, and possibly under-treatment of the condition.

[Table 2 abbreviations: OR, odds ratio; CI, confidence interval; CVD, cardiovascular disease; AS, ankylosing spondylitis; csDMARD, conventional synthetic disease modifying antirheumatic drug; TNFi, tumour necrosis factor inhibitor; CRP, C-reactive protein. *The model for male sex was adjusted for age at the end of the evaluation period; the model for age was adjusted for sex.]
Obstructive sleep apnoea has been suggested as a part of the metabolic syndrome, and also to contribute to the metabolic derangements [23], which could be a background for the associations observed in the present study.
The relatively low percentage of patients ever exposed to TNFi, 14%, can be attributed to the specific setting of the study. A lower uptake of biological drugs in Västerbotten County due to local treatment traditions [24] is one explanation. The second is the inclusion also of patients with mild AS disease, resulting in a greater denominator compared with other hospital-based AS populations.

Fig. 1 Patterns of comorbidities in patients with ankylosing spondylitis (n = 346) overall, and in patients with ankylosing spondylitis and the specified comorbidities: (a) arrhythmia and/or valvular heart disease, (b) atherosclerotic CVD, (c) spinal or non-spinal fracture, or (d) obstructive sleep apnoea syndrome. Statistically significant differences in the proportion of a comorbidity compared with individuals without the specified (A-D) comorbidity are marked: * for p < 0.05, ** for p < 0.01, and *** for p < 0.001.

Despite the strengths of this study in terms of generalisability, owing to the relatively large patient population, the inclusion of patients with mild as well as severe disease, and the availability of patient records from most medical specialities, several limitations need to be taken into consideration. Firstly, the included data were retrieved from clinical visits and not from structured follow-ups, which means that the choice to measure or register a variable or not is likely to have been influenced by factors related to the health care providers and to characteristics of the patients. An example is body weight (a parameter not included in the analyses), which was registered for 80% of the patients with OSAS but only for 40% of the patients without this comorbidity. Secondly, the comorbidities were retrieved from records, and no formal validation could be performed. Some diagnoses of comorbidities were imprecise, such as some arrhythmias or conduction system disorders. It is also likely that comorbidities considered to be of minor clinical significance, such as first-degree atrioventricular block, were less frequently listed as a diagnosis in the records overview, resulting in a low sensitivity for such comorbidities in the present study. The recording of disease-specific factors, comorbidities, and events taking place before the digitization of the medical records might not be complete. The specific selection of patients due to the use of the modified New York Criteria should also be noted as a potential limitation. Due to the explorative nature of the study and the lack of precision in several measurements, we refrained from any statistical handling of missing data. The cross-sectional design does not allow any suggestions of prediction or causality. Thus, the present study has its main value as explorative and hypothesis generating, and the results might not be applicable to other populations of patients with AS.
In summary, multiple coexisting comorbidities were frequent in AS, especially among patients with CVD. Higher age was associated with comorbidity, but among disease characteristics only long disease duration and early disease onset seemed to contribute to the risk of comorbidity. The risk of comorbidity in AS motivates clinical awareness. Longitudinal studies are needed to identify predictors of and measures to prevent comorbidity in the AS population.
|
2018-03-04T14:09:50.314Z
|
2017-11-08T00:00:00.000
|
{
"year": 2017,
"sha1": "d4a7be1671ae21a2b95037989b7b5a3fc09d09c8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10067-017-3894-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e48334cc535d3cefd4aba340114b22db26b251e5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255268251
|
pes2o/s2orc
|
v3-fos-license
|
Characteristic analysis of skin keratinocytes in patients with type 2 diabetes based on the single-cell levels
Abstract Background: Keratinocytes play an important role in wound healing; however, less is known about skin keratinocytes in patients with type 2 diabetes mellitus (T2DM). Therefore, this study aimed to characterize the transcriptional features of keratinocytes at the single-cell level in T2DM patients and to provide experimental data for identifying the pathological mechanisms of keratinocytes under pathological conditions. Methods: We performed single-cell RNA sequencing on skin tissue from two T2DM patients and one trauma patient without diabetes using the BD Rhapsody™ Single-Cell Analysis System. With the help of R-based single-cell analysis software, we analyzed the sequencing results to identify single-cell subsets and the transcriptional characteristics of keratinocytes at the single-cell level, including Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses. Results: We found specifically and highly expressed keratinocyte signature genes. We analyzed the keratinocyte transcriptomes of the experimental and control groups, screened a total of 356 differential genes, and subjected them to bioinformatics analysis. Enriched pathways included oxidative phosphorylation, antigen processing and presentation, prion disease and Huntington's disease, bacterial invasion of epithelial cells, thermogenesis, vasopressin-regulated water reabsorption, and protein processing in the endoplasmic reticulum. Conclusions: This study revealed the characteristics of keratinocytes at the single-cell level and screened a group of differentially expressed genes of T2DM-associated keratinocytes, enriched in oxidative phosphorylation, cytokine-receptor interaction, prion disease, and other signaling pathways.
Introduction
Wound healing is a complex process that includes a series of pathophysiological events, including inflammatory responses, epidermal regeneration, proliferation and differentiation of various cells, and tissue remodeling. [1] The epidermis forms the protective outer layer of the human body. Under physiological conditions, the epidermis constantly undergoes self-renewal after stimulation by exogenous and endogenous injuries. The main participants in this process are the keratinocytes located in the basal layer of the epidermis. [2] When a wound forms, epidermal regeneration is likewise accomplished by keratinocytes through proliferation, differentiation, and migration. Therefore, any endogenous or exogenous adverse factor that interferes with the regulation of wound healing will affect the wound healing process. [3] In patients with type 2 diabetes mellitus (T2DM), especially those whose blood glucose is not well controlled, delayed wound healing or even non-healing wounds are common. However, the mechanism behind this impaired wound healing still needs further clarification, including the pathological changes of keratinocytes in a high-glucose state.
Compared with bulk RNA sequencing (RNA-seq), which only provides an average expression across millions of cells, single-cell RNA sequencing (scRNA-seq) allows concurrent analysis of thousands of cell transcriptomes at the single-cell level, thereby allowing the characterization of novel cell subsets. [4] Therefore, scRNA-seq can reliably identify even closely related cell populations, reveal the changes that make each cell type unique, clarify the heterogeneity of gene expression patterns in peripheral blood cell populations, and be applied in the diagnostic and prognostic evaluation of clinical diseases. [5,6] However, current skin tissue studies are still insufficient to fully elucidate the mechanisms of wound healing at the single-cell level. Therefore, in this study, we collected skin tissues from two patients with T2DM and one patient without diabetes and performed scRNA-seq of the tissues. This study aimed to investigate the characteristics of keratinocytes at the single-cell level in patients with diabetes and to provide a scientific basis for the care and treatment of clinical wound healing.
Ethics approval
The study was conducted according to the Declaration of Helsinki. The Ethics Committee of Integrated Traditional Chinese and Western Medicine Hospital, Southern Medical University, approved the protocol in the form of case reports. Written informed consent was obtained from each patient.
Human samples
Samples were collected from two patients with T2DM (47 and 58 years old, male, abdominal surgery) and one trauma patient without diabetes (52 years old, male, right upper extremity trauma) as the control. Skin tissue was obtained from tissue resected at the time of the patient's surgery. Primary tissue preparation was performed according to previous studies. Briefly, the collected tissues were minced and digested into a single-cell suspension, and single cells were prepared in the differentiation medium. After centrifugation, the cells were resuspended in Roswell Park Memorial Institute medium with 10% fetal bovine serum (RPMI-1640 + 10% FBS) and processed to construct single-cell next-generation sequencing libraries.

Processing and analysis of single-cell RNA-seq data

Raw reads from scRNA-seq were processed using R 3.6.2, and gene expression data analyses were performed using the R/Seurat package. For the quality control step, low-quality cells, empty droplets, and multiplexed captures were first filtered out based on the distribution of unique genes detected per cell in each sample. Cells with <300 genes and cells with >6000 genes were excluded. The cellular distribution of the mitochondrial gene fraction was also plotted, and cells with a mitochondrial gene fraction >30% were discarded to eliminate dying cells or low-quality cells with extensive mitochondrial contamination. Subsequently, t-distributed Stochastic Neighbor Embedding (tSNE) was used for the two-dimensional representation of the data structure. After clustering, the cluster biomarkers of each group were identified using the FindMarkers and FindAllMarkers functions within the Seurat package; thus, we were able to define cluster marker genes by their differential expression.
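A rough R/Seurat sketch of the quality-control and clustering steps described above follows; the thresholds are taken from the text, while the simulated count matrix, its dimensions, and the remaining parameters are assumptions made only for illustration:

    library(Seurat)

    # Simulated gene-by-cell count matrix standing in for the BD Rhapsody output
    set.seed(1)
    genes  <- c(paste0("MT-", 1:10), paste0("Gene", 1:1990))
    counts <- matrix(rpois(2000 * 500, lambda = 1), nrow = 2000,
                     dimnames = list(genes, paste0("Cell", 1:500)))

    seu <- CreateSeuratObject(counts = counts, project = "skin")

    # Fraction of mitochondrial reads per cell
    seu[["percent.mt"]] <- PercentageFeatureSet(seu, pattern = "^MT-")

    # Filters from the text: 300-6000 detected genes, <30% mitochondrial reads
    seu <- subset(seu, subset = nFeature_RNA > 300 & nFeature_RNA < 6000 &
                                percent.mt < 30)

    # Normalization, clustering, and the tSNE embedding
    seu <- NormalizeData(seu)
    seu <- FindVariableFeatures(seu)
    seu <- ScaleData(seu)
    seu <- RunPCA(seu)
    seu <- FindNeighbors(seu, dims = 1:20)
    seu <- FindClusters(seu)
    seu <- RunTSNE(seu, dims = 1:20)

    # Cluster marker genes via FindAllMarkers (or FindMarkers for one cluster)
    markers <- FindAllMarkers(seu, only.pos = TRUE)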
Differential expression and enrichment analyses
We performed differential expression analysis between the experimental and control groups and identified the sets of differentially expressed genes in R using the DESeq2 package, with thresholds of P < 0.05 and an absolute fold change >1.5. The identified differentially expressed genes were analyzed according to Gene Ontology (GO) functional categories and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment pathways.
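A sketch of this differential expression step in R, using DESeq2 calls and the thresholds stated above; the count matrix and group labels are simulated placeholders, not the study data:

    library(DESeq2)

    # Simulated raw counts (genes x samples) and group labels as placeholders
    set.seed(2)
    counts <- matrix(rnbinom(1000 * 12, mu = 10, size = 1), nrow = 1000,
                     dimnames = list(paste0("gene", 1:1000), paste0("s", 1:12)))
    groups <- factor(rep(c("control", "T2DM"), each = 6))

    dds <- DESeqDataSetFromMatrix(countData = counts,
                                  colData   = data.frame(group = groups),
                                  design    = ~ group)
    dds <- DESeq(dds)
    res <- results(dds, contrast = c("group", "T2DM", "control"))

    # Thresholds from the text: P < 0.05 and absolute fold change > 1.5
    deg <- subset(as.data.frame(res),
                  pvalue < 0.05 & abs(log2FoldChange) > log2(1.5))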
Recognition and identification of keratinocytes
After annotation of the cell populations, we identified cluster 3 (n = 520) as keratinocytes. The specific expression of keratinocyte signature genes supported this assignment, with high expression of SFN, LYPD3, S100A8, KRT1, KRT10, KRT6A, KRT5, and KRT16 [Figure 2]. The KRT genes encode keratins, a family of fibrous structural proteins (scleroproteins). Keratin is a key structural material forming the outer layers of scales, hair, feathers, horns, feet, hooves, calluses, and vertebrate skin. Keratin can also protect epithelial cells from damage or stress.
Transcriptional characteristics of keratinocytes from T2DM patients
We analyzed the keratinocyte transcriptomes of the T2DM and control groups and screened a total of 356 differential genes at P < 0.01, including LUCAT1, MAL2, MXD1, PKP1, JUP, MARCH5, MARCH7, PTEN, AQP9, PIK3R3, PHKA2, PC, IL3, and ADH4 [Figure 3A]. Gene set enrichment analysis revealed differential gene enrichment in signaling pathways including oxidative phosphorylation, cytokine-cytokine receptor interactions, prion disease, Huntington's disease, antigen processing and presentation, thermogenesis, tryptophan metabolism, retinol metabolism, amyotrophic lateral sclerosis, fatty acid degradation, tyrosine metabolism, bacterial invasion of epithelial cells, and other processes [Figure 3B]. Among them, the prion disease signaling pathway had a normalized enrichment score (NES) = 1.585, P = 0.0029; the oxidative phosphorylation signaling pathway, NES = 1.733, P < 0.0001; and the cytokine-cytokine receptor interaction signaling pathway, NES = 1.598, P = 0.0030 [Figure 3C-E]. Gene Ontology term enrichment analysis revealed that the genes specifically and highly expressed in keratinocytes were enriched in biological process terms related to the regulation of T cell activation, leukocyte cell-cell adhesion, regulation of lymphocyte activation, lymphocyte differentiation, positive regulation of cytokine production, regulation of hemopoiesis, cellular responses to tumor necrosis factor (TNF), negative regulation of interleukin-2 production, and cytokine secretion; in cellular component terms related to the external side of the plasma membrane, focal adhesion, cell-substrate adherens junction, invadopodium, and condensed chromosome; and in molecular function terms related to actin binding.
Discussion
Patients with T2DM have an increased risk of skin infection and poor wound healing. Impaired keratinocyte function is one of the major factors in impaired wound healing in patients with diabetes. [7] In the pathological changes of abnormal cellular immune function and tissue inflammation, the keratinocyte defense response plays an important role and is usually described as a defensive sentinel. [8] In this study, we collected skin tissue from T2DM patients and then performed scRNA-seq based on the BD Rhapsody platform. Cells were divided into eight clusters, and seven cell types were identified: endothelial cells, fibroblasts, smooth muscle cells, keratinocytes, dendritic cells, mast cells, and T cells. We first characterized keratinocytes and clearly observed high expression of the KRT gene family, in agreement with keratinocyte function. However, the number of keratinocytes screened in this study was not large; on the one hand, relatively much subcutaneous tissue was attached to the samples; on the other hand, cells close to the skin surface were likely in a keratinized state and could easily be treated as low-quality cells that had lost their activity and thus be filtered out during the data preprocessing phase. At the same time, we found that S100A8 was highly expressed in keratinocytes. S100A8 and its binding partner S100A9 are members of a non-ubiquitous multigene family of cytoplasmic Ca2+-binding proteins. [9,10] Because of their damage-associated molecular pattern, their differential expression in chronic inflammatory diseases, and their association with cancer, these proteins have received much attention over the past years. [9] The expression of S100A8 is specifically derived from activated phagocytes. [11] It has been previously documented that S100A8/A9 is found in differentiating supra-basal wound keratinocytes, [12] particularly during the first 12 to 24 h after injury, and gradually returns to baseline expression within two weeks after injury. [13] Therefore, during wound healing, S100A8/A9 immunoreactivity increases rapidly in the infiltrating leukocytes, followed by continuous S100A8/A9 expression in the keratinocytes of the wound, representing de novo synthesis of S100A8/A9. [14,15] Compared with the non-diabetic sample, we screened high and low expression genes of keratinocytes from T2DM. KEGG pathway analysis of the differential genes identified about 20 differentially enriched signaling pathways, of which the top-ranked were oxidative phosphorylation, cytokine-cytokine receptor interactions, and prion disease. Oxidative phosphorylation is an efficient way of producing large amounts of adenosine triphosphate (ATP), driven by a chemiosmotic gradient generated in the process. [16] The most important part of this process is the electron transport chain, which produces more ATP than any other step in cellular respiration. Therefore, this abnormal signaling pathway may suggest that the energy metabolism of keratinocytes in T2DM patients is altered. Another enriched signaling pathway consists of cytokine-cytokine receptor interactions, a common but not highly specific signaling pathway, especially in inflammatory and immune-related diseases. [17,18]
Cytokines are key intercellular modulators that participate in innate and adaptive inflammation, host defense, cell growth, differentiation, cell death, and angiogenesis, and they mobilize cells aimed at restoring homeostasis during development and repair. [19] Keratinocytes express and secrete a broad range of cytokines that can affect and amplify inflammatory responses, induce keratinocyte proliferation, and promote the migration of leukocytes into the skin. [20,21] It has been reported that keratinocytes constitutively produce interleukin (IL)-1a and IL-1b, which bind to the same receptor complex and have similar biological activities. [22,23] IL-1 released by keratinocytes can trigger a rapid immune response, leading to the expression and release of other cytokines such as IL-6, IL-8, and tumor necrosis factor (TNF), and to Th2 cytokine-induced activation of the IL-33/ST2 axis, which is involved in the progression of several skin diseases. [24,25] The limitations of this study include the small number of subjects; in addition, the enrolled diabetic patients had relatively good glycemic control and did not fully represent the abnormal glucose metabolism found under pathological conditions. Moreover, the control sample was accompanied by a certain skin inflammatory response, which could have affected the objectivity of the experimental results to some extent. Finally, although the present study analyzed the characteristics of keratinocytes at the single-cell level, cellular and animal experiments would be desirable to elucidate the molecular and biological alterations of keratinocytes in diabetic patients.
In conclusion, single-cell sequencing of skin samples from patients with T2DM was performed. The characteristics of keratinocytes at the single-cell level were revealed, and a group of differentially expressed keratinocyte genes related to T2DM, enriched in oxidative phosphorylation, cytokine-receptor interaction, prion disease, and other signaling pathways, was screened. Thus, our study provides valuable experimental data concerning the impairment of wound healing caused by diabetes and helps provide clues for future studies on the molecular mechanisms of wound healing.
Conflicts of interest
None.
|
2022-12-31T15:16:30.693Z
|
2022-10-20T00:00:00.000
|
{
"year": 2022,
"sha1": "022a8332510cfb540050c8699945b8717902a437",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "WoltersKluwer",
"pdf_hash": "022a8332510cfb540050c8699945b8717902a437",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
}
|
213971004
|
pes2o/s2orc
|
v3-fos-license
|
Bull horn injury causing traumatic tooth intrusion – ultrasound and CT imaging
Introduction Traumatic injury to the upper alveolus may result in apical displacement of the affected tooth/teeth into the underlying alveolar bone. The tooth, while being driven into the socket under the upwardly directed impact force, usually causes a crushing fracture of the alveolar socket bone. The tooth may also be displaced through the labial plate of bone or may even impinge upon the bud of the permanent tooth. Case report We present a case of tooth intrusion due to bull horn injury and its imaging features on ultrasound and CT scan. Discussion The teeth most commonly involved in dental trauma in children are the maxillary anterior teeth, and the 6- to 12-year age group is also the group in which tooth intrusion is most commonly seen. Tooth intrusion usually involves a single dental element. Common etiologic causes are injuries, falls, sports accidents, violence, and traffic accidents. Traumatic intrusion due to injury by animals is rarely described and is more commonly seen in less developed areas, particularly in rural settings where man-animal encounters are frequent. Conclusion In such cases, whenever conventional imaging modalities such as intraoral periapical radiographs and orthopantomograms are unavailable, or where the use of ionizing radiation is a grave concern (especially in children and pregnant patients), ultrasonography offers a non-invasive diagnostic imaging method that helps in diagnosing the condition and supplements the clinical information, thereby aiding a better understanding of the underlying condition.
Introduction
Traumatic injury to the upper alveolus may result in apical displacement of the affected tooth/teeth into the underlying alveolar bone. The tooth, while being driven into the socket under the upwardly directed impact force, usually causes a crushing fracture of the alveolar socket bone [1]. The tooth may also be displaced through the labial plate of bone or may even impinge upon the bud of the permanent tooth [2,3]. We present a case of tooth intrusion due to bull horn injury and its imaging features on ultrasound and CT scan.
Case report
A six-year-old boy was brought to the emergency outpatient department by his parents with a complaint of injury to his mouth after being hit by a bull (by its horns) while playing in a grassland 4-5 h earlier. The child fell after being hit, but there was no loss of consciousness. The parents gave a history of bleeding from the wound site, which had decreased after they applied a pressure bandage at the site. The parents also reported a single episode of vomiting about an hour earlier. At the time of presentation, the boy was conscious and cooperative but in severe pain. His vitals were normal. Local examination revealed a contusion injury to the chin, a lacerated lower lip, and missing upper central incisor teeth. No injury to the upper lip was noted. The parents did not give any history of previous tooth extraction or shedding, nor did they report finding the teeth at the site of injury. Mouth opening was normal. Intraoral examination revealed multiple small lacerations of the gingival tissue and bleeding from the tooth sockets of the upper incisors. A midline fracture of the primary palate was also suspected on palpation. No significant occlusal derangement could be observed. Vestibular examination could not confirm the presence or absence of the teeth, and assessment was limited because the child did not allow adequate examination. Intramuscular analgesics were administered for pain relief along with tetanus toxoid and antibiotics; the lower lip laceration was sutured, and an antiseptic dressing was applied after wound debridement.
Intraoral periapical X-ray, orthopantomogram, and cone-beam CT were not immediately available, so it was decided to use ultrasound to check for any impacted foreign body or intruded teeth.
Ultrasound was performed using a high-frequency (10-13 MHz) linear probe placed transversely across the nasolabial sulci. It revealed two linear echogenic structures with posterior acoustic shadowing in the upper part of the gingiva (above the level of the cervical aspect of the normally positioned adjacent teeth), suggestive of tooth intrusion (Fig. 1). As there was a history of vomiting, a non-contrast CT scan was subsequently performed to rule out intracranial trauma.
No brain injury was seen; however, CT revealed intrusion of the primary central incisors (Fig. 2a) causing a non-displaced fracture of the overlying alveolus, which was well depicted on volume-rendered images (Fig. 2b). The displacement was approximately 7 mm from the cervical aspect of the adjacent tooth crown. The teeth had well-developed roots with their apices displaced towards the palate. The permanent tooth buds could not be adequately visualized due to possible overlap by the intruded primary teeth. A fracture of the hard palate was also seen (Fig. 2c).
The palatal displacement of primary teeth towards the developing successors mandated extraction of the intruded teeth so as to avoid interference with future eruption of permanent teeth. Fracture of the hard palate was managed conservatively.
Discussion
The teeth most commonly involved in dental trauma in children are the maxillary anteriors, with 6 to 12 years of age being the most common age group in which tooth intrusion is seen [1][2][3]. Tooth intrusion usually involves a single dental element. Common etiologic causes are injuries, falls, sports, violence, and traffic accidents [1,[3][4][5]. Traumatic intrusion due to injury by animal(s) is rarely described and is more commonly seen in less developed areas, particularly in rural settings where man-animal encounters are frequent [6]. An intrusion of 1 mm to 8 mm is seen in most cases [7]. Surgical repositioning of the intruded teeth, orthodontic extrusion, waiting for spontaneous eruption, or extraction of the teeth are some of the treatment modalities, the choice of which depends upon the type of tooth, its degree of displacement, and the future prognosis [5,7].
When the affected tooth cannot be detected in its socket or recovered from the accident venue, the approach should be to rule out aspiration, ingestion, or intrusion of the missing tooth [1,2]. Potential complications of an intruded tooth that also need to be considered are impaction into a sinus cavity, commonly the maxillary sinus, although the literature has also documented a case of intrusion into the frontal sinus [2,5,7]. Dislodgement of the intruded tooth into the respiratory tract can cause life-threatening airway obstruction or lead to lung abscess [8]. Aspiration or ingestion of the tooth or its fragment should also receive adequate evaluation. An aspirated tooth may lead to symptoms such as cough, breathing difficulty, and fever, and a chest radiograph may be necessary to rule it out [5,[7][8][9]].
An ingested tooth may safely pass through the gastrointestinal tract or may lead to obstruction of the GI tract, perforation, bleeding, or sepsis. Abdominal radiographs may be necessary at routine intervals, with appropriate follow-up and stool examination, to ensure the tooth has been safely passed [5].
A CT scan plays an important role in determining the exact position of the intruded teeth, the nature of the tooth displacement, and any associated fracture [4,5]. On the other hand, as described in this case, ultrasound evaluation in this area of maxillofacial trauma may open up further avenues where this non-invasive technique could be of direct or supplemental diagnostic importance. It avoids the radiation exposure associated with other diagnostic modalities, in keeping with the principle of ALARA (as low as reasonably achievable) [10]. This offers an added benefit especially for young children, who commonly suffer the brunt of trauma to their primary teeth and who, upon initial presentation, may be brought to an emergency setting where teeth-specific X-ray equipment may not be readily available, particularly in rural areas of less developed regions. Enamel hypoplasia/hypocalcification is the most common sequela of tooth intrusion and is diagnosed at the time of eruption [7]. Eruption disturbances, root dilaceration, and space loss are some of the other sequelae that may occur.
Conclusion
In cases of trauma to the oral region where teeth are missing, especially in young children, tooth intrusion should be included in the differential diagnosis alongside other diagnoses such as tooth avulsion. In such cases, ultrasound is a non-invasive diagnostic imaging modality, in contrast to conventional two-dimensional radiographs such as intraoral periapical views and orthopantomograms, which involve ionising radiation; it thereby helps in better understanding the underlying condition. Hence, due recognition must be accorded to the alternative use of sonography, which does not use harmful ionising radiation, especially for young children or pregnant patients, in whom X-ray irradiation may be of grave concern.
Authors' contributions
Authors contributed as follows to the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; and drafting the work or revising it critically for important intellectual content: Rohan B contributed 55%, MB 25%, and Rohit B 20%. All authors approved the version to be published and agreed to be accountable for all aspects of the work.
|
2020-01-16T09:11:28.676Z
|
2020-01-08T00:00:00.000
|
{
"year": 2020,
"sha1": "74ec5350bba2ab94abc1c6ce39aec82d0c288de2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.afjem.2019.12.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "556e5f5e77af42ee0f7cd078e1be0474a8bd5137",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
232192863
|
pes2o/s2orc
|
v3-fos-license
|
Chromosome banding analysis and genomic microarrays are both useful but not equivalent methods for genomic complexity risk stratification in chronic lymphocytic leukemia patients
Genome complexity has been associated with poor outcome in patients with chronic lymphocytic leukemia (CLL). Previous cooperative studies established five abnormalities as the cut-off that best predicts an adverse evolution by chromosome banding analysis (CBA) and genomic microarrays (GM). However, data comparing risk stratification by both methods are scarce. Herein, we assessed a cohort of 340 untreated CLL patients highly enriched in cases with complex karyotype (CK) (46.5%) with parallel CBA and GM studies. Abnormalities found by both techniques were compared. Prognostic stratification in three risk groups based on genomic complexity (0-2, 3-4 and ≥5 abnormalities) was also analyzed. No significant differences in the percentage of patients in each group were detected, but only a moderate agreement was observed between methods when focusing on individual cases (κ=0.507; P<0.001). Discordant classification was obtained in 100 patients (29.4%), including 3% classified in opposite risk groups. Most discrepancies were technique-dependent and no greater correlation in the number of abnormalities was achieved when different filtering strategies were applied for GM. Nonetheless, both methods showed a similar concordance index for prediction of time to first treatment (TTFT) (CBA: 0.67 vs. GM: 0.65) and overall survival (CBA: 0.55 vs. GM: 0.57). High complexity maintained its significance in the multivariate analysis for TTFT including TP53 and IGHV status when defined by CBA (hazard ratio [HR] 3.23; P<0.001) and GM (HR 2.74; P<0.001). Our findings suggest that both methods are useful but not equivalent for risk stratification of CLL patients. Validation studies are needed to establish the prognostic value of genome complexity based on GM data in future prospective studies.
Patient cohort
Patients were diagnosed between 1983 and 2018 according to current guidelines. [1][2][3][4] Clinical information collected at diagnosis included demographics (age and gender), Binet stage, genetic and molecular data. Regarding information on evolution, dates of treatment administration and last follow-up were collected. Of note, data from CBA or GM of some patients have been included in previous publications although they were not used with the same purpose as the present study. [5][6][7][8][9][10][11][12][13]
Balanced rearrangements included translocations and inversions, while chromosome additions, duplications, insertions, isochromosomes, as well as derivative, dicentric, ring and marker chromosomes were considered unbalanced rearrangements and were counted as one aberration.
Interphase fluorescence in situ hybridization (FISH) results were available in 320/340 (94.1%) cases using probes for the chromosomal regions 13q14, 11q22 (ATM) and 17p13 (TP53) and the centromere of chromosome 12 (CEP 12). In five cases, whole chromosome painting was performed in order to study the discrepancies observed between CBA and GM.
Genomic microarray analyses
Genomic microarray data were either already available or were obtained from DNA extracted within one year of the date of CBA in order to avoid the emergence of additional abnormalities (median time from CBA to GM = 0 months; range: 0-12). GM were assessed on DNA from whole PB (n=113; 33%), PB mononuclear cells (n=63; 19%), PB CD19+ purified cells (n=110; 32%), or BM samples (n=54; 16%). Only DNA that fulfilled the required quality controls was amplified, labelled, and hybridized to the different genomic microarray platforms according to the manufacturers' protocols.
Obtained data were visually reviewed, and copy number variants listed as benign polymorphisms in the Database of Genomic Variants (http://dgv.tcag.ca/dgv/app/home) were excluded. For defining genome coordinates, annotations of genome version GRCh37/hg19 were used. Chromothripsis-like and chromothripsis patterns were defined by the presence of ≥7 and ≥10 oscillating switches, respectively, between two or three copy number states on an individual chromosome. 7,8,15 Although the objectives of the study did not include the analysis of copy-number neutral loss of heterozygosity (CN-LOH), in those cases in which the microarray platform included single nucleotide polymorphism (SNP) probes, a global screening for CN-LOH was performed. CN-LOH was recorded when detected in a region larger than 10 Mb and extending to a chromosome telomere. These regions were not included in the counting of CNAs.
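The oscillating-switch rule for chromothripsis(-like) patterns can be expressed compactly in code; the following R sketch is purely illustrative (the segment states are invented, and real calls would come from the segmented microarray data):

    # Hypothetical ordered copy-number states of consecutive segments on one chromosome
    cn_states <- c(2, 1, 2, 1, 2, 3, 2, 1, 2, 1, 2)

    # Count switches between adjacent segments, requiring two or three distinct states
    count_switches <- function(states) {
      if (length(unique(states)) > 3) return(NA_integer_)
      sum(diff(states) != 0)
    }

    n_switch <- count_switches(cn_states)
    if (!is.na(n_switch) && n_switch >= 10) {
      cat("chromothripsis pattern (>=10 oscillating switches)\n")
    } else if (!is.na(n_switch) && n_switch >= 7) {
      cat("chromothripsis-like pattern (>=7 oscillating switches)\n")
    }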
TP53 mutation analysis
A total of 308 (90.6%) cases were screened for TP53 mutations. For the assessment of TP53 mutations, exons 4-8 were sequenced (exons 9-10 were also included in some centers) following ERIC recommendations. 16 Sixty (19.5%) cases were screened by Sanger sequencing, whereas the remaining cases (n=248; 80.5%) were analyzed by next-generation sequencing. Only mutations with a variant allele frequency >10% were considered.
IGHV mutational analysis
IGHV mutational status was analyzed in 307 (90.3%) patients following established international guidelines. 17 Sequences were examined and interpreted using the IMGT database and the IMGT/V-QUEST tool. Clonotypic IGHV gene sequences with <98% germline identity were defined as mutated (M-IGHV) whereas those with ≥98% identity were classified as unmutated (U-IGHV).
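This germline-identity cut-off translates directly into code; a small hedged R sketch follows (the identity values are illustrative, e.g. as reported by IMGT/V-QUEST):

    # Classify clonotypic IGHV sequences by percent germline identity
    classify_ighv <- function(identity_pct) {
      ifelse(identity_pct >= 98, "U-IGHV (unmutated)", "M-IGHV (mutated)")
    }

    classify_ighv(c(100, 98.0, 97.9, 94.3))
    #> "U-IGHV (unmutated)" "U-IGHV (unmutated)" "M-IGHV (mutated)" "M-IGHV (mutated)"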
Statistical analyses
As different European centers were involved in the present study, we evaluated the homogeneity of the results in terms of time to first treatment (TTFT) before performing the survival analyses. We found that in three institutions, TTFT in the non-CK group was notably shorter than previously reported in other studies 11
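The survival analyses reported in the Abstract (hazard ratios from multivariate models and concordance indices for TTFT) could be set up along the following lines; this R sketch uses simulated data and hypothetical variable names, not the study data:

    library(survival)

    set.seed(3)
    # Simulated follow-up data: one row per patient
    df <- data.frame(
      ttft       = rexp(340, rate = 0.02),   # months to first treatment/censoring
      treated    = rbinom(340, 1, 0.6),      # 1 = treated, 0 = censored
      high_cplx  = rbinom(340, 1, 0.25),     # 1 = high complexity (>=5 abnormalities)
      tp53_abn   = rbinom(340, 1, 0.15),
      ighv_unmut = rbinom(340, 1, 0.50)
    )

    # Multivariate Cox model for TTFT including TP53 and IGHV status
    fit <- coxph(Surv(ttft, treated) ~ high_cplx + tp53_abn + ighv_unmut, data = df)
    summary(fit)               # hazard ratios with 95% CI and p values

    # Harrell's concordance index for prediction of TTFT
    summary(fit)$concordance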
Risk stratification of the genomic complexity observed by CBA and GM
Regarding CBA, when results obtained with each mitogen were considered separately, those cases stimulated with IL-2+DSP30 exhibited a higher proportion of complex cases. Significant differences were observed in the percentage of patients classified
Number and type of abnormalities detected by CBA and GM
Regions with CN-LOH were detected in 23 (7.5%) patients, as the microarray platform used in 306 cases also contained SNP probes. The median size of CN-LOH was 50.1 Mb (range: 11.9-159 Mb), and these regions were found on several chromosomes. Notably, two of the three cases with CN-LOH affecting the 17p arm and the only case with CN-LOH involving the ATM gene had mutations in TP53 and ATM, respectively. Nevertheless, CN-LOH data were not included in the analyses.

[Supplementary Table S1: genomic microarray platforms used in this study; CNAs highlighted in grey were non-classical CLL abnormalities smaller than 5 Mb.]

[Supplementary figure legend: (A) whole chromosome painting probes (red) and for chromosome 13 (green) on the left image, and for chromosomes 12 (green) and 18 (red) on the right image; FISH revealed that chromosomes apparently lost in the karyotype appeared to be fragmented, either constituting the additional material of other chromosomes or being part of marker chromosomes. (B) Five aberrations were detected by CBA while only gain of chromosome 12 was detected by GM; FISH with chromosome painting probes for chromosomes 4 (red) and 7 (green) showed both chromosomes present in the analyzed metaphases but fragmented (chr. 7) or considered as marker chromosomes (chr. 4). Chromosomes were stained with DAPI.]
|
2021-03-12T06:16:03.751Z
|
2021-03-11T00:00:00.000
|
{
"year": 2021,
"sha1": "5736eb9e1dfa5d901ce7457e9e760247ef4b084b",
"oa_license": "CCBYNC",
"oa_url": "https://haematologica.org/article/download/haematol.2020.274456/73081",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c4ec36720154426497578219401c89dd096d78c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257970828
|
pes2o/s2orc
|
v3-fos-license
|
Fetal to Neonatal Heart Rate Transition during Normal Vaginal Deliveries: A Prospective Observational Study
Documentation of fetal to neonatal heart rate (HR) transition is limited. The aim of the current study was to describe HR changes from one hour before to one hour after normal vaginal deliveries. We conducted a prospective observational cohort study in Tanzania from 1 October 2020 to 30 August 2021, including normal vaginal deliveries with normal neonatal outcomes. HR was continuously recorded from one hour before to one hour after delivery, using the Moyo fetal HR meter, NeoBeat newborn HR meter, and the Liveborn Application for data storage. The median, 25th, and 75th HR percentiles were constructed. Overall, 305 deliveries were included. Median (interquartile range; IQR) gestational age was 39 (38–40) weeks and birthweight was 3200 (3000–3500) grams. HR decreased slightly during the last 60 min before delivery from 136 (123,145) to 132 (112,143) beats/min. After delivery, HR increased within one minute to 168 (143,183) beats/min, before decreasing to around 136 (127,149) beats/min at 60 min after delivery. The drop in HR in the last hour of delivery reflects strong contractions and pushing. The rapid increase in initial neonatal HR reflects an effort to establish spontaneous breathing.
Introduction
Most of the 140 million deliveries that occur globally every year are normal vaginal deliveries, and the majority take place with no identifiable complications to the mother, the baby, or both at the onset of labor [1]. However, half of stillbirths and three-quarters of neonatal deaths are reported to occur during labor and in the initial hours of life, and many of these deaths are attributed to poor monitoring during labor [2]; complications that arise in this period are not always predictable. For this reason, improving the quality of labor care, including heart rate (HR) monitoring, has been recommended as an important strategy to prevent deaths that occur around the time of delivery [2].
During the labor and delivery process, heart rate is commonly monitored to identify fetal and neonatal risks of hypoxia. Fetal heart rate (FHR) is frequently measured for the surveillance of fetal well-being before delivery [3]. Similarly, HR is immediately measured after delivery to assess neonatal well-being and is used to establish the need for and/or guide neonatal resuscitation [4].
Fetal heartbeats can be detected and monitored by ultrasound as early as five weeks of pregnancy [5]. Before 6 weeks, FHR is 100-115 beats per minute (bpm), after which it increases and peaks at 8 weeks to 144-159 bpm; by 9 weeks, it plateaus at 137-144 bpm [5]. Normal baseline FHR decreases slightly toward the end of pregnancy [6][7][8].
The pace of HR is controlled by the autonomic nervous system, baroreceptors, and chemoreceptors [9]. Some of the factors influencing FHR include sleep-wake patterns, breathing movements, medications, painful stimuli, sound and vibrations, and temperature. Moreover, maternal conditions during pregnancy, such as infections, may influence the fetal heart rate [10]. Drugs such as oxytocin, which are sometimes used for labor induction or augmentation, influence FHR by causing excessive uterine contractions [11]. As such, fetal HR during pregnancy and labor is influenced by different factors, making a description of a normal trace important in order to evaluate it and take clinical action if needed.
Guidelines derived from different studies and professional bodies are in agreement on recommending a baseline FHR of 110-160 beats per minute (bpm) during pregnancy and delivery [12]. However, during delivery, the first and second stages of labor are different in terms of physiology. During the second stage, physiological mechanisms such as frequent contractions and maternal expulsive efforts are dominant, and these may derange baseline FHR due to the compromise of fetal oxygenation [13]. In the second stage of labor during active pushing, the World Health Organization recommends fetal HR measurement every 5 min and through a contraction.
The set baseline FHR of 110-160 bpm needs further investigation, especially during the second stage of labor. A few older and more recent studies have attempted to describe baseline HR changes that occur during the transition at normal delivery [14,15]. An early study described wide fluctuations in HR during the transition at delivery [14]. Additionally, a recent study evaluated FHR close to the time of delivery and showed that baseline FHR decreases significantly toward the time of delivery [15]. However, the latter study did not take the mode of delivery into account, even though changes in FHR are known to influence the mode of delivery [16]. There is a need to specifically describe how FHR changes during the second stage up to the moment of delivery in normal vaginal deliveries. This might enable healthcare providers to make confident decisions based on divergence from the expected normal FHR.
Studies describing HR changes just after birth have demonstrated that a peak HR is reached within one minute [17,18]. Factors such as mode of delivery, sleep-wake patterns, skin-to-skin care, gender, and body temperature have been shown to influence initial neonatal HR [19]. It has further been shown that the HR of neonates born vaginally is significantly higher up to one hour after birth than that of neonates born by Cesarean section [20]. No studies have documented the continuous HR course during the second stage up to delivery and from immediately after delivery up to one hour. This is important for further understanding the normal HR transition, as it may help care providers distinguish neonates who need resuscitation at delivery from those who do not.
It is estimated that up to 5% of babies born at term may need assistance during the transition at birth. Healthcare providers in delivery rooms are sometimes faced with the dilemma of recognizing which babies need assistance and which do not, which may lead to unnecessary intervention or to delay in necessary intervention. As HR is the most commonly used parameter to monitor fetal and neonatal well-being during the transition, it is important to further understand its course immediately before and after delivery. With increased knowledge of the HR transition, healthcare providers will be in a better position to recognize babies who need assistance and those who do not. This will help avoid unnecessary interventions and their associated complications while at the same time avoiding delays in necessary intervention.
Therefore, the aim of this study is to describe the fetal-to-neonatal HR transition from one hour before to one hour after normal delivery in a cohort of uncomplicated vaginal deliveries with normal neonatal outcomes.
Settings
This study was part of the Safer Births [1] project, which aims to improve perinatal survival by gaining new knowledge and developing innovative products to improve care during childbirth. The study was conducted in the labor ward of Haydom Lutheran Hospital, a regional referral hospital in Tanzania. The labor ward provides comprehensive emergency obstetrics and neonatal care, and approximately 4000 deliveries are attended per year. Normal deliveries are mainly attended by midwives, backed up by registrars (medical doctors) and specialists (obstetricians). Fetal heart rate during labor is measured using a fetoscope or a doppler monitor called Moyo (Laerdal Global Health AS, Stavanger, Norway). Neonatal HR is measured and monitored using NeoBeat (Laerdal Global Health AS, Stavanger, Norway). Neonates under routine care stay with their mothers and, after one hour, both are transferred to the postnatal ward.
Participants' Recruitment
The inclusion criteria were normal vaginal deliveries with neonatal outcomes of an Apgar score more than seven at five minutes and not ventilated. As the study aimed to describe HR transition in normal vaginal deliveries, we excluded stillbirths; gestational age below 37 weeks; multiple pregnancies; and those who had been induced, arrived late in the second stage of labor, undergone Cesarean sections, experienced cord prolapse, and suffered severe maternal bleeding. The recruitment of participants was carried out during admission to the labor ward, whereby the participants received information about the study and were asked for written consent by the admitting midwife. Upon consenting, the participants were enrolled into the study.
Data Collection
Quantitative data from the enrolled mothers and neonates were collected using a data collection form adapted from the Safer Births case report file. Trained research assistants filled in the required information through direct observation of deliveries and from the partogram. Variables included maternal age, gravidity, gestational age, birth weight, and Apgar score. HR data were collected using the Moyo FHR doppler monitor [2] and the NeoBeat neonatal HR meter [3] (Laerdal Global Health AS, Stavanger, Norway). The devices are shown in Figure 1.
Moyo was strapped to the mother's abdomen throughout the second stage of labor until delivery. The second stage of labor is diagnosed once the cervix becomes fully dilated and ends with the delivery of the neonate, according to the World Health Organization. FHR was monitored and recorded continuously during the second stage. The serial number of the Moyo and the date and time of the monitor were recorded by the research assistants. After delivery, the research assistant extracted FHR Doppler signal data from Moyo using the tablet-based Liveborn application (Laerdal Global Health AS, Stavanger, Norway). The Liveborn application is a research tool used for live observations of neonatal care during the first minutes after birth, and the research assistants were trained in its use.
Within a minute after delivery, neonates were placed on the mother's abdomen and dried. NeoBeat was placed on the chest before the cord was cut. A research assistant linked the two devices, which automatically transferred HR electrocardiography (ECG) signal data to the Liveborn application through a Bluetooth connection. The matched HR signal data from Moyo and NeoBeat in the Liveborn application were then uploaded to the Liveborn server.
Data Processing and Analysis
The dataset used in this work was drawn from the Liveborn database and collected between 1 October 2020 and 30 August 2021. The FHR data from Moyo were cleaned using a framework previously proposed by Urdal et al. [15]. Segments shorter than 30 s were removed, as such short segments were considered unlikely to reflect the fetal HR and more likely to reflect the maternal HR. More information on the method and the chosen segment length can be found in the article by Urdal et al. [15].
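To make this cleaning step concrete, the sketch below splits a Doppler recording into contiguous segments and drops those shorter than 30 s. This is only a minimal illustration of the idea, not the actual framework of Urdal et al.; the sample layout and the 5-s gap threshold used to define segment boundaries are assumptions.

```python
import numpy as np

def clean_fhr(timestamps_s, bpm, max_gap_s=5.0, min_segment_s=30.0):
    """Drop FHR segments shorter than min_segment_s (likely maternal HR).

    timestamps_s : 1-D array of sample times in seconds
    bpm          : 1-D array of heart rate values, same length
    """
    t = np.asarray(timestamps_s, dtype=float)
    hr = np.asarray(bpm, dtype=float)
    # A new segment starts wherever consecutive samples are separated
    # by more than max_gap_s seconds.
    breaks = np.where(np.diff(t) > max_gap_s)[0] + 1
    kept_t, kept_hr = [], []
    for seg_t, seg_hr in zip(np.split(t, breaks), np.split(hr, breaks)):
        if seg_t[-1] - seg_t[0] >= min_segment_s:  # keep segments of >= 30 s
            kept_t.append(seg_t)
            kept_hr.append(seg_hr)
    if not kept_t:
        return np.array([]), np.array([])
    return np.concatenate(kept_t), np.concatenate(kept_hr)
```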
The data analysis was performed using MATLAB 2021a, and the statistical analysis was performed in RStudio 2022.02.2+485 [21]. To illustrate the HR trend before and after birth, the median and the 25th and 75th percentiles were calculated. To achieve high resolution in the analyses, we calculated the median and percentiles from the HR observations in every 15-s interval. As a normal distribution of the FHR and neonatal HR cannot be assumed, the Wilcoxon rank test was used to determine whether statistically significant changes in HR occurred. The characteristics of the included mothers and babies are presented using medians and interquartile ranges (IQR).
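Although the original analysis was run in MATLAB and R, the same summary can be sketched in Python as below. The binning into 15-s intervals and the paired signed-rank comparison of two time points are assumptions about how the description above maps to code; the function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

def hr_trend(times_s, bpm, bin_s=15.0):
    """Median and 25th/75th percentiles of HR per 15-s interval."""
    times_s, bpm = np.asarray(times_s), np.asarray(bpm)
    bins = np.floor(times_s / bin_s).astype(int)
    trend = {}
    for b in np.unique(bins):
        q25, med, q75 = np.percentile(bpm[bins == b], [25, 50, 75])
        trend[b * bin_s] = (q25, med, q75)
    return trend

# Paired comparison of HR across neonates at two time points,
# e.g. 1 min vs. 10 min after delivery (one value per subject):
# stat, p = wilcoxon(hr_at_1_min, hr_at_10_min)
```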
Ethical Clearance
The study received ethical clearance from the National Institute for Medical Research (NIMR) in Tanzania (reference number NIMR/HQ/R.8a/Vol. IX/3036) and the Regional Committee for Medical and Health Research Ethics, Western Norway (REK Vest reference number 2018-2408). The devices employed in the study are in routine use during deliveries at the hospital. Voluntary written informed consent was obtained from mothers during admission to labor. Care during delivery was provided according to hospital guidelines, and mothers were treated equally regardless of consent status. For confidentiality purposes, all data were de-identified.
Results
A total of 3659 deliveries occurred during the study period, and 2205 were normal vaginal deliveries with normal neonatal outcomes. After the exclusions, 305 babies with HR of good signal quality were included, as shown in Figure 2.
The median maternal age was 24 (20-30) years, the median gravidity was 2 (1-4), and the median gestational age was 39 (38-40) weeks. Neonates had a median birth weight of 3200 (3000-3500) grams and median Apgar scores of 9 and 10 at the first and fifth minutes, respectively (Table 1).

The number of individual HR observations varied at different time intervals. The maximum was 198 individual observations recorded at 10 min before delivery and 303 recorded 3 min after delivery (Table 2).

The median time from the last FHR to delivery was 0.0 s (0.0, 15.0), and the median time from delivery to the first neonatal HR was 49.0 s (33.0, 71.0). A significant change in median HR was found from immediately before delivery (132) until 1 min after delivery (168) (p < 0.05). There was a significant drop in neonatal HR from 1 min (168) to 10 min (153) after delivery (p < 0.05), as shown in Figure 3. A slight decrease in median FHR was observed in the last 60 min before delivery (from 136 to 131) (p = 0.4844) (Figure 4). The neonatal HR dropped from 168 at 1 min to 136 at 60 min after delivery (p < 0.05), as shown in Figure 4.
Discussion
In this study, we measured the HR of babies from one hour before to one hour after normal vaginal deliveries. The results indicate that FHR decreases in the last hour before delivery from a median of 136 to 131 bpm. This baseline heart rate falls within the established normal range (110-160), as documented in various guidelines [12]. In addition, the presented baseline FHR findings fall within category I in the National Institute of Child Health and Human Development classification system, which is strongly predictive of normal acid-base status when other parameters in the same category are normal [22]. This finding is expected considering that we only included uncomplicated normal deliveries in the study.
The slight decrease in FHR is explained by the physiological mechanisms of normal delivery, such as contractions, which reduce uteroplacental perfusion [9,23] and cause transient fetal hypoxia. Additionally, maternal expulsive efforts and lying in a supine position frequently or constantly during pushing impair maternal breathing and blood flow toward the uterus [13]. These events, together or in isolation, impair fetal oxygenation, temporarily decreasing FHR, as shown by our results. The findings differ from those reported by Urdal et al. [15], who observed a significant decrease in FHR. Whereas we investigated only uncomplicated normal deliveries, Urdal et al. analyzed normal and complicated deliveries together, which likely explains the difference.
In this study, all the neonates had normal outcomes, implying that they were able to withstand the stress of the last hour of labor. A healthy term fetus with a normally developed placenta is able to overcome transient hypoxia by activating the peripheral chemoreflex. This results in a prioritization of oxygenated blood to critical organs such as the heart, brain, and adrenals. Provided there is adequate time for placental and fetal reperfusion between contractions, the fetus is able to withstand intermittent hypoxia [13].
Clinical guidelines recommend no intervention in the case of a decrease in FHR during the second stage of labor, as long as the baseline remains within 110-160 bpm and other parameters, such as variability and late decelerations, follow a normal physiological pattern. The interpretation should be made in relation to the physiology of the second stage of labor, which differs from that of the first stage. When the FHR falls below 110 bpm, even in the second stage, it is important to be cautious of fetal hypoxia and to rule out causes of bradycardia such as uterine rupture, placental abruption, or umbilical cord prolapse. Once these life-threatening causes are ruled out, the decrease is likely physiological and temporary, and a normalization of FHR should be expected after a short time. With these considerations, the risk of severe fetal hypoxia is less likely, and good outcomes should be expected in such labor. This knowledge can prevent unnecessary interventions during labor and, hence, reduce the risk of complications.
Our study further showed that neonatal HR increases significantly within one minute after delivery, to 168 bpm, and then decreases significantly within one hour after delivery, to 136 bpm. To our knowledge, this is the first study offering insight into how HR transitions from one hour before to one hour after normal delivery.
The first HR was recorded at a median of 49 s after delivery, and it reached a peak just before one minute. This concurs with findings from previous studies describing a rapid increase in normal neonatal HR after delivery, reaching a peak within the first minute [17,18]. The neonates included in our study started spontaneous air-breathing immediately after delivery, and this increased workload may partly explain the rapid initial increase in HR.
The results demonstrate that HR slowly decreased from its peak after one minute, with the trend continuing for 60 min, at which point HR was the same as 60 min before birth. Our results concur with a pulse oximetry-based study, which reported a similar decreasing HR trend up to one hour after birth [20]. The decrease can be explained by reduced stress after the neonate establishes spontaneous air breathing and skin-to-skin care is maintained. In research studies, neonates placed on the bare chest of their mother after delivery have been found to have lower cortisol levels at 60 min, and this likely reflects a reduced stress response and an associated reduced sympathetic drive of heart rate. In addition, skin-to-skin care calms the neonate and maintains their body temperature. Neonates kept skin-to-skin usually demonstrate a lower heart rate and respiratory rate, reflecting reduced stress.
Other factors that have been reported to influence initial neonatal HR include sleep-wake state, sex, and body temperature [19]. In addition, the timing of cord clamping has been reported to affect initial neonatal HR: one study showed that neonates with delayed cord clamping had a lower HR than those with early cord clamping [24]. The neonates included in our study were in a sleep-wake state and received skin-to-skin care during the observations. Skin-to-skin care and delayed cord clamping are standards of care for normal vaginal deliveries in our study setting. These factors further underlie the HR decrease after the initial rapid increase. It should be noted that HR is not the only important sign to be monitored during the transition. Other parameters, such as breathing/respiration, color, and grimacing, are equally important to monitor for a smooth transition, as reflected in Apgar scoring. This is the first ECG-based study to show HR trends from the point of delivery up to 60 min after delivery. Previous ECG-based studies have demonstrated HR trends up to five minutes after delivery [17,18]. The use of dry-electrode ECG technology-based devices such as NeoBeat has made it possible to overcome challenges such as the prompt acquisition of HR signals immediately after delivery. This technology enables the quick detection of neonatal HR and hence may support decisions on resuscitative measures and further care [25].
Our study was limited by the uneven availability of data across time points, which reduced the number of inclusions. Measurement bias might have occurred, for example, if Moyo recorded maternal HR instead of FHR; however, we excluded recordings that were suggestive of maternal HR. A mismatch of Moyo and NeoBeat data due to interrupted Bluetooth connections during signal data extraction contributed to the loss of data. The first measured HR occurred at 49 s after birth, which is later than in other studies that utilized NeoBeat and were able to measure the first HR as early as 3 s after delivery. However, the peak HR detected within one minute was similar to that of other studies using the same technology.
Conclusions
During the normal transition from intrauterine to extrauterine life, HR undergoes significant changes. The slight drop in FHR towards the time of delivery reflects strong uterine contractions and maternal pushing during the second stage. The rapid initial increase in neonatal HR reflects a stress response to the extrauterine environment after delivery and/or increased metabolic work due to the onset of spontaneous breathing.

Data Availability Statement: Datasets may be available upon reasonable request to the corresponding author.
Evaluation of raw water quality in Wassit governorate by Canadian water quality index
In this work, an attempt has been made to evaluate the raw water quality in Wassit Governorate using the Canadian Council of Ministers of the Environment Water Quality Index (CCME WQI). Six stations along the Tigris River were selected, and field work was conducted over one year, from October 2015 to September 2016, to collect the data. Twelve water parameters were used to evaluate the water quality index (pH, turbidity, total dissolved solids, total alkalinity, total hardness, nitrate, calcium, sodium, potassium, magnesium, chloride, and sulfate). Based on the CCME WQI results, the raw water quality in Wassit Governorate ranged between 65 and 79, which means that the river is not in good condition and needs to be managed, with monitoring to control the sources of pollutants and keep the water in its natural condition.
Introduction
One of the essential parts of any environmental monitoring program is the reporting of results to both the general public and managers. This task is complex in water quality monitoring because of the large number of measured variables that require analysis. One suggested solution to this problem is to employ a water quality index, which arithmetically combines all measured water quality variables and provides a generally understood description of the water. This index is a good tool for ranking the suitability of water for use by humans, wildlife, aquatic life, etc. [1]. The water quality index was created by the Canadian Council of Ministers of the Environment (CCME) in 1997. The index value can range between 0 and 100, with increasing values indicating increasing water quality: a CCME WQI value between 95 and 100 means that the water quality is excellent, while values between 0 and 44 mean that the water quality is poor. In Iraq, many attempts have been made to evaluate the water quality index for both the Tigris and Euphrates rivers at different locations. Shaymma and Ayad (2012) studied the water quality index for the Euphrates River from Al-Qaim to Al-Qurna for both drinking and irrigation water; the results showed that the water quality for overall drinking and irrigation uses in the period from April 1998 to April 2001 was mostly rated between good and very poor [2]. Alhassan (2013) made an attempt to calculate the overall water quality index for the Tigris River as a case study, using six water parameters in Baghdad city; the results indicated that the Tigris water quality ranged between poor and slightly polluted [3]. Layla (2015) used the CCME WQI to evaluate the water quality index of three adjacent water treatment stations on the Shat Al-Hila River; the results showed that all evaluated parameters of the treated water were within the Iraqi standard except turbidity, and that the water quality was good [4]. Salam (2016) studied the water quality of the Al-Gharraf River in the south of Iraq using the CCME WQI; the results showed that the water of Al-Gharraf was classified as poor for aquatic life and fair for irrigation, with seasonal overall WQI values of 30-39 and values among stations of 29-38.
The objective of the present research is to evaluate the raw water quality of the Tigris River in Wassit Governorate using the CCME WQI.
Data and samples collection
To assess water quality conditions relative to the water quality objectives, twelve water parameters were tested over one year, from October 2015 to September 2016. Six locations along a 160 km stretch of the Tigris River within Wassit Governorate were used for taking samples. The sampling locations were distributed between Al-Swiara and Al-Kut City. Water samples were tested twice a month, and the average values for each parameter were recorded for the WQI evaluation.
Water quality index measurement
The selection of appropriate water quality variables is necessary to yield meaningful water quality index results and relies on the professional judgment of the observer. After collecting the data for the twelve raw water parameters in Wassit Governorate over one year, the water quality index was calculated using the CCME WQI. The index combines three factors (F1, F2, and F3); F1 and F2 are calculated directly, while F3 requires some additional steps.

F1 is the percentage of variables that failed to meet their objectives at least once during the period of observation (failed variables):

$$F_1 = \left( \frac{\text{number of failed variables}}{\text{total number of variables}} \right) \times 100 \quad (1)$$

F2 is the percentage of individual tests that did not meet the objectives (failed tests):

$$F_2 = \left( \frac{\text{number of failed tests}}{\text{total number of tests}} \right) \times 100 \quad (2)$$

F3 represents the amount by which the failed test values do not meet their objectives and is calculated from the normalized sum of excursions (nse). An excursion is the amount by which an individual failed test is out of compliance; when the test value must not exceed the objective, it is calculated as

$$\text{excursion}_i = \frac{\text{failed test value}_i}{\text{objective}_i} - 1 \quad (3)$$

and the inverted ratio is used when the test value falls below the objective. The normalized sum of excursions is obtained by dividing the sum of excursions by the total number of tests (those meeting and those not meeting their objectives):

$$nse = \frac{\sum_{i=1}^{n} \text{excursion}_i}{\text{number of tests}} \quad (4)$$

$$F_3 = \frac{nse}{0.01\, nse + 0.01} \quad (5)$$

After calculating the three factors (F1, F2, and F3), the index is obtained as

$$\text{CCME WQI} = 100 - \frac{\sqrt{F_1^2 + F_2^2 + F_3^2}}{1.732} \quad (6)$$

The CCME WQI results range between 0 and 100, as shown in Table 2.
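A minimal sketch of this calculation in Python is shown below, assuming each parameter has a single objective expressed as an upper limit (for parameters with lower limits the excursion ratio would be inverted). The function and variable names are hypothetical, and the category boundaries follow the standard CCME ranking referenced in Table 2.

```python
import math

def ccme_wqi(tests, objectives):
    """CCME WQI from {parameter: [measured values]} and {parameter: upper limit}."""
    failed_vars = [p for p in tests if any(x > objectives[p] for x in tests[p])]
    all_tests = [(p, x) for p in tests for x in tests[p]]
    failed_tests = [(p, x) for p, x in all_tests if x > objectives[p]]

    f1 = len(failed_vars) / len(tests) * 100       # scope
    f2 = len(failed_tests) / len(all_tests) * 100  # frequency

    # Amplitude: excursions of failed tests beyond an upper-limit objective
    excursions = [x / objectives[p] - 1 for p, x in failed_tests]
    nse = sum(excursions) / len(all_tests)
    f3 = nse / (0.01 * nse + 0.01)

    return 100 - math.sqrt(f1**2 + f2**2 + f3**2) / 1.732

def ccme_category(wqi):
    """Standard CCME ranking (excellent 95-100, ..., poor 0-44)."""
    if wqi >= 95: return "excellent"
    if wqi >= 80: return "good"
    if wqi >= 65: return "fair"
    if wqi >= 45: return "marginal"
    return "poor"
```

For the index values of 70.4-72.0 reported below, this classification returns "fair", consistent with the conclusions of the study.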
Results and discussion
The averages of some of the physical and chemical properties (turbidity, alkalinity, and calcium) at all six sampling locations during the study period were greater than the permissible levels recommended by the Iraqi standard for drinking water (greater than 5 NTU, 125 mg/L, and 100 mg/L, respectively). Figure 2 shows the variation in turbidity values during the study period at the two sampling locations in Kut City. The recorded sulfate values were greater than the recommended level (250 mg/L) at all locations during some specific months; Figure 3 shows the variation of sulfate at these two locations in Kut City. The results obtained by this investigation showed that the values of the other parameters were within the Iraqi standard (see the tables in Appendix A), while the permissible levels for all water parameters in the study are tabulated in Table 3. The values of the water quality index estimated by the CCME method for all locations are tabulated in Table 4 and range between 70.4 and 72.0.
The water quality of the Tigris River and its branches (Al-Djila and Ghraff) in Wassit Governorate falls in the fair class (65-79), which means that the river needs to be managed, with attention to controlling the sources of pollutants discharged into the river in order to keep it in its natural condition.
Conclusions
The water quality index is a useful tool for the assessment of the state of raw water and can be used in formulating pollution control strategies in terms of the required treatment.

The calculated water quality index results showed that the Tigris raw water quality in Wassit Governorate at all six sampling locations was in fair condition according to the CCME WQI categories.

The raw water of the Tigris River in Wassit Governorate needs to be monitored and managed well by the local authorities to keep it in its natural condition and prevent degradation of its properties.
Fig. 2. Variation of turbidity values at the two sampling locations in Kut City.

Figure 1 shows an aerial photo of the sampling locations within Wassit Governorate, while Table 1 shows the UTM coordinates of the sampling locations.

Table 1. The UTM coordinates of the sampling locations.

Table 2. Ranking of water quality categories according to the CCME results.

Table 3. Limits of some raw water parameters for Iraq, WHO, and CCME.

Table 4. Values of the CCME WQI for raw water in Wassit Governorate.

Table A1. Chemical test results for raw water at the Swiara city station.

Table A2. Chemical test results for raw water at the Noamania city station.

Table A3. Chemical test results for raw water at the Kut city station, section 1 (before the Kut Barrage).

Table A4. Chemical test results for raw water at the Kut city station, section 2 (after the Kut Barrage).

Table A5. Chemical test results for raw water at the Kut city station, Al-Dijialy section.

Table A6. Chemical test results for raw water at the Kut city station, Al-Ghraf section.
Factors of Changes in Waste Management in a Mountain Region of Southern Poland
The aim of the work was to analyse the changes in the effectiveness of municipal waste management over the period 2009-2015 in one of the largest counties in the mountainous region of southern Poland. Socio-demographic factors, as well as the changes resulting from the implementation of the provisions of Directive 1999/31/WE and Directive 2008/98/EC into Polish legislation, are considered. Over the period of seven years, there was a significant increase of 32% in the amount of municipal waste generated in the county, with a simultaneous increase in the number of inhabitants and a decrease in the number of individuals registered as unemployed. The largest amounts of non-selectively collected waste and the largest number of properties covered by the collection of municipal waste occurred before the changes in waste management. After the changes, however, the amount of the six types of selectively collected waste (paper and cardboard, glass, plastic, metal, bulky waste, and WEEE) increased, with glass accounting for a significant 40% share of the selectively collected waste. This may result from the changes in waste management. Nevertheless, over the whole research period, more than 80% of the waste was non-selectively collected, which may result from a lack of ecological awareness.
INTRODUCTION
On a global scale, it is important that all waste producers are covered by organized waste collection. The management of municipal solid waste is one of the tasks of local authorities in each country (Faroog and Meraj, 2016), as a result of which the local authorities require waste compositional information at the local level to plan, implement and monitor the waste management schemes that will enable them to meet their contribution to the national targets (Burnley, 2007).
Municipal solid waste (MSW) poses a huge challenge to local governments due to its continuous growth (Buenrostro and Bocco, 2001). Since mid-2013, local governments in Poland, in accordance with the implementation of Directive 1999/31/WE (WE L 182, 16.07.1999) and Directive 2008/98/EC (L 312, 22.11.2008) into Polish law, have been obliged to prevent the generation of MSW as far as is practicable.
They have also been required to increase the rate of waste recovery "at source" (Boas-Berg et al., 2018). While applying the waste hierarchy, European Union member states adopt measures to encourage solutions that yield the best environmental impact (Pomberger et al., 2017) to help protect natural resources and prevent environmental degradation (Gharfalkar et al., 2015). The need both to limit the amount of waste generated and to increase the levels of recovery requires selective collection in various settlement units.
The number of investigations conducted to identify the factors affecting the amount of waste generated is increasing and includes those conducted by Hekkert et al. (2008), Burnley (2007), Miliute-Plepiene and Plepys (2015), Talalaj and Walery (2015) and Liikanen et al. (2016). In the area of waste management, the mass accumulation of waste is determined based on rates. Data on waste accumulation per capita are widely used to compare the intensity of MSW generation in various locations (Kaseva and Moirana, 2010; Özbay, 2015). The accumulation of waste in the environment raises social awareness due to the problems caused by its growth (Mitsakas et al., 2017), including its further management. In addition to these rates, studies that take socio-economic factors into account are becoming more frequent (Philippe and Culot, 2009), because waste is a social, ecological and often aesthetic problem. Guerrero et al. (2013) considered that the factors influencing the efficiency of waste management include environmental, socio-cultural and institutional factors. Other factors influencing the amount of waste generated are the number of inhabitants, as well as the level of professional qualifications of the residents (Buenrostro and Bocco, 2001; Noori et al., 2009). Moreover, the type of residential housing, as well as the local infrastructure, are considered important factors in shaping the composition of waste (Den Boer et al., 2010), in addition to the geographical location. Other socio-demographic factors that are not directly related to waste management, such as the number of registered unemployed or internal and external migration, may also be analysed.
The aim of this work was to analyse the efficiency of the changes in municipal waste management for the period 2009-2015, in one of the largest counties in the Malopolska Voivodeship in southern Poland, considering socio-demographic factors.
MATERIALS AND METHODS
The analysis involved the use of qualitative and quantitative data on the municipal waste generated in the county, divided into selectively collected waste (paper and cardboard, glass, plastics, metal, bulky waste and WEEE) and non-selectively collected waste. The statistical measures used in the study were the average, minimum and maximum. A similar composition of the studied waste was reported by Guerrero et al. (2013); however, these authors did not take organic waste production into account. The research material consisted of annual data for the years 2009-2015 obtained through a questionnaire addressed to the twelve municipalities of the Limanowa County, covering: the amounts of the six types of selectively collected waste and of non-selectively collected waste; the number of inhabitants; and the properties covered by the collection of municipal waste (including uninhabited ones).
Data from Statistics Poland (2009-2016) were also used, including the number of registered unemployed and the internal and external migration within the Limanowa County, together with the results of field observations related to the collection and disposal of waste.
The data for the mass of waste used in this work for the period 2009-2015, which came from the individual communes, comprised approximately 25 data points each. On this basis, results representative of the entire county were elaborated, and the rates of the mass accumulation of municipal waste, including selectively collected waste, per year and per day were defined. Rates of waste accumulation per capita were also used in the studies by Talalaj and Walery (2015). The study refers to Directive 2008/98/EC, according to which selective collection is required and should include at least paper and cardboard, metal, plastic and glass waste.
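As a rough illustration, the per-capita accumulation rates used throughout the paper can be derived from the annual totals as sketched below. The function name and the example figures are taken only approximately from the results reported later; treat the snippet as an assumption about the arithmetic, not as the authors' actual computation.

```python
def accumulation_rates(total_waste_mg, inhabitants):
    """Per-capita waste accumulation rates from annual county totals.

    total_waste_mg : annual collected waste mass in Mg (tonnes)
    inhabitants    : number of inhabitants in the same year
    """
    kg_total = total_waste_mg * 1000.0        # Mg -> kg
    per_capita_year = kg_total / inhabitants  # kg per capita per year
    per_capita_day = per_capita_year / 365.0  # kg per capita per day
    return per_capita_year, per_capita_day

# Example: ~13,404.7 Mg collected in 2012 with ~129,000 inhabitants
# gives roughly 103.9 kg per capita per year and 0.28 kg per day,
# close to the peak values reported in the Results section.
year_rate, day_rate = accumulation_rates(13404.7, 129000)
```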
The county is the sixth largest in the Małopolskie Voivodship and ranks one hundred and forty-second by size in Poland. The county's economy is dominated by industry and construction (55.1%). Agriculture, forestry, hunting and fishing have the lowest share (1.1%), given the presence of two cities in the area (2009-2016). The Limanowa County is a typically mountainous region at 400-580 m a.s.l., located in the Western Carpathians. The greater part of this area is occupied by the Beskid Wyspowy Subunit, while the southern part includes the northern slopes of the Gorce Mountains and the Gorce National Park, which lies almost entirely within the county. Low, single-family, detached (dispersed) housing dominates in this area, with the exception of the Limanowa and Mszana Dolna communes, where, in addition, there are compact urban and high multi-family buildings.
Waste management
Municipalities and waste disposal companies are the key elements of waste management efficiency in the Limanowa County in the collection and transport of municipal solid waste. In general, municipalities are responsible for the collection and transportation of waste to recovery and disposal sites. After the system changes in waste management in 2013, all inhabitants in Poland, including the county, were covered by waste collection. Effective waste recovery was required for recycling, and the choice of transport company was based on a tender procedure (Przydatek et al., 2017). Collections of both selectively and non-selectively collected waste from the communes within the county over the seven years took place once or twice a month. The increased frequency of collection was related to the spring and summer periods. The waste was collected in plastic bags as well as in containers (Table 1). The containers were used in larger population centres (cities) and in public places. Burnley (2007) suggested that even the type of container has an influence on municipal waste collection. De Oliveira Simonetto and Borenstein (2009) determined that waste collection strategies represent a major issue in an environmentally efficient system, since they can significantly affect recycling targets.
The waste collected both selectively and non-selectively in the Limanowa County was delivered to the waste disposal installation in the Małopolska Voivodeship area.
Inhabitants and migration
The number of inhabitants in the county in 2009-2015 showed a positive increase of 5,045 (i.e., 4%), with an average of 127,394. The lowest (124,278) and the highest (129,323) numbers of inhabitants occurred in the two extreme years (Fig. 2).
Internal migration was within the range of 935-958 persons, with an average of 968, and external migration varied between 903 and 992, with an average of 991. The lowest results for the internal and external migration of the county inhabitants occurred in 2009 and 2014, amounting to 935 and 903 respectively, and the highest results were 1,021 and 1,104. In 2014-2015, internal and external migration in the studied area decreased by almost 200 people. Internal migration increased by 23 people over the study period and external migration by 89 people. On average, external migration exceeded internal migration by 23 people (Fig. 3).
Registered unemployed
A significant increase in registered unemployment, by 1,679 people (i.e., 19%), occurred in the county in 2009-2013. In turn, between 2013 and 2015, a significant decrease of 3,020 people is noticeable. The highest number of registered unemployed, 10,569, was recorded in 2013, while the lowest, 7,549, occurred in 2015, with an average of 9,421.1. Over the 7 years, registered unemployment in this area fell by 1,341 (i.e., 15%) (Fig. 4).
Residential buildings covered by collection of municipal waste
The number of properties covered by the collection of municipal waste included both residential and uninhabited buildings, including public facilities. In 2009, the number of properties from which municipal waste was collected was the lowest, at 20,188; the highest number, 33,670, occurred in 2012, with an average of 25,701. In 2013-2015, the number of properties covered by waste collection decreased by 1,111. Despite this change, there was an overall increase of 10,634 in the number of properties from which municipal waste was regularly collected (Fig. 5).
Waste collected selectively and non-selectively
In the initial research period of 2009-2011, there was a decline in the amount of municipal waste collected both selectively and non-selectively; these declines amounted to 210 Mg and 3,431 Mg, respectively. Between the consecutive years 2012-2015, a dynamic increase of 2,141 Mg in the amount of selectively collected waste is noticeable, as well as a decrease of 2,185 Mg in the non-selectively collected waste. Within the seven years, the mass of collected waste increased by 3,231 Mg (i.e., 32%), including selectively collected waste in the amount of 2,185 Mg, whereas in the case of non-selectively collected waste there was a decrease of 503.9 Mg. These favourable results may reflect the changes in the municipal waste management system caused by the implementation of the provisions of Directive 2008/98/EC into Polish legislation, in force since 2013 (Przydatek et al., 2017). The lowest amounts of collected waste in the county were observed in the third analysed year, while the highest occurred in the fourth year for non-selectively collected waste (12,103.6 Mg) and for the total mass of waste (13,404.7 Mg). The largest amount of selectively collected waste (3,602 Mg) occurred in 2014, after the changes (Fig. 6). Figure 8 shows the shares of the six selectively collected waste types. The dominant share was glass waste (42%). The share of plastic waste was lower, by 9%. The shares of paper and cardboard as well as bulky waste remained at the same level of 9%. The lowest shares were for metal waste (6%) and WEEE (1%). According to Gidarakos et al. (2006), the composition of the generated waste depends on such factors as demography and geographical determinants. Figure 10 shows the mass rates of general municipal waste accumulation per day, which ranged from 0.22 to 0.28 kg·cap⁻¹·day⁻¹, with an average of 0.24 kg·cap⁻¹·day⁻¹. Similarly to the analysis of the waste accumulation rates by year, the lowest value of 0.14 kg and the highest of 0.29 kg per capita occurred in 2011 and 2012, respectively. In the lower value range, the rate of selective waste accumulation per capita per day oscillated between 0.04 and 0.01 kg·day⁻¹, with an average of 0.02 kg·day⁻¹, which confirms the decline. For this second rate, the lowest value of 0.01 kg was recorded in 2013-2015, while the highest, at 0.04 kg, occurred in 2009-2011. A difference of 0.22 kg between the averages of both rates is also noticeable. As previously, during the research period there was an increase in the first value, but only by 0.06 kg, while the value of the second rate increased by 0.03 kg per capita per day.
DISCUSSION
Important factors affecting the amount of waste generated include an increase in the number of inhabitants, as well as improvements in living conditions (Guerrero et al., 2013). The number of inhabitants in the county increased by over 5,000 during the study period, and at the same time the amount of collected waste increased by over 3,000 Mg. According to Hannan et al. (2015), an increase in the amount of MSW is associated with fast-paced urbanisation and population growth. In addition to demography, migration may have some impact on waste management. The observed predominance of external migration was below 70 inhabitants and therefore insignificant compared with the increase in the number of county inhabitants, which exceeded 5,000. In 2009-2011, there was a decrease of over 3,000 Mg in the mass of non-selectively collected municipal waste. It should be noted that the lowest amounts of both selectively and non-selectively collected waste were observed in the same year, 2011. However, the largest amount of non-selectively collected waste, 12,103.6 Mg, occurred in 2012, before the changes in the waste management system. In this year, the number of properties covered by the collection of municipal waste was also the largest.
In general, the increased frequency of waste collection in the communes of the county was related to the spring and summer periods. Similarly, Przydatek et al. (2018) reported an increase in the amount of selectively collected waste in spring. According to Mandl et al. (2008), waste management efficiency is the result of process optimization and thus affects the amount of waste collected. The amount of selectively collected waste underwent a nearly 4-fold increase in 2011-2014; an increase in selectively collected waste was also presented by Liikanen et al. (2016). Similarly, there was a significant increase in unemployment, by almost 20%, in the county in 2009-2013. Miliute-Plepiene and Plepys (2015) reported that unemployment did not reduce the amount of waste generated. The largest amount of selectively collected waste (3,602 Mg) occurred in 2014, which confirms the improvement in waste management associated with the system changes (Przydatek et al., 2017).
Despite this significant increase in the amount of selectively collected waste, its share in the total mass of waste was only 16%. Such a level of recovery was found among the EU-28 in 1995 (Pomberger et al., 2017). A significantly higher percentage of selectively collected waste in rural communes in the mountain areas of Italy was reported by Passarini et al. (2011). This confirms the possible impact of regional differences on the efficiency of municipal waste management (Hage and Söderholm, 2008). The highest masses of paper and cardboard, glass and metal waste suitable for recycling occurred in 2014. In the literature, such growth is regarded as pure profit in relation to the landfilling of waste (Nahman, 2010). Glass waste constituted the largest share, at 42%. A lower percentage, 8%, of this waste fraction in an urban commune was reported by Przydatek et al. (2018). The content of paper and cardboard waste was at the level of 9%; a content of paper and cardboard in municipal waste lower by 1.5% was demonstrated by Dangi et al. (2011).
Metal waste had a low share, at 6%. Plastic waste experienced the highest increase, of 1,307 Mg, as well as the highest value, of 1,407 Mg, over the analysed period, which occurred after the changes. The highest mass of bulky waste collected in the county, 397 Mg, occurred in the last examined year. In general, the lowest percentage, at 1%, was WEEE, which was collected over a shorter period, from 2012 to 2015. In these years, there was a slight increase of 3 Mg in the amount of WEEE collected; at the same time, a dynamic increase of 2,727.2 Mg in the amount of selectively collected waste was noticeable, as well as a favourable decrease of 503.9 Mg in non-selectively collected waste. Additionally, during this period, there was a drop of close to 3,000 in the number of properties subject to the regular collection of municipal waste. This may have been caused by the coverage of a lower number of homes by the waste collection services (Knussen et al., 2014).
The highest value of 104.69 kg for the municipal waste accumulation rate per capita per year occurred before the changes in waste management and was much higher than that reported by Dahlen et al. (2007) in Sweden. However, the average value of this rate was more than 4 times lower than the average value of 475 kg per capita for the European Union (Eurostat, 2016). The highest value of the daily waste accumulation rate per capita, 0.29 kg, occurred in 2012 and was significantly lower than the 2.2 kg and 0.3-1 kg per capita daily demonstrated by Kamaruddin et al. (2017) and Dangi et al. (2011), respectively. However, the highest value of the selective accumulation rate, at 0.04 kg, and the lowest, at 0.01 kg per day, covered the three years before the changes (2009-2011) and practically after the changes (2013-2015), respectively. The highest registered unemployment, 10,569, occurred in 2013. According to Abdoli et al. (2011), unemployment plays an important role in the production of municipal solid waste. The average values of the general municipal waste accumulation rates per capita were higher than the average values of the selectively collected waste accumulation rates per capita, by 84.43 kg per year and 0.22 kg per day, respectively. The latter difference was higher by 0.07 kg than the value of the rate reported in Algeria by Garfì et al. (2009). In this case, these rates differ significantly, which confirms the need for increased ecological awareness (Chan, 2008; Ekere et al., 2009). According to Matsakas et al. (2017), the accumulation of waste should be brought to public awareness due to the problems caused by the growing amount of waste in the environment. In general, a noticeable increase in the value of waste accumulation, by more than 10 kg per capita per year and by 0.03 kg per capita per day, occurred throughout the entire research cycle. Matsumoto (2011) and Manaf and Samah (2009) reported an increase in the value of the accumulation rate together with an increase in the number of inhabitants. Daskalopoulos et al. [40], based on the research conducted, showed that differences in the accumulation of waste may also be the result of various consumer behaviours.
CONCLUSIONS
On the basis of the analysis of the research material concerning the Limanowa County in the mountain region, the following conclusions can be drawn:
• The number of inhabitants in the county increased by 4%, with a simultaneous increase of 32% in the amount of municipal waste generated and a decrease of 15% in registered unemployment.
• The largest amount of non-selectively collected waste and, at the same time, the largest number of properties covered by the collection of municipal waste occurred before the changes in waste management.
• The highest mass of selectively collected waste occurred after the system changes in waste management.
• In the period 2012-2015, there was a dynamic increase of almost 3,000 Mg in the amount of selectively collected waste and a favourable fall of over 500 Mg in non-selectively collected waste, accompanied by a drop of approximately 3,000 in the number of properties covered by the organized collection of municipal waste.
• In the mass of collected municipal waste, a low percentage, not exceeding 20%, was attributable to selectively collected waste, with the highest share, of over 40%, belonging to glass waste.
• The average values of the municipal waste accumulation rates per capita were higher than the average values of the accumulation of selectively collected waste, by 84.43 kg per year and 0.22 kg per day, which indicates the need for an increase in ecological awareness.
• Generally, over the whole research cycle, an increase in the value of waste mass accumulation per capita is noticeable, by over 10 kg per year and by 0.03 kg per day.
• The results obtained confirmed the noticeable impact of the implementation of the provisions of Directive 1999/31/WE and Directive 2008/98/EC into Polish legislation through the increase in municipal waste recovery.
Figure 1. Location of Limanowa County in the Małopolska Voivodship (southern Poland).

Figure 2. Number of inhabitants in the county in different years.

Figure 7 shows the amount of waste collected selectively, divided into six fractions. The mass of paper and cardboard waste collected in 2014 increased by 216 Mg in comparison to 2009. A noticeable decrease occurred in 2011, by 58 Mg in relation to the reference year, and in 2015, by 108 Mg in relation to the previous year. The latter value was the same as the increase in the value of the paper and cardboard waste collected over the entire analysed period. The amount of glass waste increased significantly, by 844 Mg, within the 7 years. As in the case of the paper and cardboard waste, the lowest amount occurred in 2011, with this type of waste demonstrating a decrease of 89 Mg in relation to the reference year, and the highest occurred in 2014. Between 2009 and 2014, there was a significant increase of as much as 963 Mg in this waste fraction, although after 2014 there was a decrease of 119 Mg. The plastic waste was characterized by the highest increase, of 1,307 Mg. As before, the lowest result

Figure 9 contains the mass rates of general municipal waste and selectively collected municipal waste accumulation per capita per year in 2009-2015. The value of the municipal waste accumulation per capita per year ranged from 81.51 to 103.31 kg·cap⁻¹, with an average of 87.73 kg·cap⁻¹, and the accumulation of selectively collected waste ranged from 14.17 to 3.88 kg·year⁻¹, with an average of 9.06 kg·year⁻¹. A significant increase of 23.18 kg in the value of the first rate was noted in 2009-2012. In the following years, 2013-2015, the increase in the waste accumulation rate was lower and amounted to

Figure 7. Amount of waste selectively collected according to type.

Figure 9. Rate of waste mass accumulation per capita per year.

Figure 10. Rate of waste mass accumulation per capita per day.

Table 1. Solutions for selectively and non-selectively collected municipal solid waste (n-s: non-selectively, s: selectively, r: rural, t: town).
GnRH Analogues as a Co-Treatment to Therapy in Women of Reproductive Age with Cancer and Fertility Preservation
In this review, we analyzed the existing literature regarding the use of Gonadotropin-releasing Hormone (GnRH) analogues (agonists, antagonists) as a co-treatment to chemotherapy and radiotherapy. There is a growing interest in their application as a prophylaxis against the gonadotoxicity caused by chemotherapy and/or radiotherapy, due to their ovarian suppressive effects, making them a potential option to prevent the infertility caused by such treatments. They could be used in conjunction with other fertility preservation options to synergistically maximize their effects. GnRH analogues may be a valuable prophylactic agent against chemotherapeutic infertility by inhibiting rapid cellular turnover in growing follicles, which contain the types of cells unintentionally targeted during anti-cancer treatments. They could create a prepubertal-like effect in adult women, limiting gonadotoxicity to the lower levels seen in young girls. The use of GnRH agonists was found to be effective in hematological and breast cancer treatment, whereas for ovarian, endometrial and cervical cancers the evidence is still limited. Studies on GnRH antagonists, as well as on the combination of agonists and antagonists, were limited. GnRH antagonists have a protective effect similar to that of agonists, as they preserve or at least alleviate the follicle degradation occurring during chemo-radiation treatment. Their use may be preferred in cases where treatment is imminent (as their effects are almost immediate) and whenever the GnRH agonist-induced flare-up effect may be contra-indicated. The combination treatment of agonists and antagonists has primarily been studied in animal models so far, especially rats. Factors that may play a role in determining their efficacy as a chemoprotective agent that limits gonadal damage include the type and stage of cancer, the use of alkylating agents, the age of the patient, and the prior ovarian reserve. The data on the use of GnRH antagonists alone or in combination with GnRH agonists are still very limited. Moreover, studies evaluating the impact of this treatment on the ovarian reserve, as measured by Anti-Müllerian Hormone (AMH) levels, are still sparse. Further studies with strict criteria regarding ovarian reserve and fertility outcomes are needed to confirm or reject their role as a gonad-protecting agent during chemo-radiation treatments.
Introduction
The incidence of cancer in women of reproductive age remains high. In 2020, breast cancer became the leading type of cancer worldwide, with 2.3 million new cases and 685,000 deaths [1]. In Australia, the cancer incidence rate for women under 40 was 64.7 per 100,000 in 2017 [2]. In the UK, the mean cancer incidence for women under 40 was 56.9 per 100,000 [3]. The International Agency for Research on Cancer estimates that globally in 2020 there were about 1,380,000 new cancer cases in women under 45, with a cancer incidence rate of 52.2 per 100,000 [4].
Depending on age and treatment choices, 15-50% of pre-menopausal women may be expected to develop premature ovarian failure (POF) [5]. This is especially true when chemotherapy is administered for breast cancer and Hodgkin's Lymphoma (HL) [6,7], though it is also encountered in the treatment of other malignancies such as ovarian and endometrial cancer [8]. Presently, many women plan for fertility later in life, and a significant proportion of them have not completed their family at the time of diagnosis. Infertility is an important long-term effect of cancer treatment, especially given that surviving cancer does not seem to diminish the desire for childbearing and may increase the value placed on familial bonds, though anxieties about potential infertility remain [9,10].
Treatment protocols for cancer patients often include chemotherapy and radiation therapy, both of which are associated with gonadotoxicity, which may result in POF or infertility. POF is caused by apoptosis of primordial follicles and a subsequent loss of ovarian reserve [11]. Alkylating agents are the most toxic, though treatment duration and cumulative dose also plays an important role [7]. Radiotherapy, when targeted to the pelvis, abdomen, or head (by adversely affecting the hypothalamic-pituitary-adrenal axis [12]) can also be gonadotoxic [13]. Past studies showed that ovarian function was preserved in over 90% of long-term female survivors who were treated for lymphoma before puberty, but only in a minority of similarly treated adult patients [14]. The mechanisms behind the toxicity are multiple, such as direct ovarian toxicity through apoptosis of the oocytes, as well as oxidative stress and decreased ovarian blood flow [11].
Due to the treatment's gonadotoxicity, premenopausal patients are advised to seek fertility preservation, as is the official recommendation of all the major cancer societies, such as ASCO [15] and NCCN [16]. Patients have a range of choices when it comes to fertility options once cancer treatment is imminent. Depending on age, treatment choice, and type of cancer, the patient should be informed of their options by a fertility specialist. They may elect to cryopreserve oocytes or embryos, cryopreserve ovarian tissue itself, transpose the ovaries, or use GnRH analogues (agonists and antagonists) [17,18]. These treatments may be used in combination. This applies especially to the use of GnRH analogues, which may be used either as part of ovarian stimulation protocols or as a chemoprotective agent for the preservation of ovarian function. They could be used alongside other, non-pharmaceutical, fertility preservation procedures.
The primary issue with most fertility treatments is, however, that they require several days to be completed. Cryopreservation of oocytes can be used as a fertility preservation method for women after menarche without a partner [15], with embryo cryopreservation also being a choice for those with a partner, for those wishing to use a sperm bank, and where legally allowed. For oocyte collection, patients may seek in vitro maturation, an experimental procedure [19]. It allows for the immediate collection of immature oocytes, which is valuable to cancer patients who cannot undergo hormone treatment or delay chemotherapy.
Ovarian tissue cryopreservation after removal by laparoscopic surgery is the only option for young prepubertal females and patients who cannot undergo ovarian stimulation [20]. An experimental surgical method for fertility preservation is transposition of the ovaries outside the radiation field. According to reports ovarian function is preserved in 20% to 100% of patients [21], though we still do not have definitive clinical trials on the efficacy and safety of the procedure.
Presently, GnRH analogues, comprising agonists and antagonists, are used for fertility preservation. For ovarian stimulation in women seeking fertility preservation for medical reasons, ESHRE recommends the GnRH antagonist protocol, adding that there is moderate-quality evidence for the necessity of considering a specific GnRH analogue protocol. GnRH antagonist protocols are preferred since they shorten the duration of ovarian stimulation, offer the possibility of triggering final oocyte maturation with a GnRH agonist in the case of a high ovarian response, and reduce the risk of ovarian hyperstimulation syndrome. Data on live births are extremely scarce, in particular in cancer patients with vitrified oocytes [22]. ASRM recommends that GnRH analogues may be used 'off label' for fertility preservation [23]. It also states that GnRH agonists may be offered to breast cancer patients to reduce the risk of premature ovarian insufficiency [24] but should not be used in place of other fertility preservation alternatives [15], and that more studies are required to establish the efficacy of this treatment and to determine which patients are the best candidates for its use. According to the National Comprehensive Cancer Network (Guidelines Version 2.2022), GnRH agonists are not considered a form of fertility preservation [25] (Table 1).
Table 1. International guidelines on GnRH analogues.

ASCO [15], published 2018. Recommendation: "There is conflicting evidence to recommend gonadotropin-releasing hormone agonists (GnRHa) and other means of ovarian suppression for fertility preservation. The Panel recognizes that when proven fertility preservation methods such as oocyte, embryo, or ovarian tissue cryopreservation are not feasible, and in the setting of young women with breast cancer, GnRHa may be offered to patients in the hope of reducing the likelihood of chemotherapy-induced ovarian insufficiency. However, GnRHa should not be used in place of proven fertility preservation methods." Methodology: a systematic review of the literature published from January 2013 to March 2017, completed using PubMed and the Cochrane Library.

ASRM [23], published 2019. Recommendation: "GnRH agonists can be offered to women with breast cancer and potentially other cancers for the purpose of protection from ovarian insufficiency. However, GnRH analogues should not replace oocyte/embryo cryopreservation as the established modalities for fertility preservation." Methodology: systematic reviews, meta-analyses and RCTs between the years 2006-2018.

ESHRE [22], published 2019. Recommendation: "For ovarian stimulation in women seeking fertility preservation for medical reasons the GnRH antagonist protocol is probably recommended. There is moderate quality evidence of the necessity of considering a specific GnRH analogue protocol. GnRH antagonist protocols are preferred since they shorten the duration of ovarian stimulation, offer the possibility of triggering final oocyte maturation with GnRH agonist in case of high ovarian response, and reduce the risk of Ovarian Hyperstimulation Syndrome (OHSS). Moreover, especially in cancer patients, who are at higher risk of thrombosis due to their oncologic status, [antagonist protocols] seem to be preferred since they enable GnRH agonist trigger, therefore reducing the risk of OHSS." Methodology: the search was based on a final list of 18 key questions; key words were sorted by importance and used for searches in PubMed/MEDLINE and the Cochrane Library, performed up to 8 November 2018 as an iterative process in which systematic reviews and meta-analyses were collected first.
As chemotherapy mostly affects tissues with rapid cellular turnover, such as the growing follicles [26], it is hypothesized that gonadotoxicity is lower in prepubertal girls than in adult women [27]. Recent evidence shows that GnRH analogues, by inhibiting the stimulation of gonadotrophins and thus ovarian cellular turnover, could decrease the chance of cellular destruction during gonadotoxic cancer treatments [28], although other mechanisms are also at play. Indeed, GnRH analogues have been associated with a decreased incidence of POF compared with controls, yet despite growing interest in them, their long-term effects remain understudied [29]. The aim of this narrative review is to summarize and critically appraise the available data on the potential gonadotoxicity-reducing role of GnRH agonists and antagonists during chemo-radiation therapy in women of reproductive age.
Methodology
The literature search was performed using the Medline and Scopus databases. We searched for the phrase "fertility preservation" in combination (using AND as a conjunction) with: "woman reproductive age" (288 combined results), "Hodgkin's lymphoma" (52), "gynecological cancer" (31), "breast cancer" (593), "AMH" (150), "GnRH agonists" (53) and "GnRH antagonists" (5). We searched for animal and human studies published up to October 2021. From the numerous studies found, we kept the meta-analyses, RCTs, prospective studies, retrospective studies, and cohort studies, yielding an analysis of 37 articles.
Mechanisms of Action-Physiology
The two types of analogues act through different pathways to produce a similar decrease in gonadotrophin secretion. Agonists, such as Buserelin and Triptorelin [30], take advantage of the gonadotropin-releasing hormone receptor (GnRHR) down-regulation that occurs under chronic GnRH stimulation. They exert their effect by competitively binding to the GnRHR while having a higher affinity and lower enzymatic degradation than GnRH. The GnRHRs become desensitized to both the exogenous (analogue) and endogenous GnRH, as the receptor is internalized through receptor-mediated endocytosis [30]. This process is known as homologous desensitization, meaning the attenuation is caused by the agonists acting on their own target receptors. Initially, this creates a flare-up of gonadotrophin production until the receptors down-regulate, which in the long term inhibits gonadotrophin secretion. GnRH agonistic analogues have two distinct differences from GnRH. In the GnRH agonistic decapeptides, the glycine in position 6 is substituted with hydrophobic groups, as this is the primary site of degradation. Many of them also have a deletion of the glycine in position 10, with an ethyl-amide group substituting the C-terminal [30,31], making them nonapeptides. This increases their affinity for the GnRHR. The combined effects of a higher affinity and lower degradation make them about two hundred times more potent than endogenous GnRH [31]. They have several disadvantages: they produce a flare-up effect, are contraindicated in estrogen receptor-positive breast cancer, reduce bone mass in treatments longer than 6 months, and require administration a minimum of one week pre-chemotherapy [17]. GnRHas are administered every four weeks starting 1 to 2 weeks before the initial chemotherapy dose and are usually continued until the end of the chemotherapy regimen. Some protocols, in order to prevent the flare-up produced by the GnRHa, add a GnRH antagonist in the initial phase followed by agonist protocol treatment, especially if an early start of chemotherapy is needed [32].
Antagonists, such as Ganirelix and Cetrorelix [30], bind competitively to the GnRHR, preventing pituitary stimulation and the release of gonadotrophins [33]. GnRH antagonists have a higher number of substitutions than the two found in agonists; they exhibit substitutions in positions 1-3, 6, 8 and 10 [30], while remaining decapeptides. Their multiple substitutions increase their affinity and lower their degradation rate compared with endogenous GnRH, without activating the receptors. Their immediate action, while a benefit when time is limited, comes with the disadvantage of requiring a constant presence in the bloodstream, making long-acting preparations necessary. Another disadvantage is their generally poor solubility and the consequently high dosing concentrations [30].
For both of the above, results are still inconclusive as to the extent to which they may aid fertility when administered before or during chemotherapeutic and/or radiotherapeutic treatment [34]. The hormonal changes they create do not seem to have a direct protective effect on the ovaries [35]; rather, any protection appears to act primarily through the suppression of ovarian function.
Anti-Müllerian Hormone as an Estimator of Ovarian Reserve
AMH is produced by the primary, secondary, pre-antral and small antral follicles up to 8 mm in diameter; larger antral follicles (more than 8 mm in diameter) do not produce AMH [36]. Thus, it is produced by all pre-antral and early antral follicles, except for the primordial ones. As such, it is a marker of ovarian reserve and a predictor of the quantitative response to controlled ovarian stimulation.
There are good reasons for using AMH as an ovarian reserve marker: it is not menstrual-cycle dependent, with only small fluctuations occurring throughout the cycle [37]. However, it may be influenced by the use of oral contraceptives, which may lower AMH levels [38].
Limitations to AMH also exist. In the context of cancer, AMH has only recently begun to be studied, with most studies focusing on breast cancer, and studies that look specifically at GnRHa co-treatment and its effect on AMH levels remain limited [39][40][41][42][43]. Pre-treatment AMH, combined with age, the other fundamental predictor, is nonetheless an important marker to be evaluated during counselling. The efficacy of every fertility preservation method, including GnRHa, depends on the woman's age, ovarian reserve, and the type and cumulative dose of the gonadotoxic therapy. Post-treatment AMH has limited utility as a predictor of menstrual restoration/fertility and currently cannot serve as a predictor of time to menopause [44].
Rationale of Using GnRH Analogues in Fertility Preservation Post Cancer Treatment
The use of GnRH analogues to achieve a reduction in ovarian toxicity is based on the observation that chemotherapy mostly affects tissues with rapid cellular turnover, such as gonadal tissue [26]. It is also based on the fact that gonadotoxicity is lower in prepubertal girls than in adult women [14,27]. The latter could be because of their higher ovarian reserve, in addition to the hypogonadotropic prepubertal milieu, through a decrease in the proliferation rate of granulosa cells and a suppression of follicular recruitment; GnRHas seem to simulate this prepubertal hypogonadotropic milieu. Potential mechanisms for ovarian protection could be: (a) a reduction in ovarian blood flow via a direct effect on GnRH receptors that decreases the amount of chemotherapeutics reaching the ovary [45,46], (b) a direct effect on the ovaries, such as up-regulation of intra-ovarian anti-apoptotic molecules and protection of germ-line stem cells [28,47], and (c) an indirect anti-apoptotic effect on surrounding cumulus cells [48], as has recently been stipulated.
Based on their mode of action, there are two reasons that we believe GnRH analogues could be used for fertility preservation. First, because of their fast-acting effects, as established above. Secondly, because of their mechanism of action, as their suppressive ovarian effects may protect the oocytes from toxicity, making them beneficial in chemotherapeutic treatments such as alkylating agents and anthracyclines in adolescent girls and pre-menopausal women aged between 15 and 45 [11].
GnRH Agonists and Fertility Preservation after Cancer Treatment
To date, over 50 publications (14 RCTs, 25 non-RCTs, and 20 meta-analyses) have reported on over 3100 patients concurrently receiving GnRH agonists during chemotherapy for preservation of ovarian function via temporary ovarian suppression. These patients were treated for breast cancer, hematologic cancers, or autoimmune diseases. These studies reported that GnRHa-co-treated patients resumed regular menses and normal ovarian function in about 85% to 90% of cases, compared with 40% to 50% in the chemotherapy-only groups. Furthermore, natural pregnancy rates in survivors who were co-treated with GnRHa during gonadotoxic chemotherapy ranged from 23% to 88%, compared with 11% to 35% (p < 0.05) in control patients who were not co-treated [39,40,[49][50][51][52][53][54][55][56][57][58][59][60][61][62]. More specifically, a long-term follow-up analysis (up to 15 years) of adolescents and young adults with Hodgkin's lymphoma co-treated with triptorelin confirmed the gonadoprotective effect of GnRHa [63].
Indeed, 96.9% of patients in the GnRHa group resumed ovulation and regular menses throughout a median follow-up of 8 years (range 2-15), compared with 63% in the control group. Recently, a prospective non-randomized study in adolescent and young women treated for cancer compared the rate of POF after hematopoietic stem cell transplantation in those receiving GnRHa with gonadotoxic chemotherapy vs. chemotherapy alone [64]. The study found that GnRHa co-treatment significantly decreased the POF rate, from 82% to 33%. Moreover, a recent single-center retrospective study of postmenarchal adolescent patients (median age 14, range 11 to 18) treated for acute lymphoblastic leukemia, acute myeloid leukemia, Hodgkin's lymphoma, and other cancers showed that co-treatment with GnRH analogues preserved ovarian function and fertility in adolescents [65]. Other large retrospective and prospective studies, as well as case series, also showed a potential protective effect of GnRHa during chemotherapy in women with hematological malignancies [40,61,63,[65][66][67][68].
Thus, the German Hodgkin Study Group HD14 trial analysis of 263 patients revealed the prophylactic use of GnRH analogues to be a highly significant prognostic factor for preservation of fertility, favoring pregnancies [40], in early Hodgkin's Lymphoma patients after chemotherapy treatment. In addition, in another study in which fertility status was assessed among 108 females of reproductive age treated with chemotherapy for newly diagnosed Hodgkin's lymphoma between 2005 and 2010, the authors concluded that chemotherapy with GnRH analogues, used in more advanced Hodgkin's Lymphomas, retained ovarian function significantly better after two years [66].
On the contrary, some randomized trials performed in women with hematological malignancies showed no GnRH analogue-induced protective effect, or suggested at most a partial protective effect, with only a delay in the appearance of POF. All these studies had small sample sizes and were not powered to detect a possible advantage of GnRH analogues [41,42,[68][69][70].
Thus, one study investigated the impact of leuprolide on ovarian function (follicle-stimulating hormone (FSH) levels) after myeloablative conditioning in 17 women undergoing hematopoietic cell transplantation and concluded that leuprolide may protect ovarian function after myeloablative conditioning, as only 3 out of 7 evaluable leuprolide recipients had ovarian failure 703 days post-transplant [68].
A second study evaluated the best method to assess ovarian reserve by measuring FSH, luteinizing hormone (LH), inhibin B, and AMH levels, as well as the ultrasound antral follicle count, in 29 women with Hodgkin's disease treated with chemotherapy. The combination of ultrasound antral follicle count and AMH levels was the best predictor of ovarian reserve. The authors concluded that GnRH analogue treatment did not have any protective effect but could delay the development of ovarian failure [41].
Similarly, another study reported the 5-year follow-up results on ovarian reserve, measured with AMH or FSH levels, of 67 patients with lymphoma randomly assigned to receive either triptorelin plus norethisterone or norethisterone alone during chemotherapy. They reported that AMH and FSH levels were similar in both groups, while 53% and 43% achieved pregnancy in the GnRH analogue and control groups, respectively (p = 0.467) [70].
A clinical practice guideline by ASCO on ovarian suppression as adjuvant endocrine therapy for women with HR+ breast cancer [71] stated that the addition of ovarian suppression to standard adjuvant therapy with tamoxifen or with an aromatase inhibitor improved disease-free survival (DFS) and reduced disease and distant recurrence, compared with tamoxifen alone. The panel concluded that high-risk patients should receive co-treatment with GnRHa to achieve ovarian suppression, in addition to adjuvant endocrine therapy. Overall, the results of these publications imply that GnRHa might either improve or not affect the survival of patients receiving chemotherapy [28].
Regarding endometrial cancer, a recent small monocentric retrospective study examined patients with early-stage endometrial cancer treated with a combination of surgery and a GnRH agonist, with 3-month follow-up intervals including endometrial sampling by hysteroscopy. It concluded that GnRHas after surgery are an effective fertility-sparing strategy for women with grade 1 endometrial carcinoma and/or endometrial intra-epithelial neoplasia [72].
The only prospective phase III RCT including postmenarchal adolescent patients affected by ovarian malignancy demonstrated the gonadoprotective effect of GnRHa even in this younger population [73]. Six months after chemotherapy, all the patients in the GnRHa group had normal menstrual bleeding and normal FSH/LH titres, whereas 33% in the control group had amenorrhea and POF.
On the other hand, there are in vitro studies that do not support a beneficial effect of GnRH analogues in fertility preservation post-chemotherapy. An in vitro study using human granulosa cells and ovarian tissue fragments expressing GnRH receptors (n = 15, ages 14-37) found that GnRH agonists administered with chemotherapy (e.g., cyclophosphamide, paclitaxel, fluorouracil, or a TAC (docetaxel, doxorubicin, cyclophosphamide) regimen) for 24 h neither activated anti-apoptotic pathways nor prevented follicle loss or DNA damage caused by the chemotherapeutic agents [43]. In that study, however, the administration of the GnRH agonists occurred concomitantly with the initiation of chemotherapy rather than approximately one week earlier (the minimal time required for ovarian suppression following the flare-up effect). Therefore, there is a chance that the initiation of chemotherapy coincided with the flare-up period of the GnRH agonist, potentially neutralizing the protective effect. The authors concluded that GnRH agonist treatment with chemotherapy does not prevent or ameliorate ovarian damage and follicle loss in vitro.
As also noted above, there are studies reporting on the effects of GnRH analogues on AMH levels. One study included 263 women with early-stage HL, all of whom received GnRH analogues and were treated either with less gonadotoxic chemotherapeutic agents (Adriamycin, Bleomycin, Vinblastine, Dacarbazine; the ABVD regimen) or with more aggressive alkylating agents, such as the BEACOPP regimen (bleomycin, etoposide, adriamycin, cyclophosphamide, vincristine, procarbazine, prednisone); it found that FSH and AMH hormonal levels were significantly better in the ABVD plus GnRH analogues arm one year post-treatment [40]. Another human study, which followed 84 patients diagnosed with Hodgkin's or non-Hodgkin's lymphoma who completed one year of follow-up after being treated with chemotherapy and GnRH analogues, reported that the group receiving GnRHa co-treatment had a significantly higher proportion of AMH values >1 ng/mL compared with the control group (8/16 vs. 2/15; p = 0.023), as well as significantly higher mean AMH values (1.40 ± 0.35 vs. 0.56 ± 0.15 ng/mL; p = 0.040) [39]. However, the small sample sizes of 16 and 15 patients in the GnRHa and control groups, respectively, limit the significance of this positive result. Another study, evaluating patients treated for Hodgkin's disease, found no discernible difference between the AMH levels of the GnRH co-treated group and the control group [41].
Its findings agree with a study that investigated the use of oral contraceptives and GnRH agonists as co-treatment during chemotherapy for advanced HL, where AMH levels remained practically below detection levels for all patients [42]. An in vitro study found that, in the control group without chemotherapy or GnRH analogue, AMH was indeed correlated with the number of growing follicles; as soon as chemotherapy was introduced, however, any correlation disappeared [43]. It should be noted that, due to the general toxicity to any growing follicles during the early stages after chemotherapy, we would not expect to see noticeable AMH levels for at least a few months post-treatment (Table 2).

Table 2. GnRH agonists and fertility preservation during cancer treatment.

Among the pooled findings: ovarian suppression was achieved for the majority (70%) of a goserelin study group; GnRH agonists slightly decreased the chances of pre-menopausal women developing permanent amenorrhea; a significant reduction in POF cases was observed for patients using GnRHas during chemotherapy (p < 0.001), with a significantly (p = 0.041) higher percentage of GnRHa-treated patients becoming pregnant post-treatment compared with controls (9.2% vs. 5.5%); overall, GnRHa treatment reduces the risk of chemotherapy-induced POF in young women.

Blumenfeld et al. [61]. Study design: follow-up of a woman who delivered two neonates years after stem cell transplantation (SCT) therapy, which on its own almost inevitably leads to POF; the patient had co-treatment with GnRHa. Data collected up to 2008; published in 2010. Fertility preservation: the patient spontaneously delivered 11 and 12 years post-SCT after chemotherapy with GnRHa co-treatment.

Phelan et al. [68]. Study design: 19 women observed, 9 of whom underwent hematopoietic cell transplantation (HCT) co-treated with GnRHa, the others without. Data collected up to 2014; published in 2016. Fertility preservation: 57% of the co-treated group experienced POF, a much lower rate than the historic average of 90%. Discussion: the GnRHa leuprolide appears to preserve ovarian function in HCT patients.

Waxman et al. [69]. Study design: 17 women split into a control group and a study group given GnRHa prior to and during chemotherapy. Data collected up to 1987; published in 1987. Discussion: the GnRHa buserelin was not significantly effective at preserving fertility.

Demeestere et al. [70]. Study design: 129 lymphoma patients randomly assigned to receive GnRHa co-treatment or not. Data collected up to 2010; published in 2016. Fertility preservation: in a five-year follow-up, co-administration of GnRHa did not appear correlated with reduced POF risk, and pregnancy rates were similar in the two groups (53% with GnRHa vs. 43% in controls; p = 0.467). Discussion: GnRHa co-treatment was not found to be an effective fertility preservation tool in young patients with lymphoma.

Tock et al. [72]. Study design: retrospective review of 18 pre-menopausal women with grade 1 endometrial carcinoma (G1EC) and/or endometrial intraepithelial neoplasia (EIN), all of whom received GnRHa combined with endometrial resection and laparoscopy. Data collected up to 2016; published in 2018. Fertility preservation: 12 patients conserved their uterus; eight patients became pregnant, with 14 pregnancies among those who tried to conceive. Discussion: GnRHa is an effective fertility-preserving option compared with other treatments for G1EC and EIN.

Bildik et al. [43]. Study design: 15 ovarian cortical pieces, together with mitotic non-luteinized and non-mitotic luteinized granulosa cells expressing the GnRH receptor, were treated with chemotherapeutic agents with or without GnRHa. Data collected up to 2015; published in 2015. Fertility preservation: GnRHa-treated samples raised intracellular cAMP levels compared with controls but neither activated anti-apoptotic pathways nor prevented follicle loss. Discussion: GnRHa co-treatment does not prevent or alleviate ovarian damage and follicle loss in vitro.
GnRH Antagonists and Fertility Preservation Post Cancer Treatment
There are limited data regarding the effectiveness of GnRH antagonists for fertility preservation in gynecological cancer. Most studies are small animal studies, and there is a general lack of human data.
An animal study assessed whether a GnRH antagonist (GnRHant; in this study, cetrorelix) was able to protect ovaries from chemotherapy damage in 42 female Wistar rats. The rats were divided into four groups: group I (n = 9) received placebo; group II (n = 12) received placebo + cyclophosphamide (CPA); group III (n = 12) received GnRHant + CPA; and group IV (n = 9) received GnRHant + placebo. The estrous cycle was studied using smears, pregnancies were documented, the number of live pups was measured, and the ovarian cross-sectional area was measured together with follicle counts. The ovarian cross-sectional area did not differ between groups, nor did the number of individual follicle types. However, rats on GnRH antagonist and placebo (group IV) had a higher total number of ovarian follicles than those in the control group. The researchers concluded that the use of a GnRH antagonist before CPA chemotherapy provided fertility protection [75] (Table 3).
Table 3. GnRH antagonists only and fertility preservation during cancer treatment.

Lemos et al. [75]. Study design: 42 female Wistar rats treated in four groups receiving placebo or cyclophosphamide, with or without a GnRH antagonist. Data collected up to 2010; published in 2010. Results: rats in the group that received GnRHant treatment had a higher total number of follicles than the control group (p < 0.05). Discussion: GnRHant treatment before chemotherapy resulted in some fertility protection in rats.
Combination of GnRH Agonists and Antagonists and Fertility Preservation Post Cancer Treatment
To date, it is known that both GnRH agonists and antagonists have disadvantages that limit their use: GnRHas cause a flare-up effect during the first week after administration, and no long-acting GnRHant agent is available. GnRHas combined with GnRHants may prevent the flare-up effect of the GnRHa and rapidly inhibit the female gonadal axis. A small number of experimental animal studies with small sample sizes have reported controversial conclusions.
In a study involving 30 female Sprague Dawley rats of adolescent age, the rats were randomized into five treatment groups (n = 6/group): (1) placebo, (2) cyclophosphamide (CPA) alone, (3) GnRH antagonist followed by GnRH agonist with placebo, (4) GnRH antagonist followed by GnRH agonist with CPA, and (5) GnRH agonist with CPA. The main outcome measure was live birth rate (LBR), and secondary measures included rat weight, ovarian volume, and follicle counts. Group 2 had a decreased LBR, while groups 4 and 5 had LBRs similar to placebo. Ovarian volume did not vary between the groups, and the CPA-alone group had fewer antral follicles than the control. The study demonstrated that both the combination of GnRH antagonist and GnRH agonist, and the GnRH agonist alone, preserved fertility in female adolescent rats following gonadotoxic chemotherapy [76].
In another controlled animal study, researchers investigated the advantages of combination treatment with GnRHas and GnRHants in rats aged 12 weeks. The combination of a GnRH agonist with an antagonist completely prevented the flare-up effect and protected the primordial ovarian follicles in the rats' ovaries from cisplatin-induced gonadotoxicity [77].
Furthermore, a controlled experimental animal study aimed to assess ovarian reserve with AMH and histological analysis after exposure to cisplatin with a GnRHa or GnRHant. Twenty-four Wistar albino rats were randomly divided into three groups: in group 1, rats received a single dose of 50 mg/m² cisplatin with 1 mg/kg triptorelin; in group 2, rats received a single dose of 50 mg/m² cisplatin with 1 mg/kg cetrorelix; and in the control group (group 3), rats received 50 mg/m² cisplatin alone. AMH levels and histology were used to assess ovarian reserve. Primary follicle counts were higher in group 2, whereas secondary follicle counts were higher in group 1. Both groups 1 and 2 had higher numbers of tertiary follicles and higher AMH levels than the control group [78] (Table 4). Overall, GnRHa and GnRHant displayed protective effects against cisplatin-induced gonadotoxicity in rats.
Discussion
In this review, we explored the available data on the use of GnRH analogues as a co-treatment with chemotherapy in order to reduce gonadotoxicity in premenopausal patients with cancer. It has been hypothesized that ovarian suppression may have some gonadoprotective effects during gonadotoxic therapy. A potential mechanism for ovarian protection could be a reduction in ovarian blood flow that decreases the amount of chemotherapeutics reaching the ovary. Indeed, uterine blood flow has been shown to be reduced after administration of GnRH analogues, although other studies did not detect a difference [79]. Two further potential mechanisms are a decreased rate of granulosa cell proliferation and a suppression of follicular recruitment. These last two are based on the observation that chemotherapy mostly affects tissues with rapid cellular turnover, such as the gonads [26], and thus gonadotoxicity is lower in prepubertal girls than in adult women [27]. An alternative explanation is their higher ovarian reserve, in addition to the hypogonadotropic prepubertal milieu. Thus, GnRH agonists seem to simulate the prepubertal hypogonadotropic milieu, to act directly on GnRH receptors, to decrease ovarian perfusion [47], and to act directly on the ovaries through up-regulation of intra-ovarian anti-apoptotic molecules and protection of germ-line stem cells [28,47].
Most studies (mentioned in Sections 7-9 and Tables 2-4) support the finding that GnRH agonist co-treatment protects from gonadotoxicity and preserves fertility in chemotherapy-treated pre-menopausal women with breast cancer and hematological malignancy; indeed, most studies supporting GnRH agonists as a co-treatment for fertility preservation during premenopausal cancer chemotherapy refer to breast cancer or hematological malignancy. There are no large studies available regarding a possible fertility-preserving effect of GnRH agonist co-treatment in premenopausal women treated with chemotherapy for ovarian, endometrial, or cervical cancer, and thus the data are inconclusive. An explanation could be that most of these patients present at a later age and have completed their families. In addition, for young women with cervical cancer, the most accepted method for fertility preservation is fertility-preserving surgery (i.e., radical trachelectomy), in highly selected cases with transposition of the ovary. Gonadotoxic chemotherapy is rarely used for endometrial cancer, and for ovarian cancer, fertility-sparing surgery has been applied in a very select group of patients with Stage IA grade 1 disease that did not require chemotherapy. Existing guidelines (Table 1) state that GnRH agonists can be offered to women with breast cancer and potentially other cancers for the purpose of protection from ovarian insufficiency. They do not refer to the use of GnRH analogues for fertility preservation in women with hematological malignancies post-chemotherapy. Furthermore, they state that GnRH analogues should not replace oocyte/embryo cryopreservation as the established modalities for fertility preservation.
Regarding the use of GnRH antagonists as a co-treatment with chemotherapy in gynecological cancer and hematological malignancies, the data are not conclusive, as there are only a few, limited animal studies. Our perspective is that, although there are plausible mechanisms explaining the potential effects, several points need to be considered when examining possible benefits. Any potential beneficial effect of GnRH analogues as a co-treatment for fertility preservation could depend on the type, and perhaps the stage, of the cancer treated, and possibly on the type of alkylating agents used. The latter is based on the significant differences seen in fertility preservation between breast and hematological cancers compared with other gynecological cancers. In addition, age and/or ovarian reserve could be important factors, as females at the pre-pubertal stage seem to be more protected. Furthermore, since the statistical power needed to detect differences between study results requires hundreds of patients, data from several large human studies are needed to reach safe conclusions. Lastly, studies need to be homogeneous regarding the fertility preservation criteria they use as outcomes. Researchers might need to clarify the criteria they use to study the effectiveness of GnRH analogue co-treatment in fertility preservation for premenopausal patients with gynecological cancers; for example, not all studies consider ovarian reserve as a criterion of fertility preservation assessment, using instead pregnancy rates and live birth rates, or looking at long-term fertility.
Basic future research could focus on investigating the differential effects of GnRH analogue co-treatment on the physiology of different ovarian cell populations. In particular, the potential anti-apoptotic effect of GnRHas on the several types of follicular cells, as well as on the mesenchymal stromal cells, should be further investigated. Whereas GnRHRs have been identified in several cell lines in the ovary [80], their absence from pre-antral follicles per se [48] raises several questions as to the protective effect of GnRH analogues. Furthermore, the impact of decreased ovarian perfusion, and thus decreased delivery of cytotoxic agents to the ovary, as a protective mechanism should also be evaluated. Clinical research could focus on the effects of GnRHa co-treatment with chemotherapy: first, by evaluating surrogate markers of ovarian reserve such as AMH before, during, and after gonadotoxic therapy; secondly, by evaluating other markers of ovarian reserve that could be more accurate; and thirdly, by assessing the actual impact of their use in women who attempt pregnancy after treatment.
Limited and conflicting results were found for AMH levels as a fertility preservation indicator after treatment with GnRH analogues. Apart from one study [39], others did not discern any impact of the use of GnRH analogues on AMH levels [40][41][42][43]. Invariably, AMH levels seem to fall to almost zero during chemotherapy regardless of treatment, and the post-chemotherapy levels in the ASTRRA trial (82 participants) seem to be an accurate predictor (86.7%) of the recovery of ovarian function upon resumption of menstruation in breast cancer patients [79]. Nonetheless, for post-chemotherapy recovery of AMH levels, the available data are inconclusive. A recent small study (50 patients) in premenopausal patients (<40 years old) with early breast cancer who received chemotherapy and co-treatment with the GnRHa triptorelin reported that AMH decreased to nearly undetectable levels after chemotherapy and recovered after 12 months. It did not, however, exceed one tenth of the pre-treatment levels, although 48% of co-treated patients recovered above a threshold of 0.2 ng/mL, compared with those who did not have co-treatment [81].
In conclusion, studies so far support the use of GnRH agonists as a co-treatment to provide gonadal protection, and subsequently fertility preservation, in women with breast cancer and hematological malignancy in general. There is a paucity of data regarding other types of gynecological cancer. Nevertheless, data extrapolated from studies involving young patients with breast cancer support a potential beneficial effect of the use of GnRH analogues during chemotherapy, with no adverse oncological impact [40,82,83]. In fact, some studies support a small beneficial effect on survival and disease-free interval with the co-administration of GnRH analogues during gonadotoxic therapy. Large human studies need to take into consideration the age, stage, and type of cancer and the treatment used, as well as the fertility preservation assessment criteria. It seems that GnRH co-treatments may have to be individualized in patients treated for gynecological cancer. Indeed, as mentioned in the previous sections, numerous randomized trials, systematic reviews, and meta-analyses have shown a correlation of GnRH analogue use before and during chemotherapy with lower rates of premature ovarian insufficiency [74,83]. According to clinical practice guidelines, in most cases GnRH analogues do not protect the ovaries from radiotherapy-induced gonadotoxicity, and so they are not suggested for female patients scheduled to receive pelvic, abdominal, or total body irradiation [15,74,84].
Comparison of Pedestrian Prediction Models from Trajectory and Appearance Data for Autonomous Driving
The ability to anticipate pedestrian motion changes is a critical capability for autonomous vehicles. In urban environments, pedestrians may enter the road area and create a high risk for driving, and it is important to identify these cases. Typical predictors use the trajectory history to predict future motion; however, in cases of motion initiation, motion in the trajectory may only become clearly visible after a delay, which can mean the pedestrian has already entered the road area before an accurate prediction can be made. Appearance data include useful information such as changes of gait, which are early indicators of motion changes, and can inform trajectory prediction. This work presents a comparative evaluation of trajectory-only and appearance-based methods for pedestrian prediction, and introduces a new dataset experiment for prediction using appearance. We create two trajectory-and-image datasets based on the combination of image and trajectory sequences from the popular NuScenes dataset, and examine prediction of trajectories using observed appearance to influence predicted futures. This shows some advantages over trajectory prediction alone, although problems with the dataset prevent the advantages of appearance-based models from being fully shown. We describe methods for improving the dataset and experiment to allow the benefits of appearance-based models to be captured.
I. INTRODUCTION
Autonomous Vehicles (AVs) need to operate in areas where pedestrians are present. Prediction of future behaviour is important for avoiding conflict, especially when vulnerable road users such as pedestrians are present. Pedestrian prediction is hard, since pedestrians can change direction and start or stop moving, and it is high risk, for example if they enter the road area. Conservative estimates of pedestrian motion can allow potential actions to be captured and avoided, but can lead to very conservative driving of an AV and prevent progress. A better approach is to accurately identify when changes of motion occur, and to use accurate predictions to avoid conflict situations.
Existing methods predict future motion based on an observed history of positions. A significant limitation of these approaches is that when changes of motion take place, such as initiation of motion to enter a road area from a stationary position, there is a delay before the motion can be accurately observed in the trajectory and used to make an accurate prediction. Noise is present in the estimated position, and the greater the noise, the later that motion initiation can be reliably observed. Appearance cues such as changes of body pose provide additional information about pedestrian actions, such as gait changes when pedestrians begin or stop moving. These appearance cues can reliably indicate when motion changes are taking place, and provide an early and accurate signal of motion. Figure 1 illustrates an example.
Pedestrian appearance has been used previously to estimate whether a pedestrian intends to cross the road, and to inform prediction of the future position of the pedestrian in the camera view. Common datasets are PIE [1] and JAAD [2], [3]. These methods demonstrate classification of pedestrian crossing intent, and prediction of future positions within the camera view. In order to use these approaches to support an AV, further steps are needed to infer behaviour in the world space in Cartesian coordinates, and it is unclear how well camera-based prediction can inform the future world position of pedestrians.
Prediction of pedestrian motion is inherently a multimodal task: if a pedestrian is standing beside the road area, there are at least two significant possibilities to consider, namely whether they remain stationary or begin moving into the road. A multimodal predictor can create a predicted trajectory for each mode and assign a probability estimate to each event. Previous work [4] has described effective methods for evaluating multimodal predictions of road users. These evaluations test whether distinct modes of behaviour are captured, as well as the probability distribution, which is useful for evaluating multimodal predictions of pedestrian motion. We present an experimental task for pedestrian prediction that includes a dataset of cropped images of pedestrians, along with their associated trajectories in world coordinates. This dataset is constructed using data from the NuScenes dataset [5], which combines camera information with pedestrian trajectories, to produce an experimental task for pedestrian prediction including the use of observed appearance. The experiment involves using a history of images and trajectory positions and predicting future positions, evaluated using the multimodal prediction measures from [4]. This experimental task allows a model to predict behaviour modes, such as motion initiation and standing still, using the appearance of pedestrians to provide cues when changes of motion take place.
To solve this task we compare physics-based models, trajectory-only prediction, and two network architectures using a Convolutional Neural Network (CNN) model and pre-calculated pose features for interpreting pedestrian appearance. This examines how pedestrian appearance such as changes of gait, can be used to estimate the future trajectory modes of pedestrians, using a prediction representation that can be used by an AV planner to control the vehicle while avoiding potential conflicts with pedestrians.
II. EXISTING METHODS
A number of approaches have been proposed for prediction of agents in road areas, including physics, goal, and regression methods, using trajectory and appearance data.
Kinematic models, e.g. constant velocity (CV) or acceleration [6], efficiently capture simple motions and can be a reasonable estimate when an agent is moving consistently. A study [7] has suggested that CV models perform as well as data-driven methods for pedestrian trajectories. Goal-based methods [8], [9], [10], [11], [12] estimate a belief that each goal is being pursued by the agent, for example using scene information. There can be a large number of possible goals a pedestrian may follow, and it may not be possible to reliably identify goal-directed behaviour.
Regression-based methods directly map observations to predicted outputs. These representations can include interactions between multiple agents of varying classes, and map elements. Recent architectures [10], [13], [4] are based on Graph Neural Networks (GNNs), which can capture complex representations and interactions. Further models have examined estimation of error covariances [14], and multi-modal predictions using Gaussian Mixture Models [15].
Appearance models use images as input, in order to infer the current or future motion of an agent since it allows the pose of a pedestrian to be observed, which provides important cues. Some models perform prediction based on a fixed elevated camera [16], [17], although these have limitations for use with AVs which use moving cameras. Further models [3], [18], [19], focus on intent prediction, e.g. crossing vs not crossing. A disadvantage of these methods is that they require manual intent annotation, which can be hard to define and identify. In contrast, trajectory prediction can be based on observations from sensors and the perception system without requiring additional labeling. Others, e.g. [1], [20], [21], [22], tackle trajectory prediction in the image space rather than world space. Each of these methods require further processing stages to be able to infer predicted pedestrian motion in the world space.
A. Dataset
To address the task of predicting pedestrian motion from appearance, we construct the NuScenes-Appearance dataset using the camera and trajectory information present in the NuScenes dataset [5]. NuScenes contains sensor data collected from a fleet of autonomous vehicles operating in urban environments. It includes 3D trajectory annotations at 2 Hz and camera images at a variable rate (10 or 20 Hz). We generate the NuScenes-Appearance dataset by interpolating the 3D trajectory annotations to 10 Hz, finding the closest camera frame for each interpolated timestamp, and projecting the 3D box to form a 2D box in each camera frame. This box is expanded to a square of twice the largest dimension and recorded in an image database, where each cropped image is associated with a recorded trajectory position.
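To make the geometric steps concrete, the following is a minimal sketch of the interpolation, projection, and square-expansion operations, assuming box corners are already expressed in the camera frame; the helper names are illustrative choices of ours, and the actual pipeline is built on the NuScenes devkit rather than these functions.

```python
import numpy as np

def interpolate_track(times_2hz, positions_2hz, times_10hz):
    """Linearly interpolate 2 Hz 3D annotations onto the 10 Hz camera timeline."""
    return np.stack([
        np.interp(times_10hz, times_2hz, positions_2hz[:, d]) for d in range(3)
    ], axis=1)

def project_box(corners_3d, K):
    """Project 3D box corners (8x3, camera frame) with intrinsics K; return 2D bounds."""
    uv = (K @ corners_3d.T).T          # (8, 3) homogeneous image points
    uv = uv[:, :2] / uv[:, 2:3]        # perspective divide
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return x0, y0, x1, y1

def square_crop(x0, y0, x1, y1, scale=2.0):
    """Expand a 2D box to a square of `scale` times its largest dimension."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half = scale * max(x1 - x0, y1 - y0) / 2
    return cx - half, cy - half, cx + half, cy + half
```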
This dataset includes camera images from different views, e.g. the front-left and front-right cameras. Since each pedestrian can be visible from multiple views, each view is considered a separate trajectory, while individual agents (pedestrians) are kept in the same dataset split.
We select pedestrian instances from NuScenes and maintain the original data splits. As annotations are not provided in the original test set, we use the NuScenes validation set as test for the NuScenes-Appearance dataset, and define train and validation sets randomly from the NuScenes train set with a 7:1 ratio.
B. Methods
We compare the different prediction models on a task with observation histories of 1 s and a prediction horizon of 3 s. We predict multi-modal trajectories with spatial distributions, and evaluate with standard trajectory error measures: minADE/FDE, predRMS (most probable mode), expRMS (expected RMS) and NLL. These measures evaluate closest-mode prediction as well as probabilistic estimates, which provide complementary evaluations of prediction accuracy [4], [23]. An effective predictor needs to perform well on each measure, indicating the ability to capture distinct modes of behaviour as well as accurate estimates of the probability that each will occur.
Fig. 3: Overview of the appearance-based model. Pedestrian appearance is encoded per frame with a CNN and interpreted over time using temporal convolutions. Image and trajectory encodings are combined and decoded to produce predictions of multimodal trajectories, covariances, and mode probabilities to estimate future motion states of pedestrians.

Appearance-based prediction can assist with identifying changes of motion, and to focus on this task we create a dataset selection that emphasises changes of motion, in addition to the full dataset. Instances with high motion change are defined based on an average displacement error of >= 0.5 m under a constant-velocity model. The motion-changes dataset is constructed using the instances with high motion change, together with an equal number of random selections from the remaining instances. Predictions are produced with 5 modes, which are encoded using a predicted trajectory position for each timestep, a 2x2 covariance matrix representing the spatial error distribution, and a probability weight for each predicted mode. Calculation of the evaluation measures minADE/FDE, predRMS and NLL is described in [4], and expRMS in [23] (we calculate distances based on trajectory positions rather than grid cells as used in [23]).
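For illustration, the following sketch computes minADE/minFDE over a set of predicted modes and applies the constant-velocity criterion used to flag motion-change instances; the array shapes and function names are our own assumptions, and the full metric definitions (including predRMS, expRMS and NLL) are given in [4], [23].

```python
import numpy as np

def min_ade_fde(pred_modes, gt):
    """pred_modes: (K, T, 2) predicted modes; gt: (T, 2) ground truth.
    Returns (minADE, minFDE) over the K modes."""
    err = np.linalg.norm(pred_modes - gt[None], axis=-1)  # (K, T) per-step errors
    return err.mean(axis=1).min(), err[:, -1].min()

def cv_prediction(history, horizon):
    """Constant-velocity rollout from the last two observed positions (T_h, 2)."""
    v = history[-1] - history[-2]
    steps = np.arange(1, horizon + 1)[:, None]
    return history[-1][None] + steps * v[None]

def has_motion_change(history, future, threshold=0.5):
    """Flag instances where the CV model incurs ADE >= threshold (metres)."""
    pred = cv_prediction(history, len(future))
    return np.linalg.norm(pred - future, axis=-1).mean() >= threshold
```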
Experiments are conducted using two appearance-based predictors as described below, and a number of trajectory-only predictors, including kinematic prediction (which predicts a single mode) and a neural-network trajectory predictor (DiPA [4]) that has been demonstrated to be effective for prediction of road users including pedestrians.
IV. PROPOSED METHODS
We describe two appearance-based predictors that utilise an observed sequence of pedestrian images to influence trajectory predictions. The processed images are combined with the DiPA [4] trajectory predictor backbone to predict multimodal future trajectories. An overview of the model is shown in Figure 3.
One consideration when observing object appearance from the point of view of an autonomous vehicle is that the camera moves with the vehicle, and the detected region of each identified pedestrian will contain errors, resulting in visual effects such as background motion and misalignment between sequential frames, which can interfere with the processing of visual features. To compensate for these effects, the appearance-based model processes a sequence of independent image frames using image features (two-dimensional), without the use of temporal video features (three-dimensional, including time). This is followed by temporal convolutions to provide inference between frames over time. The encoded features representing appearance are concatenated with the trajectory encoding features and fed into the trajectory decoder of the DiPA model [4]. We test two implementations: one (App-net) uses a CNN (MobileNetV3Small [24]) which is trained against the mode prediction loss, and a second (App-pose) uses pre-calculated pose features [25], which are passed to the temporal convolution layer as a vector of 17 × 2 features of pose positions in the image. These appearance-based predictors allow visual cues to influence the predicted trajectories and the estimated probabilities of each trajectory mode. The DiPA model used for the trajectory prediction experiments uses the same network backbone, without the feed from the appearance model. The DiPA predictor uses stages of temporal convolution, and MLP layers for processing the encoding and decoding outputs of mode probabilities, covariances and trajectory modes. The original model processes interactions between neighbouring agents; however, as this experiment performs single-agent prediction, agent-agent interactions are not used. Training is performed using the losses described in [4], which balance capturing distinct behaviour modes with estimating the probability distribution accurately.
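A minimal structural sketch of this data flow is shown below, assuming PyTorch with the torchvision MobileNetV3-Small backbone. The layer sizes, the GRU used here as a stand-in for the DiPA trajectory encoder, and the 5-parameter per-step output layout (x, y plus three covariance parameters) are illustrative placeholders of ours, not the exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class AppearanceTrajectoryPredictor(nn.Module):
    """Per-frame CNN encoding -> temporal 1D convolutions -> fusion with a
    trajectory encoding -> decoding of K trajectory modes with covariances
    and mode probabilities. A structural sketch, not the authors' exact model."""
    def __init__(self, horizon=30, k_modes=5, feat=128):
        super().__init__()
        self.cnn = mobilenet_v3_small(weights=None)
        self.cnn.classifier = nn.Linear(576, feat)  # replace the classification head
        self.temporal = nn.Sequential(
            nn.Conv1d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.traj_enc = nn.GRU(2, feat, batch_first=True)
        self.k, self.T = k_modes, horizon
        out_dim = k_modes * horizon * 5 + k_modes   # per-step (x, y, 3 cov) + mode logits
        self.decoder = nn.Sequential(nn.Linear(2 * feat, 256), nn.ReLU(),
                                     nn.Linear(256, out_dim))

    def forward(self, images, history):
        # images: (B, T_h, 3, H, W); history: (B, T_h, 2) past world positions
        B, Th = images.shape[:2]
        f = self.cnn(images.flatten(0, 1)).view(B, Th, -1)   # per-frame image features
        f = self.temporal(f.transpose(1, 2)).mean(dim=2)     # temporal conv + pooling
        _, h = self.traj_enc(history)                        # trajectory encoding
        out = self.decoder(torch.cat([f, h[-1]], dim=-1))
        logits = out[:, : self.k]
        rest = out[:, self.k:].view(B, self.k, self.T, 5)
        return rest[..., :2], rest[..., 2:], logits.softmax(-1)
```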
V. RESULTS
We compare methods on the two presented datasets using standard trajectory error metrics. Baselines include a Constant Velocity (CV) and a Decaying Acceleration (DA) model. DA relies on constant acceleration in the short term and constant velocity in the long term, using an exponential decay function a0·exp(−λt), where a0 is the initial observed acceleration and the decay rate λ equals 5.5 s⁻¹. Results are reported in Table I. Among the physics-based models, CV is best. Accelerations can capture motion initiations, but higher-order derivatives are more difficult to estimate, and noisy values can be detrimental. Since unimodal and multimodal prediction are distinct tasks, unimodal predictors are evaluated with RMS only, which is comparable to predRMS. We do not evaluate unimodal physics-based models with the other metrics, which account for multi-modality and uncertainty.
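A sketch of the DA baseline as described, assuming 10 Hz sampling (dt = 0.1 s) and finite-difference estimates of the initial velocity and acceleration (the estimation method is our assumption; the paper does not specify it):

```python
import numpy as np

def decaying_acceleration_predict(history, dt=0.1, horizon=30, lam=5.5):
    """Decaying-acceleration rollout: the initial acceleration a0 decays as
    a0 * exp(-lam * t), so the model behaves like constant acceleration in
    the short term and constant velocity in the long term.
    history: (T_h, 2) observed positions sampled at interval dt (T_h >= 3)."""
    v = (history[-1] - history[-2]) / dt
    a0 = (history[-1] - 2 * history[-2] + history[-3]) / dt ** 2
    preds, pos = [], history[-1].copy()
    for k in range(1, horizon + 1):
        a = a0 * np.exp(-lam * k * dt)   # exponentially decaying acceleration
        v = v + a * dt
        pos = pos + v * dt
        preds.append(pos.copy())
    return np.stack(preds)
```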
The DiPA trajectory-only prediction model provides accurate predictions that improve over the physics-based baselines, and captures distinct behaviours along with good probabilistic estimates, on both the full and motion-changes datasets. Differences between results on the full dataset are small, as the data are dominated by simple motion behaviours, which does not allow differences between models in capturing changes of motion to be seen.
The App-pose model improves over the other models on expRMS but shows higher error on NLL and minADE/FDE. This indicates that the model has learnt to accurately capture which mode is more likely, but has also followed a conservative mode generation policy that results in some instances of the dataset not being covered by the model. The App-net model produces balanced predictions on the various evaluation measures, although it has not shown advantages over the other models.
These results show the benefits of multimodal evaluations for describing different aspects of how well predictions capture observed behaviours, which is useful for pedestrian trajectory prediction. Some advantages of appearance-based models can be seen; however, further development is needed to allow appearance cues to provide substantial advantages over trajectory-based predictors.
Analysis of the data shows some significant limitations of the data and experiment. In a number of cases where a pedestrian initiates motion, movement in the trajectory is observed before motion or changes of gait occur according to the observed images. This effect originates in the source data, and may be caused by retrospective smoothing that allows future positions to influence earlier trajectory positions. A further effect arises from interpolation between trajectory samples when upsampling detections from 2 Hz to 10 Hz. This provides unreasonable advantages to trajectory prediction, allowing trajectory-only prediction methods to be aware of motion before it takes place, and preventing the advantages of appearance-based models from being demonstrated. This issue can be addressed through the use of a dataset with a higher sampling rate of observations, and by ensuring that dataset filtering does not allow future information to influence earlier timesteps. A further issue is that in many instances the ground-truth motion does not accurately describe the pedestrian motion as observed in the video, for example showing trajectory motion while a person is standing still. These errors in the data introduce incorrect measurements of performance; for example, a confident prediction (with narrow covariance) of stationary motion will be heavily penalised with high errors on NLL scores. Higher annotation accuracy would allow the advantages of appearance-based prediction to be more accurately measured.
VI. SUMMARY
In order to operate an autonomous vehicle in the vicinity of pedestrians, it is important to be able to estimate their future motion, and to identify significant cases such as changes of motion, which can indicate when they may enter the road area. To address this problem we introduce a new dataset task to perform estimation of multimodal trajectories, using pedestrian appearance to inform future motion. This task improves over previous datasets such as PIE and JAAD, which are limited to the camera frame, by evaluating prediction of motion in the world space, and by including evaluation of probabilistic estimates of different modes of motion.

Fig. 4: Examples of observed appearance data, along with ground-truth (past: green, future: red, prediction point: white) and multimodal predicted (blue) trajectory data. Top: successful case of motion initiation prediction. Bottom: example demonstrating limitations of the source data; the ground-truth trajectory (red) shows motion while the pedestrian is still stationary, pre-empting motion before it occurs, for example due to bidirectional filtering over time.
Comparison of these models shows that the neural-network trajectory predictor improves over the kinematic model and provides accurate predictions on all evaluation measures. The pose-based model improves on weighted trajectory estimates, indicating accurate mode estimation, however shows higher error on other tasks as a result of a conservative mode estimation strategy. Appearance-based prediction can provide advantages from using motion cues to inform predicted trajectories, however further development on this topic is needed to clearly capture these advantages.
An important limitation of the dataset is that trajectory samples include motion before it takes place, for example as a result of filtering of the dataset and through interpolation. These effects prevent the advantages of appearance-based prediction from being demonstrated. An improved experiment can be made by ensuring that dataset filtering does not allow future information to influence earlier timesteps, which will provide a more realistic experiment corresponding to real-world usage. A further limitation is that the experiment operates on a single pedestrian at a time; future improvements could support the prediction of multiple agents together in a scene, including the use of appearance for each agent.
The Impact of Banking Competition on Economic Growth and Financial Stability: An Empirical Investigation
The paper examines the level of competition in the banking market using different econometric models and analyzes the impact of the efficiency of the banking system on the economic growth of the country. The research discusses ensuring banking competition as a function of the Central Bank. The paper also includes some recommendations developed to improve banking competition. Our hypothesis is that the existence of high levels of banking competition and low concentration in the banking market balances the speed of money supply in the economic sector. As a result, the Central Bank's monetary policy will be more effective in achieving its core objectives. Therefore, banking competition contributes to the economic growth of the country. In addition, the monetary policy of the Central Bank concentrates on financial stability, which is one of the fundamental factors in the economic development of a country.
Introduction
The global pandemic has made it clear that one of the most important tasks for the economy of the Black Sea Region is to establish a reliable, financially sustainable banking system (Abuselidze and Mamaladze, 2020a; Abuselidze and Slobodianyk, 2021). On the one hand, this is due to the role of banking institutions in the movement of cash flows in the economy; on the other hand, to the institutional and business relations of banks with all subjects of the national economy. The economy of Georgia is characterized by significant systemic transformations. At the same time, because of the specifics of their activities, banks have been at the centre of many crises and of contradictory, hard-to-predict processes. It is noteworthy that financial support for the real sector of the economy, enterprises, and organizations should play an important role in the economic development of the country and in the introduction of market principles after the end of the global pandemic. In order for the country to stimulate the supply of money to the real sector of the economy, it is necessary not only to implement complex measures in the field of fiscal policy (Abuselidze, 2020b; 2020d), but also to implement a number of changes in monetary policy (Abuselidze, 2019a; Marcus, 1984). Accordingly, one of the important functions of the Central Bank should be to ensure banking competition. The absence and/or imperfection of the legal framework in this area hinders the banking sector from stimulating the supply of money to economic entities and from providing their financial support. In particular, commercial banks are limited by the standards set by major players, and the Central Bank resorts to frequent changes in monetary policy instruments to ensure an adequate money supply to the real sector of the economy. This creates risks in terms of price and exchange rate stability. The Central Bank ensures the functioning of the banking system through its legal-normative acts. Therefore, great importance should be attached to the diversification of the functions of the Central Bank, to which the function of ensuring banking competition should be added. The main task should be to manage bank concentration alongside price stability, which in our opinion will play an important role in regulating money supply. The aim of the paper is to study the level of competition and concentration in the banking system of Georgia, to substantiate the argument for banking competition as a function of the Central Bank, and to show the importance of its role in economic growth.
Literature Review
The paper is based on both quantitative and qualitative methods of research. To answer the research question, we analyse the scientific publications on the relevant topic, in particular on banking competition and bank concentration and their impact on the socio-economic situation of a country (Agostino, et al., 2008; 2010; 2012; Andrieş et al., 2014; Bikker, et al., 2005; Beck, et al., 2013; Berger, et al., 2009; Dash, et al., 2020; De-Ramon, et al., 2018; Fu, et al., 2014; Kanas, et al., 2019; Montes, 2014; Shair, et al., 2019; Staikouras, et al., 2006; Tabak, et al., 2012; Tan, et al., 2016; Titko, et al., 2015; Zigraiova and Havranek, 2016), on the influence of monetary policy on interbank competition (Abuselidze, 2019b), and on financial-economic policy (activities) against crises and the fragmentary approaches among them (Abuselidze, 2019c; 2020c). We also draw on the results of surveys conducted by leading research organizations, economic models, and statistical data (Bank of Georgia, 2019; Canhoto, 2004; Casu and Girardone, 2006; Chalikias, et al., 2020; Claessens and Laeven, 2004; Coccorese, 2008; Delis and Papanikolaou, 2009; Gischer and Stiele, 2009; Hamza, 2011; Liberty Bank, 2019; Leroy, 2019; Maradana, et al., 2017; Mandic, 2014; Marius and Căpraru, 2012; Memić, 2015; Nitsche and Heidhues, 2006; Ou and Tan, 2011; Rezitis, 2010; Ruckes, 2004; Ruzmatovich, 2020; Savel'eva, 2017; Serey, 2015; Sufian, 2011; TBC Bank, 2019; Tera Bank, 2019; VTB Bank, 2019; Wang, 2015; Yildirim and Philippatos, 2007). According to Marcus (1984) and Carletti and Hartmann (2003), strong competition weakens market power, reduces the profit margin, and forces banks to take more risks. However, the modern literature takes the opposite position and finds that the existence of competition is a guarantee of economic growth and financial stability. In addition, the availability of abundant data allows scientists to conduct certain types of "tests" for different countries. Based on the use of data from different countries, Schaeck et al. (2009; 2012; 2014) and Boyd et al. (2005) found empirical evidence that a competitive banking market was less likely to experience bankruptcies and banking crises. Eyubov (2012) also notes the special role of competition and believes that any country must create equal, competitive conditions for the stability of the banking system. Jayakumar, et al. (2018), in the paper "Banking competition, banking stability, and economic growth: Are feedback effects at work?", argue that banking competition plays an important role in the efficient functioning of the banking market and that its regulation should be one of the main goals of monetary policy. That study examined the relationship between banking competition, stability, and economic growth using data from 32 European countries over 1996-2014. Its results show that banking competition, as well as the stability/sustainability of the banking system, is an important long-term driver of economic growth. The results of the study are presented in the following scheme (see Figure 1).

Figure 1: The link between banking competition, stability and economic growth. Source: Compiled by the author based on the data of Jayakumar, et al. (2018)

The graph shows the possible causal link between banking competition, banking stability and economic growth. Economic growth and development are characterized by negative trends because of a poorly organized banking sector.
The operation of monetary policy plays an important role in promoting economic activity, growing production volume, and furthering the socioeconomic development of the country. The monetary policy pursued by the central bank serves two interrelated purposes: to encourage financial system stability and to stimulate activity in a weakened economy (Abuselidze, 2019a). Based on the literature review presented above, I develop the following hypothesis in this article: H1: The existence of high levels of banking competition and low concentration in the banking market balances the speed of money supply in the economic sector.
Materials and Methods
Structural as well as non-structural models are used to define and/or determine the level of banking competition in a country. Under the structural approach, we can determine the share of individual banks in the total assets of the banking sector and measure market shares and the concentration coefficient. Non-structural models assess competition based on the mechanism of price formation and marginal value in the banking market. In particular, the concentration index is considered the most common method of assessing competition and determining the share of large banks in the banking market (Stazhkova, et al., 2017).
The concentration index for the k largest banks is calculated as

CR_k = \sum_{i=1}^{k} q_i, (1)

where CR_k is the concentration index and q_i is the i-th bank's share in the market. The index is calculated for the three or four largest banks. In the case of three banks, if the index is less than 45%, the market is considered unconcentrated; from 45% to 70%, moderately concentrated; and above 70%, highly concentrated. The Linda (1976) index is used to determine the level of inequality between banks operating in the banking sector:

IL_K = \frac{1}{K(K-1)} \sum_{i=1}^{K-1} Q_i, \quad Q_i = \frac{A_i / i}{(A_K - A_i)/(K - i)}, (2)

where IL_K is the Linda index; K is the group of the largest banks in the banking sector; A_i is the cumulative market share of the i largest banks; and Q_i is the ratio of the average share of the i largest banks to the average share of the remaining (K - i) banks, with i varying from 1 to K - 1.
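As a rough, hypothetical illustration (not from the paper), the two structural indices above can be computed from a vector of market shares in a few lines of Python. The share values below are invented, apart from the two leading banks' 2018 asset shares cited in the Results section, and the linda helper follows the common formulation given in equation (2):

```python
# Illustrative sketch of the structural indices above; the shares are
# hypothetical except for the two leading banks' 2018 asset shares.
def concentration_ratio(shares, k):
    """CR_k: summed market share (in %) of the k largest banks."""
    return sum(sorted(shares, reverse=True)[:k])

def linda(shares, k):
    """Linda index over the k largest banks (one common formulation)."""
    s = sorted(shares, reverse=True)[:k]
    A = [sum(s[:i + 1]) for i in range(k)]   # cumulative shares A_1..A_K
    Q = [(A[i] / (i + 1)) / ((A[-1] - A[i]) / (k - i - 1))
         for i in range(k - 1)]              # oligopoly quotients Q_i
    return sum(Q) / (k * (k - 1))

shares = [39.53, 37.15, 8.0, 6.0, 5.0, 4.32]   # % of total assets (illustrative)
print(concentration_ratio(shares, 2))           # 76.68 -- about 77%
print(linda(shares, 2))                         # ~0.53; equal shares would give 0.50
```

For equal shares the Linda index reduces to 1/K, so values above that level indicate inequality among the leading banks.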
The non-structural approach includes the Lerner index (monopoly power index) (Lerner, 1934), which is calculated as follows:

L = \frac{P - MC}{P}, (3)

where L is the Lerner index, MC is marginal cost, and P is the monopolistic price. In the case of pure monopoly, the Lerner index is equal to one. The Lerner index practically shows the monopoly price set on the value of credit; it represents the difference between the weighted average oligopoly price and the bank's marginal costs. The value of the Lerner index can range from 0 (perfect competition) to 1 (monopoly). According to the Competition Law of Georgia (2012), the definition of "market monopolization" has been replaced by the concept of "market concentration". Thus, a structural model, the Herfindahl-Hirschman Index (HHI) 1 (Oliver and Hirschman, 1946; Weinstock, 1982; Werden, 1998), is used to assess the level of competition according to market structure indicators. An increase in the index score means a decrease in competition and an increase in market power, while a decrease in the index score indicates the reverse process.
HHI = \sum_{i=1}^{n} s_i^2, (4)

where s_i is the share of the i-th bank in the market, calculated as

s_i = \frac{q_i}{Q}, (5)

where q_i is the sales volume of the bank's product and Q is the total volume of the banking market.
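A similarly minimal, hypothetical sketch of the Lerner index and the HHI defined in equations (3)-(5); the price, marginal cost, and share inputs are invented:

```python
# Minimal sketch of the Lerner index and HHI defined above; inputs are hypothetical.
def lerner(price, marginal_cost):
    """Lerner index L = (P - MC) / P: 0 = perfect competition, 1 = pure monopoly."""
    return (price - marginal_cost) / price

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared percentage shares s_i."""
    return sum(s ** 2 for s in shares)

print(lerner(price=0.12, marginal_cost=0.04))     # ~0.67: substantial market power
print(hhi([39.53, 37.15, 8.0, 6.0, 5.0, 4.32]))   # ~3086 > 2250: highly concentrated
```

Note that the HHI computed on percentage shares ranges from near 0 (atomistic market) to 10,000 (pure monopoly), which is the scale used by the concentration thresholds cited below.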
Results and Discussion
One of the main goals of any central bank is to promote the growth of the overall national product. One of the factors influencing the development of the economy is the level of competition. Several studies show that there is no exact answer as to how much competition guarantees economic growth in the banking sector. In our opinion, competition is an important factor for growing the real economy for many reasons. As in many other businesses, competition in the banking sector can have a significant impact on the efficiency of financial services production, the quality of financial products, the quality of innovation in the sector, and so on. It is well known that competition is a positive phenomenon for many industries, and its effect can also be positively assessed in terms of the functioning of the banking sector. Financial markets and the banking sector play an important role in the efficient functioning and development of the economy. The Competition Law of Georgia (2012) covers competition in the banking sector as well as in various other fields. According to the methodology of the Competition Agency of Georgia, the levels of market concentration based on the Herfindahl-Hirschman index (score) are defined as follows: low concentration, HHI < 1250; average concentration, 1250 <= HHI <= 2250; high concentration, HHI > 2250. Since 2014, the banking sector in Georgia has been characterized by a process of consolidation: the merger of various banks and the constant growth of the banking portfolios of the leading banks in the sector. In 2000-2019, consolidation took the following form (see Figure 2). According to the given methodology, the concentration of the banking market was studied based on the data for 2016-2018. As the results of the research show, the HHI index is characterized by an upward trend. The figure for 2018 is 119.6 units higher than in 2016, which shows that concentration in the banking sector is increasing. All three years belong to the third group of the HHI category and far exceed the lower limit set for this group. At the same time, it is clear that only two leading banks (Bank of Georgia and TBC Bank) are responsible for the current situation. The market share of these banks is characterized by an upward trend. The Bank of Georgia had a 34.68% share of total bank assets in 2016, rising to 37.15% in 2018. The growth rate is much higher in the case of TBC Bank, which in 2016 held a share almost equal to that of the Bank of Georgia (34.67%) and increased it to 39.53% by 2018. We also used the HHI method to estimate the level of concentration in the banking sector according to net loans, the results of which are as follows (see Table 2). According to this method, the banking sector of Georgia is characterized by high concentration (monopolization). In 2016-2018, the size of the HHI is on an upward trend, although the growth rate is low compared to the assets method. In order to calculate the level of competition via the concentration index, we were guided by the total-assets variant of the HHI index and determined the market share of the two, three, and five leading banks in the Georgian banking sector. The results of the concentration index are as follows:

CR_2 = q_1 + q_2, (6)
CR_3 = q_1 + q_2 + q_3, (7)
CR_5 = \sum_{i=1}^{5} q_i = q_1 + q_2 + \dots + q_5. (8)

As we can see from the concentration index, the Georgian banking sector is characterized by high concentration not only in the case of the two largest banks, but also in the case of three or five banks.
The index shows that the two leading banks in the market hold about 77% of the total market share, the three leading banks about 81%, and in the case of five banks this rate exceeds 88% (see Figure 3).
Figure 3: Concentration of the banking market Source: Based on the data of the National Bank of Georgia compiled by the author
We also used further indicators to assess the degree of concentration of the banking sector. In particular, we calculated the Linda index for the two, three, and five largest banks. As the Linda index increases with the addition of new banks, we can conclude that the Georgian banking sector is characterized by high concentration (see Figure 4). The H-statistic is calculated by summing the estimated revenue elasticities; the higher the revenue elasticity, the more competitive the banking market. Here, H1 is the link between interest costs and (own equity minus assets), H2 is the relationship between staff costs and bank capital, and H3 is the ratio of net non-interest income to own equity (see Figure 5). The banking system plays an important role in the economy: it promotes the redistribution of financial resources between sectors of the economy and contributes to the efficient allocation of financial resources to promote economic growth and development (see Figure 6). Financial stability is a situation in which a banking system consisting of financial intermediaries, markets, and market infrastructure is able to withstand shocks and financial imbalances (Figure 7). In order to develop banking competition, it is necessary to achieve a reduction in bank concentration, which in turn leads to financial stability and makes it possible to reduce the likelihood of delays in the process of financial intermediation. Financial stability is a very important factor for the banking sector: it allows the sector to carry out financial processes, promotes the movement of cash flows between creditors and debtors, and plays an important role in the efficient distribution of financial resources. These factors contribute to economic growth and development. In contrast, financial instability threatens these aspects of the economy and may affect other sectors as well. The results of the study are presented in the following scheme (see Figure 8).
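As a rough illustration of the H-statistic described earlier in this section, the following sketch (with entirely simulated data and a simplified revenue equation, not the authors' specification) regresses log revenue on log input prices and sums the estimated elasticities:

```python
# Hypothetical Panzar-Rosse-style H-statistic: regress log revenue on log
# input prices and sum the elasticities. All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
w_funds, w_labor, w_capital = rng.lognormal(size=(3, n))  # input prices
revenue = (w_funds ** 0.3 * w_labor ** 0.2 * w_capital ** 0.1
           * rng.lognormal(0.0, 0.1, n))                  # true elasticities sum to 0.6

X = sm.add_constant(np.log(np.column_stack([w_funds, w_labor, w_capital])))
fit = sm.OLS(np.log(revenue), X).fit()
H = fit.params[1:].sum()  # H <= 0: monopoly; 0 < H < 1: monopolistic
                          # competition; H = 1: perfect competition
print(round(H, 2))        # ~0.6 by construction
```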
A strong banking sector is a precondition for the sustainable development of a country's economy. A competitive banking sector promotes the liquidity of the economy, which leads to the accumulation of capital, economic growth, and employment. Capital is the driver of sustainable economic growth, and capital formation and accumulation should be a key element of any strategy for economic growth. A competitive and diverse banking system is associated with a low-risk loan portfolio. Banks can develop business and encourage entrepreneurship in the country. In a highly competitive environment, financial institutions reduce interest rates to attract customers and diversify their banking products, which has a positive impact on financial sustainability. Commercial banks are a means of promoting investment activity through loans, which contributes to the growth of the country's economy. The quality of financial institutions, regulation and the quality of regulatory institutions, market refinement, and competition significantly improve the efficiency of capital investment and reduce the risks associated with different financing options. To improve competition, regulatory institutions need to focus on creating appropriate incentive frameworks. These frameworks should include rules for entering and exiting the market, precautionary principles, and oversight. In times of crisis, corrective actions and restructuring measures should be used, which should ultimately help reduce potential moral-hazard problems and avoid risk overruns, as well as reduce fiscal costs for taxpayers. Market capitalization requirements, as well as greater transparency in operations and prices, are types of actions that will improve oversight and thereby competition. In contrast, rising regulatory spending, which raises barriers to entry into the financial sector, deprives countries of many of the benefits of an efficient and innovative banking system and also distorts competition in the marketplace. The quality of competition is an important aspect of the functioning of the banking sector; therefore, sectors dependent on external finance grow faster in a competitive banking system. On the other hand, the development of the banking sector is a result of economic growth: the faster the growth rate of Real National Income and/or Gross National Product, the greater the demand for banking and/or financial intermediation.
Consequently, the quality of competition in the banking sector affects the availability of financial services and external financing. The link between banking competition and sustainability affects resource availability, choice, and economic stability. A competitive banking system improves the quality of credit access for banks. It can lead to an improvement in banks' loan portfolios, which ultimately contributes to the efficient allocation of resources. Maintaining competition at a certain level contributes to the sustainability and stability of the banking sector by minimizing risks, stimulating the loan market, and supporting the monitoring system.
Conclusion
Economic research shows that the sustainability of the banking sector stimulates economic growth in the long run. In turn, as banking stability increases, the quality of competition in the banking sector increases. The banking sector is highly competitive in countries where transparency in the banking sector is encouraged. At the same time, banking competition and banking stability strengthen each other: in addition to the positive impact of competition on stability, banking stability can, conversely, also help strengthen competition. The stability of the banking system can lead to well-organized investment of funds. In turn, effective investment contributes to further growth of savings, which can increase banking competition. Since banks play a fundamental role in financing the economy, banking competition affects the development of the economy. It is expected that a higher degree of competition in the banking market will ensure prosperity by reducing the cost of financial services and thus accelerating investment activity. This effect is due to two circumstances. On the one hand, higher levels of banking competition should lead to lower levels of monopolistic power among banks and, consequently, lower bank prices. On the other hand, increased competition should help banks reduce their costs, which will have a positive impact on their operations. Thus, a sustainable, stable, and efficient banking system is fundamental to facilitating the efficient allocation of resources and risks throughout the economy.
|
2021-05-10T00:03:45.593Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "a1e1dbb71679a11abba02d8ee661cf72a939b54a",
"oa_license": "CCBYNC",
"oa_url": "http://ecsdev.org/ojs/index.php/ejsd/article/download/1164/1147",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "deee95ea4e5ae2d802f9a885a14076598f8e9117",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
}
|
206514579
|
pes2o/s2orc
|
v3-fos-license
|
All that looks like “Brugada” is not “Brugada”: Case series of Brugada phenocopy caused by hyponatremia
Brugada syndrome (BS), a life-threatening channelopathy associated with reduced inward sodium current due to dysfunctional sodium channels, is characterized by ST-segment elevation with downsloping “coved type” (type 1) or “saddle back” (type 2) pattern in V1–V3 precordial chest leads (1, 2). Brugada phenocopy, a term describing conditions inducing Brugada-like pattern of electrocardiogram (EKG) manifestations in patients without true BS, is an emerging condition (3). We describe a case series of Brugada phenocopy with hyponatremia.
Introduction
Brugada syndrome (BS), a life-threatening channelopathy associated with reduced inward sodium current due to dysfunctional sodium channels, is characterized by ST-segment elevation with downsloping "coved type" (type 1) or "saddle back" (type 2) pattern in V1-V3 precordial chest leads [1,2]. Brugada phenocopy, a term describing conditions inducing Brugada-like patterns of electrocardiogram (EKG) manifestations in patients without true BS, is an emerging condition [3]. We describe a case series of Brugada phenocopy with hyponatremia.
Case 1
A 63-year-old Caucasian woman with a history of diabetes mellitus, hypertension, and schizoaffective disorder on haloperidol, presented to the emergency room with confusion and altered mental status. She was drinking up to 12 L of water and four to five 355-mL cans of beer every day. Physical examination including vitals was unremarkable except for confusion and disorganized thought process. Initial labs were significant for hyponatremia (Na+ 112 mmol/L). Detailed family history was not significant for any cardiovascular disease including BS. EKG showed prolonged QTc (547 milliseconds) and "coved type" ST elevations and deep T-wave inversions in leads V1-V3 (Fig. 1). However, no such changes were noticed on previous EKGs. Cardiac markers were within normal limits. Electrophysiological studies with programmed electrical stimulation to induce ventricular arrhythmias and left heart catheterization were unremarkable. A drug challenge test was not performed. Her haloperidol was held and water restriction initiated. Her sodium level improved gradually, with serial EKGs showing resolution of ST elevations and the QTc interval returning to normal (Fig. 2).
Case 2
A 54-year-old white man, with a history of hypertension, presented to the emergency room complaining of lethargy, vomiting, anorexia, and decreased fluid intake for 7 days. He denied any cardiovascular symptoms. Physical examination was unremarkable except for signs of dehydration.
Initial labs revealed significant hyponatremia (Na + 106 mmol/L) with EKG showing prolonged QTc (526 milliseconds) and a ''saddle back'' type ST elevation in leads V2-V3 (Fig. 3). Detailed family history did not reveal BS. Telemetry did not show any evidence of arrhythmia. Electrophysiological studies which included programmed electrical stimulation to induce ventricular arrhythmias and left heart catheterization were unremarkable. A drug challenge test was not performed. He was fluid resuscitated with gradual return of sodium level towards normal, and serial EKGs showing resolution of EKG findings with improving sodium level (Fig. 4).
Discussion
Brugada phenocopy associated with hyponatremia has been described very rarely; there have been only a few isolated case reports [4-7]. This, to the best of our knowledge, is the first case series of Brugada phenocopy with hyponatremia. Sodium channel blockers are used to unmask and/or induce EKG manifestations of BS in susceptible patients. Electrophysiologically, hyponatremia works similarly by decreasing the electrochemical gradient and causing decreased inward current, leading to Brugada phenocopy. We believe that a reduced transmembrane gradient was responsible for the Brugada phenocopy in our patients, which was reversible and resolved with improvement in sodium levels and, potentially, the transmembrane gradient. BS can be differentiated from early repolarization syndromes (formes frustes) by more than 2 mm of ST elevation in the right precordial leads with a QRS duration greater than 110 milliseconds, which was not appreciated in either of these patients [8]. The prognostic implications of these changes are unknown; however, both our patients were doing well at the 12-month follow-up at the cardiology office. Management of these patients is supportive, with intensive observation. Clinicians should be aware of the association of Brugada phenocopy with hyponatremia and be vigilant for a diagnosis of true BS in cases where EKG findings fail to resolve with supportive management.
|
2018-04-03T02:00:21.989Z
|
2016-02-15T00:00:00.000
|
{
"year": 2016,
"sha1": "d107176d8e3fe18f717b4b0469c2b674e84deadc",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jsha.2016.02.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63e0787570009318e92c348fecea2262004dd240",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
150451875
|
pes2o/s2orc
|
v3-fos-license
|
Robert R. Blake, With Recognition of Jane S. Mouton
This article reviews the life and contribution of Dr. Robert R. Blake, who received the Lifetime Achievement Award in 1994 from the International Association for Conflict Management for his pioneering work and prolific career in the field of conflict management. As a longtime co-author and collaborator, Dr. Jane S. Mouton certainly would have been joint recipient of this award if it were not for her death in 1987: The vast majority of their research was published together. Jane Mouton and Robert Blake became famous for their promotion of the Managerial Leadership Grid and through their work as consultants to a variety of professions and organizations. But there is much more to Robert Blake’s career and contributions than the Grid. Together, Blake and Mouton were tremendously influential in their work on managerial leadership and organizational development.
One assumption holds that employees dislike work, which requires managers to supervise employees closely, called Theory X; the other holds that employees are highly motivated and eager to perform well in their work, which allows managers to lead by creating opportunities for employees to achieve, called Theory Y. The Grid proposed a middle ground, but one that set out to demonstrate the advantages of Theory Y as a leadership style, by proposing styles that emerge from a set of managerial concerns: whether managers have a high or low concern for their employees and whether they have a high or low concern for production. These separate but interdependent concerns lead to five distinct managerial leadership styles.
According to the Managerial Leadership Grid, managers who have high concern for people and for production use a team style, encouraging and supporting employees to work as a team to reach optimal productivity. Managers who have high concern for employees but low concern for productivity use a country club style, where the work environment is friendly and supportive but not necessarily productive. Managers with high concern for productivity and low concern for people use a produce-or-perish, or task, style, in which the manager pressures employees and controls the environment, emphasizing rules and control over a supportive climate in the workplace. The style used when managers have low concern for both people and productivity is referred to as impoverished, in which the manager works to avoid problems more than support employees or strive for innovative approaches toward productivity. And a moderate emphasis on people and productivity yields a middle-of-the-road approach, or compromise style, which provides some support and accomplishes some goals, but not at optimal levels of either.
This original model was published in 1964. Malloy (1998) described the Grid's origin as follows: Blake, Mouton, Barnes, and Greiner (1964) first described the application of [the Grid] in a manufacturing plant of 4,000 employees in 1963. This was a longitudinal study over 12 months, but without a control group.
In total, 800 employees were exposed to the six phase Grid [organizational development] programme, and according to the authors, the results were impressive. At the individual level, they reported major shifts in dominant values, attitudes and behavior patterns. At what could probably be considered the team culture level, they noted improved union, community and parent company relationships and an improvement in team level performance surrogates including items such as boss's work effort, problem liveliness in group discussions, quality of decisions made and profit consciousness. However, the assessments were made after the study and compared with respondents' retrospective perceptions of the same items prior to the study. (pp. 23-24)

The Grid is primarily conceptual, and Blake and Mouton used it prescriptively to treat managerial issues. Malloy (1998) pointed out that "despite the richness of the Grid model when viewed as a model of leadership culture and the widespread application of Grid . . . , it has not been extensively or rigorously tested" (p. 23).
In 1970, Blake and Mouton proposed the Conflict Grid, which is often overlooked in the progression of dual concern models related to conflict and negotiation. The goal of this model was to identify how people think about conflict as a predictor of the approach they will take. Quite similar to the Managerial Leadership Grid, the Conflict Grid used a 1 to 9 scale (1 = low, 9 = high) on each axis, with the horizontal axis representing concern for producing results and the vertical axis representing concern for people. This dual concern model is much closer to the negotiation and conflict models that were to come, with high concern for people and high concern for results (9, 9), which represents an approach that uses problem solving; moderate concern for people and moderate concern for results (5, 5), which yields an approach of compromising; low concern for people and high concern for results (9, 1), which results in an authority-obedience approach; high concern for people and low concern for results (1, 9), which yields an approach in which the manager works to smooth over issues and protect harmony; and low concern for people and low concern for results (1, 1), which results in a manager who withdraws.
Between 1964 and 1987, Blake and Mouton co-authored a significant number of journal articles, book chapters, and books directly related to the Managerial Leadership Grid, applying it to the military, to NASA, to health care, to airlines and their cockpits, as well as to organizational management, human relations, and corporate mergers and acquisitions. In 1967, Blake and Mouton added a third dimension to the dimensions of concern for people and concern for production; this dimension was referred to as thickness, or the depth of the managerial style. 1 After Mouton's death in 1987, Blake published two more books-in 1991 and 1994-that addressed further developments and applications of the Grid.
Personal Life of Robert R. Blake
Robert R. Blake was born on January 21, 1918, in Brookline, Massachusetts. In 1941, he married Mercer Shipman Blain. They had a daughter, Cary Mercer Blake, and a son, Brooks Mercer Blake. Blake served in the Army during World War II until his discharge in 1945. He retired in 1997, and he died on June 20, 2004, in Austin, Texas.
Education
In 1940, Blake earned his Bachelor of Arts in psychology and philosophy from Berea College, a college for less privileged students who were all required to work on campus as part of their tuition. According to one account, his experience at Berea College was "truly memorable and inspiring" (Obituary of R. R. Blake 2004). In 1941, he earned a Master of Arts degree in psychology from the University of Virginia. His thesis was entitled, "The development of opinions regarding the differences between Negroes and Whites." 2 And in 1947, Blake earned a doctorate in psychology from the University of Texas at Austin; his dissertation was entitled, "Ocular activity during the administration of the Rorschach Test." 3

1 A much later version (McKee & Carlson, 1999) of the Managerial Leadership Grid added two managerial leadership styles: opportunistic and paternalistic. The paternalistic style is characterized by an oscillation between the impoverished and the produce-or-perish styles, in which the manager sometimes praises and supports employees but maintains control and discourages challenges from employees about the way things are done. The opportunistic style was also added as a managerial leadership style, but it does not fit neatly on the grid; it characterizes a manager who attempts to lead in a way that will result in greater personal benefits; in other words, in this case, the manager has higher concern for self than for either people or production.

2 Thanks to Nancy Kechner, Ph.D., RDS Research Software Support, University of Virginia Library Liaison for Biology, Biomedical Engineering, and Psychology, for her assistance in identifying Blake's master's thesis.

3 Thanks to Victoria Pena, Ask a Librarian intern at the Perry-Castañeda Library, University of Texas at Austin, for her assistance in finding Blake's dissertation.

Career

Blake continued as a fulltime faculty member at the University of Texas in psychology from 1947 to 1964. In addition to lecturing in the United States (e.g., Harvard University), he also had an international presence, lecturing at Oxford and Cambridge Universities.
Shortly after joining the faculty at the University of Texas in Austin in 1947, Blake spent a year-in 1949-as a Fulbright scholar at the Tavistock Clinic in London, England, where he participated in research related to psychoanalytic approaches to group therapy. From 1950 to 1960, Blake studied group behavior at the National Training Laboratories in Bethel, Maine; this project started as a summer program, but he continued working there for ten years during the summers and serving as a member of the Board of Trustees. He cited his time there as some of the "richest learning experiences" of his life (Blake, 2004). During this decade, Blake worked with Herbert A. Shepard of Standard Oil (later the Exxon Corporation) on a ten-year research project, which was pivotal in Blake's development as a consultant; it was during this project he learned to apply his theory and methods of organizational transformation to corporate settings.
In 1961, Blake was invited to give the Alfred Korzybski Memorial Lecture (AKML) at the General Semantics Institute. Each year since 1952, distinguished individuals were invited to deliver a lecture on a topic of their choosing within the field of general semantics. The annual lecture honors Alfred Korzybski, who created the field of general semantics (not to be confused with semantics) and his goals for human development. Together with Mouton, Blake was invited to again deliver the AKML in 1982 (http://www.generalsemantics.org/our-offerings/programming/alfred-korzybski-memorial-lec ture-series/).
In 1994, Robert Blake received the Lifetime Achievement Award from the International Association for Conflict Management for his pioneering work and prolific career in the field of conflict management. Although IACM does not grant posthumous awards, as a career-long partner and collaborator, certainly Jane Mouton shared credit for his receiving this award. Up until Mouton's death in 1987, Blake and Mouton were close collaborators, and the vast majority of their research was published together.
Jane Srygley Mouton
Because Blake was the recipient of the IACM Lifetime Achievement Award, this review is primarily about him. However, Blake's contributions over his career were developed and co-authored with Jane Srygley Mouton, whose ideas and efforts were highly influential in Blake's career and to his many contributions. In many ways, Blake's career is inseparable from Jane Mouton's. Therefore, we would like to pay tribute to her and her collaboration with Robert Blake (see Figure 2).
In addition to playing a significant role in developing the original Managerial Grid in 1961, Mouton was co-author with Blake on over three dozen books, 460 journal articles, and 290 book chapters (Grid International, Inc., 2016). Together they co-founded Scientific Methods, Inc. (later renamed Grid International, Inc.). Unfortunately, there seems to be little written about Jane Mouton's background and family life. She received several awards for her books, including from the American College of Hospital Administrators (1982), the American Journal of Nursing (1982), and the American Management Association (1982). She died of cancer in 1987 (Burke, 2017). In an autobiographical piece written in 1992, Robert Blake wrote the following tribute to Jane Mouton: The happiest day in my professional life came in the fall of 1987. Jane Mouton and I had just learned that we were both to be inducted into the Human Resource Development Hall of Fame on December 9. The gratification was made doubly meaningful because of the simultaneous induction; in other words, a recognition that, whatever contribution had been made, it had been made as a team, not as two separate individuals. That gave validity to the operating premise of our entire joint career.
This moment of great fulfillment was all too soon followed by ultimate sorrow. The ceremony was scheduled in New York, immediately upon our return from a trip to India, where we addressed the International Congress of Training and Development, and then to Athens, where we were scheduled for client activity. The presentation in Delhi went quite well, but at this point a difficulty arose. Jane complained of abdominal pains and, as they grew worse, it was determined she should be hospitalized. She decided to cut the trip short and returned to Austin in late November. I continued to fulfill our commitments, phoning her daily in order to stay apprised of the latest events. Though she remained hospitalized, Jane claimed to be making progress and even thought she . . . This tragedy symbolizes the end of a significant part of my career. Jane and I were partners, working hand in hand for 36 years. Together we formulated the Managerial Grid, the conceptual framework of which is contained in a book that has already exceeded sales of two million copies, and is available in sixteen languages. We also published Synergogy, a book that outlines a radical solution to many of the chronic problems facing teachers and educators today. These were only two of a long line of other books (38 in number), all mutually coauthored by us. Our major effort, however, involved the creation and development of Scientific Methods, Inc., and the leadership we provided that has sustained it for three decades. For all of these reasons, this autobiography can only be written by weaving the centrally important fact of our joint cooperation into the story which follows. (pp. 106-107)

Beyond the Grid

Although Blake and Mouton are known primarily for the Managerial Leadership Grid, their research extends well beyond their focus on managerial leadership. 4 In addition to the many books and articles published on the Grid, Blake and Mouton, and their occasional co-authors, wrote about a number of other subjects related to organizational behavior. They conducted many studies on group conformity and intergroup competition as it occurs in settings with diverse group and individual opinions (e.g., Blake, Helson, & Mouton, 1957; Coleman, Blake, & Mouton, 1958; Helson, Blake, & Mouton, 1958), and they wrote many articles on group dynamics and group development (e.g., Blake, Mouton, & Fruchter, 1962). Blake and Mouton wrote extensively on organizational development, its history, and its value for managers in developing respect and trust (e.g., Blake & Mouton, 1976a, 1979a, 1979b). In addition, they discussed effective management for corporate change, especially during mergers and acquisitions and international mergers (e.g., Blake & Mouton, 1983, 1985). Taking a leave of absence from the University of Texas, Blake went to work as an internal organizational development manager to examine the inner workings of Lakeside (apparently an invented company name), a manufacturing plant with more than 800 employees. He co-authored with Mouton a book that treated his experience as a case study for organizational development practices, entitled The Diary of an OD Man (1976a). In addition, Blake and Mouton wrote a number of articles on how to measure organizational training for its effectiveness.
Blake and Mouton's research primarily addressed the Managerial Leadership Grid. They continually wrote in response to questions and challenges to their model (see, for example, Blake & Mouton, 1976b, 1982a) and promoted the value of the Grid. Blake and Mouton argued that the 9,9 approach to leadership was more useful, more preferred by managers, and more effective than the situational approach to leadership (Blake & Mouton, 1978, 1981, 1982b, 1982c). They repeatedly competed with Hersey and Blanchard's situational leadership theory (Hersey, Blanchard, & Natemeyer, 1979), which argued that the best leadership style varies according to the specific managerial context. Blake and Mouton refuted situationalism as an approach to leadership, arguing that it ignores principles of behavioral science and treats concerns for people and production as separate situations.
Conclusion
Together, Blake and Mouton were tremendously influential in the work they did on managerial leadership and organizational development. One of the many tributes to Robert Blake in the memory book for his memorial service shows the kind of person he was: I first met Bob Blake around 1979, shortly before I attended The Managerial Grid Seminar as a twenty-five-year old. Both Bob and Grid had a profound influence on my professional life, as I ultimately became the international Grid Associate for Ireland. Bob was a truly original thinker and possessed a first rate and constantly enquiring mind. . . . When he visited Ireland he spoke of how he had a strong feeling of recognition in the countryside from his cultural forebears. Bob was truly one of the greats. He leaves a superb testimony to his life and achievements through the countless people who have benefited from Grid. As the old Irish expression has it "May his soul rest on the right hand of God"-"Ar dheis De go raibh a anam." (James Conboy-Fischer, April 6, 2009) It is no wonder the International Association for Conflict Management selected him to receive the Lifetime Achievement Award. This award was clearly well deserved.
of Human Communication Research, is an ICA fellow, and recipient of ICA's B. Aubrey Fisher Mentorship Award. At the University of Maryland, he was chair of the Department of Communication and was acting associate dean for Graduate Studies and Research. He earned his graduate degrees from the University of Wisconsin-Madison and his undergraduate degree from Columbia University. He was born in the Bronx.
Cameron B. Walker graduated from Temple University's Klein College of Media and Communication with a Bachelor of Arts degree in Strategic Communication. He now works as an account executive for a healthcare IT consulting firm in Philadelphia, Pennsylvania.
|
2019-05-13T13:05:16.305Z
|
2019-03-11T00:00:00.000
|
{
"year": 2019,
"sha1": "f08168b99f8a34d3d45c087d8ff5086dfc77b89b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1111/ncmr.12151",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "92b6925aca0da974918fceae28c7ed51a75c437a",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
237838870
|
pes2o/s2orc
|
v3-fos-license
|
Citizenship and Religious Freedoms in Post-Revolutionary Egypt
† This is the outcome of a research project that was sponsored by the Council of Sciences (ACSS) and the Development Agency (SIDA). I'm immensely grateful to Alaa Moustafa Saad for the substantial support and valuable assistance she provided throughout the fieldwork phase upon which this writing is based. Abstract: The majority of the social and political forces that spearheaded and actively participated in the 2011 and 2013 waves of uprisings catapulted the demands to reestablish 'citizenship' as one of the main foundations of a new social contract aiming at redefining state–society relations in a new Egypt. Meanwhile, the concept of citizenship has been increasingly featured in the discourse and practice of a wide variety of state actors and institutions. In fact, Egypt's experiences with the modern nation-state project concerning the conceptualization of citizenship, and the subsequent implications for religious freedoms and the role of religion in the polity at large, have gone through various ebbs and flows since the beginning of the 20th century. The concept of citizenship as such has faced a plethora of challenges and has been affected by the socioeconomic and political trajectories of state–society relations during the Nasser, Sadat, Mubarak, and, most recently, Sissi regimes. Dilemmas of geographical disparities and uneven access to resources and services, in addition to issues of discrimination against ethnic and religious minorities such as Coptic Christians, Shiites, Nubians, and Bedouins, or on the basis of gender, are among the main accompanying features of the neoliberal order that was introduced and then consolidated first by Sadat's Open Door and then Mubarak's state-withdrawal policies, respectively. To what extent did the conception and practice of citizenship rights and religious freedoms, as defined by state and non-state actors, change after the demise of the Mubarak regime? In addition, what is the role of Egyptian civil society vis-a-vis the state in this process of conceptualizing and/or practicing citizenship rights and religious freedoms in the new Egypt? Focusing on the aforementioned questions, this paper aims at shedding some light on the changing role of religion in the Egyptian polity post 2011, while also highlighting the impact of the sociopolitical and economic ramifications witnessed within the society on the scope of religious liberties and citizenship rights as a whole.
Introduction
Social and political activists and intellectuals from the Arab region have often used the concept of 'citizenship' to denote a set of features pertaining to the foundational identity of the inhabitants of modern nation-states in Arab countries. Nonetheless, there seems to be a research gap when it comes to the scholarly contributions on the subject in the Arab context, especially with regard to fieldwork research that tackles the topic from an ethnographic standpoint. Among the available resources in the contemporary period, one notes that in the 1990s, a few studies were published on the question of citizenship in the Arab World (Joseph 2000), and later a book titled States Without Citizens (Jandora 2008) was also issued. Several contributors (Moghadam 2003; Abou-Habib 2011) also produced commendable studies on the status of citizenship in the Arab world. The bulk of these writings focused on specific (micro) features, such as issues of gender inequality and civil society activism. In the post-2011 phase, there were also significant contributions (Meijer and Butenschon 2017; Pizzo 2015) that attempted to analyze the ramifications of the Arab uprisings on the status of citizenship rights, with a focus on the major social and political forces active in Arab countries. However, there is still a need to revisit the concept of 'citizenship' from a holistic angle that incorporates primary resources and reflects the fieldwork realities experienced by the activists and practitioners on the ground. This endeavor is pivotal if we are to contextualize this subject in relation to the socioeconomic and the political underpinnings of the transformations witnessed in state-society relations in the Arab region.
Conceptual Framework
Generally speaking, the conceptualization of citizenship in the modern context has been closely tied to the development of the western model of the nation-state project. In this sense, citizenship emerged as a normative as well as an empirical (positive) notion. The normative dimension of the concept has usually been employed to describe the proactivity of the 'citizens' with regard to the public/political affairs of their communities, as opposed to the passivity of consumerism or to individualized societies and other typologies of social organizations, such as the networks that may work against the common good of the society. Elements of this approach could be traced in Jean-Jacques Rousseau's conception of the "Volonté Générale" and in Marxist and Hegelian ideas on the relationship between the individual and the modern state (Sater 2013). As such, citizenship is viewed as a process of attaining the 'common good' of the society, whereby the individual is mandated with the responsibility to proactively participate in the public sphere for the purpose of achieving that aim which should also yield the rights of all individuals (Jones and John 2002).
Another perspective on citizenship comes from the liberal/capitalist school, which views citizenship as based on a certain amalgam of rights. These rights include, but are not limited to, civil rights, such as the right to private property, the right to choose one's religion, and the right to privacy. These rights need to be protected by the state as it officiates its social contract with society. Henceforth, they could be considered as the basic foundations of the modern nation-state project, with its claims to the monopoly over the use of coercion and its responsibility to develop and exercise the rule of law. In the late 19th/early 20th Century, the notion of citizenship also widened to encompass the right to political participation, as well as the right to be protected from the potentially oppressive and coercive state apparatus. This included the rights to create political associations, vote, and strike. "The inclusion of socialist ideas in the late 19th century meant that the state would need to guarantee that these rights could be used in meaningful ways, thereby guaranteeing minimum welfare as part of the development of social rights . . . [There are] three types of citizenship rights that have evolved since the 1800s: civil, political, and social rights" (Marshall [1959] 1973). Within the Western context, a host of contributors have critiqued the notion of universal citizenship rights in modern political theory. Young (1989) notes that the conception of citizenship in modern political thought, and specifically in western liberal democracies, should be challenged on the basis that citizenship rights do not necessarily guarantee an equitable social contract between the modern nation-state and the various social and cultural subgroupings within the society. In a critique of the assumption that granting equal rights to 'citizens' is likely to erode the various forms of biases and disadvantageous conditions under which such groups may dwell, Young argues that acknowledging group difference is essential in order to minimize discrimination and oppression.
Applying the modern model of 'citizenship' with its variations on the Middle East with the assumption that there exist 'classical' citizenship rights as per the modern conception of the term would likely yield a wide variety of limitations. In the words of Nazih Ayubi, "In societies where theoretical individualism is weak and where classes are embryonic, neither the conventional liberal nor the conventional Marxist paradigms seem to be able to capture the realities of the situation" (Ayubi 1995). Thus, given the limitations pertaining to the aforementioned conceptions of citizenship, a 'back-to-basics' empirical approach may be more relevant when scrutinizing Middle Eastern polities (Sater 2013). Such an approach could help in shedding light on the dynamics of state-society relations in Arab states and the relevant trajectories of citizenship rights in the Arab polity, which arguably witnessed the formation of 'states without citizens', whereby the basic civil and political rights of even seemingly privileged groups were exposed to sizable infringements.
In this regard, the concept of the ethnocratic state, in which the state's role is mainly to preserve the domination of one ethnic group, not that of the wider society, has also been employed by some analysts when scrutinizing state-society relations vis-à-vis citizenship rights in the Arab world. While this could be considered an overgeneralization, given the sizable variations in state-formation experiences in various Arab countries, some features of that model have been evident in several cases, yet with different manifestations of group-based domination over state structures. In several cases throughout the Middle East, religious majorities (Sunnis, Shiites, etc.), cliques of beneficiaries (businessmen, interest groups, etc.), and/or particular class typologies (military elite, state employees, etc.) exercised power leverages over state-society dynamics and imposed sorts of hegemony over state structures.
The study of citizenship in light of the wave of the Arab uprisings of 2011 could be quite beneficial when it comes to the production of knowledge pertaining to state-society dynamics in the Arab region and may also shed some light on the theoretical assumptions regarding the conceptualization of citizenship at large. One of the hypotheses that are often put forward is that citizenship rights are likely to gain momentum at times of sociopolitical mobilization. Again, the development of the Western polities in the 18th and 19th centuries seems to suggest that trajectory (Turner 2000). While it may still be early to reach conclusive findings in this regard when looking at the Arab uprisings, such an attempt may offer some insight concerning the prospects of sociopolitical transformation in the Arab region.
On Egypt
Egyptian intellectuals such as Qelada (1999) and Al Bishri (2004), among others, have argued that geographical as well as historical factors played a sizable role in creating a melting pot that formulated the basis of Egyptian identity over the two millennia that followed the birth of Christianity. As such, the interaction of both Islam and Christianity has resulted in a homogenous, albeit not always harmonious, relationship between the two religions, whereby a symbiotic process of co-existence and mutual acknowledgment was a vital component in the Egyptian polity since the Arab conquest. The Arabization of the Coptic Church and the protection of Copts under Muslim rule were among the most important facets of this relationship (Takawi 2012). In the post-1923 era, the principle of equality before the law and the 'constitutional rights' given to both Muslims and Christians were also crucial features of this equation. 1 Shami (2009) provides an important insight on the development of the concept of the 'citizen' in Egypt in the early 20th Century, whereby the term 'minority' was rejected by several social and political forces, including the Coptic Church, for being inapplicable to the Copts; instead, the notion of the 'citizen' and the attributes of Egyptian citizenship were the main foci of the public debate on nation-state building. An in-depth analysis of the historical progression of the notion of citizenship in Egypt can be attained by scrutinizing the censuses that took place in Egypt from 1882 until 1986. In doing so, one is able to observe the changes pertaining to the concept of being 'Egyptian' over the years and the different roles that race, religion and national origin played in defining Egyptian citizenship in various historical phases.
There has also been a dichotomy between the liberal and Islamist definitions of citizenship rights in the Egyptian context. Whereas some Islamic thinkers perceive that 'dhimmi' 2 rights should be given to followers of Abrahamic religions as the only non-Muslim subjects acknowledged under the umbrella of the Islamic Umma, liberals, on the other hand, promote the concept of citizenship from the viewpoint of plurality and equality among the different religious, ethnic, and linguistic subgroups within the society, regardless of the dominant religion and under no particular umbrella apart from that of equal 'citizenship' rights for all subjects of the state.
Other schools of thought, such as pan-Arabism, also emphasized the centrality of the concept of citizenship and its essential role in fostering the nation-state model in Arab countries. In fact, most of the leading intellectuals who propagated the notion of pan-Arabism in the Levant, Fertile Crescent, and Egypt in the early-mid 20th Century, such as Constantin Zureiq, Ba'athism proponent Michel Aflaq, and Jurji Zaydan, were Arab Christians who believed in the paramount role of Arab culture as a driver for unity both within and among Arab states. However, the autocratic applications of the Arab nationalist project, exemplified in the two renditions of Ba'athism in Syria and Iraq and the Nasserist model in Egypt and elsewhere, didn't really reflect or advance the essence of citizenship (Farah 2019). Despite the rhetoric that these regimes employed regarding citizenship rights, most of them witnessed different degrees of discrimination against religious and ethnic minorities and failed to deliver on their pledges of equal rights to all citizens, irrespective of their religious denominations or their ethnic, linguistic, or cultural backgrounds.
2.1.1. Disparities
From a regional dimension, the executive authority of the modern state in Upper Egypt has arguably been weaker than it is in North Egypt. The main reasons for this are the sizable geographical distance and the relative strength of family and tribal connections in Upper Egypt, particularly vis-à-vis official institutions of state governance. Throughout the 1990s and 2000s, the government had to deal with a continuous state of turmoil in Upper Egypt that was characterized by a combination of an overall deteriorating socioeconomic status and a security threat posed by increasingly powerful Islamic militants (Ghanem 2014). In the aftermath of the January 25 Revolution, these limitations pertaining to state presence were further exacerbated throughout the country at large and in Upper Egypt in particular. Yet, on the public level, there is more reason to believe that, for the most part, the Egyptian state has been only paying lip service to the issue of citizenship rights. Some observers would cite the fact that this discourse of acknowledgment of Upper Egypt and the frontier governorates is nothing new, and that it has in fact existed since the inception of the 1952 regime, albeit with no practical willingness or ability on the part of the state to alter this status quo. It could be argued that the Egyptian state has followed a "seasonal" strategy in dealing with these disfavored areas, issuing statements and decrees of special services and projects announced only during times of turmoil and tension, such as cases of terrorism, elections, or natural disasters (Soliman 2011). An example of this is the most recent mention of North Sinai in the discourse of state officials post 2011, which only took place in the aftermath of the militant attacks in Sinai in the early-mid 2000s and the subsequent clashes that occurred between the police and military apparatuses, on the one hand, and the locals, on the other (Karkabi 2013).
On another note, the problem of these disfavored areas could be viewed as a structural one. It is rather insufficient for the state to pump in investments and expect a harvest, given that the necessary means of production are lacking, including proper infrastructure, trained manpower, and so on. Hence, a comprehensive strategy is absent, which limits the capacity of the state to address the core of the issue in a way that surpasses the budgetary allocations that usually end up being ineffective in tackling the dilemma of geographical disparities.
When looking at the majority of development program interventions that have dealt with citizenship rights, one notes that several parallel transformations led to the increasing emphasis of development studies on citizenship rights projects since the late 1990s. On the one hand, the focus of participatory development programs, reflected in community projects, shifted gradually towards political participation and other areas of empowerment for disenfranchised classes/groups in order to increase their influence over administrative decision-making institutions. On the other, there was also the rise of the 'good governance' agenda, which emphasized issues of decentralized and efficient systems of responsive governance. This was also coupled with the emerging overlap between the fields of human rights and development, represented in the ascendance of the 'rights-based approach' in the late 1990s. This approach was adopted by a number of organizations such as the UK Department for International Development (DFID) and UNDP.
In the case of Egypt, civic education and awareness projects constituted the bulk of the developmental initiatives/activities taking place under the aforementioned agenda; these targeted comparatively limited societal segments and lacked a sound impact. In fact, most of the existing channels through which donors engage with the Egyptian state and the CSOs working on citizenship rights seem to be quite inefficient, as they lack a formalized, sustainable structure that can nurture such interaction. Several analysts and practitioners have suggested that there is a research gap regarding the scope and magnitude of the impact of citizenship rights initiatives in the Egyptian context. Some of the existing literature on such initiatives has criticized them for being largely politicized activities designed to maintain the political status quo whilst providing a facade of pluralism, attracting more donor funding along the way (El-Mahdi 2011).
After The Spring
In the aftermath of the Arab uprisings, a multitude of contributors revisited the concept of 'citizenship' in the Arab world in an attempt to scrutinize the emerging changes relating to the notion. In 'Arab Cities after the Spring', the authors dissect the urban fabric of a plethora of Arab cities and observe the transformations in the citizens' perceptions of their own local communities in light of the uprisings. In that regard, a variety of writings note that significant shifts in power hierarchies, as well as in the rationales and priorities of policymaking, have been recurring features across Arab polities post 2011, which means that the conceptualization of citizens' rights and responsibilities is in the process of redefinition in most of these locales (Stadnicki et al. 2014).
In these post-Spring contributions and others, there appears to be an underlying theme of redefining citizenship in terms of shared experiences, activities, and aspirations of certain social groups, rather than top-down features and characteristics determined or dictated by elite, mostly state, institutions. The common slogans, dress styles, and even tactics and strategies of political dissent observed among youthful segments throughout the region, and elsewhere, seem to suggest that a process of identity building is also in the making among this 'youth bulge' in Egypt and several other Arab countries. It could be argued that a social non-movement (Bayat 2007, 2013) also plays a paramount role in shaping how these youth groups formulate their perceptions regarding themselves and their social and cultural environments. In Egypt, since the demise of the Mubarak regime, there have been several alterations in the dominant power relations within the polity. These changes have impacted the evolution of citizenship rights in Egypt, and, consequently, the rights discourses of political actors were not consistent at all times. The exercise of some rights has been enhanced at certain times during the transition, but these rights have also been exposed to attacks and constraints at other junctures. "In particular, those rights which require a new interpretation of religious and cultural traditions to support their expansion, such as religious freedom and gender equality, have often been obstructed" (Assad and Fegeiri 2014).
Methodology and Fieldwork
In order to tackle the research questions outlined earlier in this paper, the methodology aims at attaining a deeper analysis and a considerable degree of historical depth with regard to the conception of 'citizenship' and the practice of 'citizenship rights' in Egypt. In addition to the overview of historical developments relating to the evolution, or, at times, devolution of the notion of citizenship and the rights associated with it, which is portrayed via desktop research and analysis of a variety of writings dealing with the topic, the research also delves into questioning the application and awareness of 'citizenship rights' within the relevant social and political groups, including those that were involved in the January 25 and June 30 movements.
The research that the paper utilizes is based on qualitative fieldwork. It mainly employs a set of ethnographic tools to portray the experiences of the respondents via participant-observation methods and snowball sampling techniques. What I aim to deliver here is a hands-on account of the experiences of some of the individuals and groups involved in fostering citizenship rights in Egypt before and after 2011 and 2013. This exercise is needed, especially in light of the dearth of material on this subject matter from an ethnographic standpoint.
During the period from 2015 to 2017, the author held several meetings with interviewees from various backgrounds, including intellectuals, civil society practitioners, activists, and party politicians. The interviews targeted prominent as well as low-key grassroots activists and intellectuals involved in different capacities with the social and political movements that aimed at the promotion of citizenship rights over the past two decades. These informative meetings tackled a series of themes related to citizenship rights, including the issue of religious freedoms. Moving from the specific to the more general, the paper follows an inductive approach as it attempts to paint a holistic picture concerning the views and activities of this set of social and political forces towards issues of citizenship rights.
In these aforementioned meetings, some of the subtopics that materialized during and after the 2011 Revolution, such as citizenship from a legal perspective, the religious cases of Baha'is and other religious minorities, and women's rights, to name a few, were further discussed and explored. Some of the figures interviewed in order to document and analyze these experiences include intellectuals and activists who have all been associated with a plethora of voluntary and professional initiatives closely linked to the theme of citizenship in the 2000s and 2010s. Their experiences were of paramount importance to the fieldwork. 3 This sample of interviewees offered a multitude of angles from which 'citizenship' was viewed and dealt with. It is worth noting that the age group and ideological background of the respondents played a major role in the way each of them perceived and defined citizenship. 4 This variation can be attributed to the nature/context of the professional activities and the respective sociopolitical experiences that each of these actors has gone through. Throughout the research, the observation was that, more often than not, the relatively youthful activists who were directly and closely involved in the events of the January 25 revolution tended to focus more on civil and political aspects of citizenship vis-à-vis the more seasoned figures whose priorities rested within the realm of social and economic rights. Of course, this was not a general rule, and there were still multiple cases of youth actors, especially within the ranks of leftist parties and groups such as the Popular Alliance and the Bread and Freedom party, who were deeply engaged in episodes of socioeconomic activism. This diversity in and of itself is one of the main aspects that this study builds upon as it attempts to decipher some of the features of citizenship in the Egyptian polity in the contemporary period.
Geographical Context, Location, and Timeframe
The research focuses on several geographical locales within two governorates: the first is Cairo, and the second is Alexandria, the second biggest urban conglomerate in the country. The aim is to explore how citizenship is defined and practiced in different communities in Egypt. Focus group discussions and open-ended interviews were conducted with the relevant stakeholders in three sectors: political party members, sociopolitical activists, and civil society practitioners.
Indeed, the question of citizenship rights is closely linked to unequal development between the center and the periphery on many levels. First, the level of attention given by the state to the various areas/provinces, and the capacity (or lack thereof) of the different social, regional, and cultural subgroupings within the society to influence state-level policymaking with their agendas, is a telling indicator of the degree of leverage such groups may have vis-à-vis the state. Second, the views and actions of the social and political forces operating on the central as well as the provincial levels provide a valuable insight into the nature and impact of the discourse and practice of 'citizenship' as perceived and dealt with in Cairo as opposed to Alexandria and other places in Egypt.
The paper also aims at analyzing the ideological discourse concerning citizenship within the political parties and social movements active in these areas via critically reviewing their platforms and stances towards citizenship rights. To do so, it is rather crucial to analyze the concept of 'citizenship' in relation to the popular uprising and the new governmental actors, political forces, and processes at the local and national levels. Factually, the active participation of a very diverse group of citizens from various socioeconomic backgrounds and classes was a defining feature of the 18-day Tahrir sit-in that led up to the demise of the Mubarak regime. It is still questionable whether the uprising has yielded any significant changes regarding the power relations shaping the discourse and practice of citizenship rights, and whether civil society was able to play a role in such a process in the first place. In addition to the turbulent transitional phase that the country has gone through, which affected CSOs in general, two main factors also contributed to the hardships faced by CSOs working in the post-Mubarak phase: first, the wave of attacks orchestrated by the state against CSOs under the pretext of their being agents of foreign infiltration, and second, the rise of the Islamist forces from 2011 until the military takeover in 2013.
In the aftermath of the January 25 uprising, it has become more difficult for most NGOs working in the field of human rights to get the government to approve funding than to secure the funding itself, following the infamous security crackdown that took place in late 2011. As a result, a multitude of NGOs have incurred frozen funds or funds awaiting the government's approval for quite a long time. In the long run, some NGOs, especially the smaller ones (community development associations, etc.), have faced a variety of existential risks, including closure and termination of activities.
Background: The 2011-2013 Transition
"The Arab Republic of Egypt is a sovereign state, united and indivisible, where nothing is dispensable, and its system is democratic republic based on citizenship and the rule of law . . . Citizenship is a right to anyone born to an Egyptian father or an Egyptian mother. Being legally recognized and obtaining official papers proving his personal data is a right guaranteed and organized by law". 5 Egypt's transition from a quasi-liberal single-party-led autocracy under Mubarak to a military-backed authoritarian system post 2013 has been a considerably tumultuous process. The main socioeconomic and political features of the post-revolutionary state were set in the aftermath of the 30 June 2013 movement, which put an end to two and a half years of attempted political transition into a democratic system and reconsolidated the political power of the newly reformed state with the backing of the military institution. Having said that, the actions and demands of some of the social and political forces that participated in the January 2011 Uprising concerning the spectrum of 'citizenship rights' were still present in the post-2013 phase. These entities and forces shared different conceptions relating to the notion of 'citizenship' as a set of rights that should be acquired by members of the society on the basis of them belonging to the Egyptian nation. Despite the fact that the call for consolidating such rights gained a sizable momentum among the sociopolitical forces that participated in the 18-day sit-in that ultimately led to Mubarak's removal in 2011, the period that ensued afterwards until July 2013 wasn't specifically as shiny as far as citizenship rights and religious freedoms are concerned.
The post-January-2011 period witnessed a general rise in the prowess of the Islamist forces on the social and cultural echelons, where factions such as the Muslim Brotherhood (MB) and the Salafists were, arguably, the most powerful and influential non-state political groupings on the official level. Ultimately, the Islamists amassed around two-thirds of the parliamentary seats in 2011, and then the MB's candidate, Mohamed Morsi, was elected as president in 2012. The 2011-2013 period was thus an opportunity for Islamists to readjust the foundational aspects of the state relating to the 'identity' and 'cultural domination' of Islamic ideology as per their view (Menza 2012a, 2012b). The Salafists, for example, attempted to enforce constitutional amendments to limit some of the rights stipulated in the 1971 constitution and succeeded to a considerable extent in doing so in the scrapped 2012 constitution, which ended up being dropped and replaced with the 2014 constitution that was drafted and voted upon in the aftermath of the 30 June 2013 movement (Brown 2013).
This meant that the predominant discourse and practice of the driving forces within several state institutions dominated by these Islamist forces weren't particularly favorable towards issues of religious freedoms and minority rights. Coptic Christians largely felt threatened due to a bundle of perceived as well as clear and present risks to their wellbeing and livelihood, and religious minorities, such as Shiites and Baha'is, were also frequently targeted by state policies as well as popular and social practices by groups that were either affiliated with or influenced by the MB and the Salafist factions. Examples include many incidents of internal displacement of Coptic communities, blocked access to Christian places of worship in several locales, as well as cases of attacks and physical assaults on Shiite figures and communities, among other instances.
Meanwhile, the military-backed regime that emerged post 2013 employed a discourse of pro-religious freedoms and citizenship rights for minorities due to a variety of factors, among which is the fact that it based its legitimacy, in no small part, on its opposition to the Islamists who reigned supreme post January 2011. The widespread popular opposition to the MB's rule, which was manifested in the massive 30 June 2013 demonstrations, culminated in the military takeover of political power in July 2013 and the subsequent declaration of a transitional roadmap that effectively consolidated political power in the hands of the newly elected Minister-of-Defense-turned-President, Abdelfattah Al-Sissi. Throughout this period of consolidation, the security apparatuses within the state were determined to crush any form of Islamist opposition. Such a tendency was actualized in several episodes of brutal confrontations with the Islamists and epitomized by the violent dispersals of the Rabaa and Nahda sit-ins that led to the death of more than 1000 Islamist supporters at the time.
The newly reshaped state post 2013 was mainly backed by the military and the security apparatuses on the basis of the need to mitigate the threat emanating from the Islamist forces, which were portrayed as extreme, violent, and, importantly, antagonistic to religious minorities such as Coptic Christians. Of course, albeit somewhat exaggerated and quite generalized, these labels weren't all fallacies, and a lot of the Islamists did very little to foster a different image before and after 2013. Such claims were also reemphasized by the wave of attacks led by MB and Salafist supporters on Christian places of worship throughout the country, especially in Upper Egypt, as retaliation for the Coptic Church's support for the July 3 military takeover.
The impact of this minority-protection approach might be questionable, as it clearly represents a case of a top-down rather than bottom-up strategy. Although there is a lack of empirical data to substantiate that most Copts are satisfied with the support they received from the state in the aftermath of 30 June 2013, observational evidence and qualitative studies have shown that the post-2013 regime does enjoy a relatively high rate of approval among the majority of Coptic Christians due to a number of factors. First, the regime seems to be embarking on a process of drifting away from the policies and figures associated with the Islamists in different walks of life; this, along with the perception shared by many Copts that it actually spared a lot of Christians the burdensome threat of an Egypt dominated by Islamists, means that President Sissi is quite popular among vast sections of the Christian community. Second, the post-2013 state has also been active in reforming some of the existing laws and structures that were previously utilized to discriminate against Copts for a long period, such as the law concerning the right to build places of worship, which was modified only recently in order to allow Coptic Christians to build at least one new church in every new town or city to be founded. In addition, the regime led the process of changing the electoral law, allowing for a specific quota of three Coptic Christian names on every list of candidates running for the Parliament. As a result, the 2020 parliamentary elections witnessed the introduction of 31 Christian MPs to the Parliament, a figure that could increase if the President decides to nominate more Copts among the presidential parliamentary appointees. Presidential appointments of cabinet ministers from the Christian faith have also increased considerably compared to the Mubarak regime.
Such state-led policies are indeed worthy of mention and cannot be overlooked when assessing the changes pertaining to citizenship rights of the Coptic Christians post 2011. In spite of these interventions, the fact remains that in the peripheral and rural areas, cases of discrimination and abuse against Christians still take place, with the state's judiciary and executive branches incapable (or at times unwilling) to enforce legal measures to limit such violations. In addition, the predominant decline in the overall status of human rights throughout the country and the growing limitations on all forms of freedom of expression, assembly, and association showcase the unsustainability of this equation of state-led citizenship advocacy. Some of the policies and practices implemented by state institutions towards religious and other minorities appear to be, in many ways, conflicting and ambiguous with regard to their freedoms and liberties. This trend is a continuation of the approach of the post-1952 state, which has attempted to 'nationalize' the discourse pertaining to Islamic piety and values and also monopolize it, so that it is only allowed to emanate from state or state-friendly entities. In that regard, such entities have often employed Islamic rhetoric, at times to showcase the state's piety and compete with the other non-state Islamists, and at other times to clamp down on oppositional figures and appease segments of the increasingly spreading wave of conservative Islamism which swept the society in the 1970s under Sadat. In doing so, the state has actually utilized a variety of legal instruments to oppress opposition and appease religious conservatism at times when it was deemed needed. One of these legal instruments is the Hisba law, which basically allowed any citizen to file a case against another on the basis of religious infidelity. It was then left for the judge, after consultation with religious 'state' authorities, to decide the legitimacy of the accusation and the appropriate penalty, if any. The most notorious case based on this law, one which shook local, regional, and international spheres, was that of Nasr Hamed Abu-Zaid, the prominent philosopher and university professor who, in 1995, was forced by court order to divorce his wife on the basis of his writings being 'blasphemous' according to the court. 6
However, in the midst of this myriad of macro and micro sociopolitical alterations and struggles that emerged after 2011, there was a plethora of social and political forces that attempted to push for an agenda of citizenship rights and religious freedoms, both within the 2011-2013 period and also post July 2013. The viable actors and forces within this arena shall constitute the prime focus of this paper.
The Case of Baha'is
In the wake of the 2011 revolution, Baha'is constituted a relatively small community with an estimated 5000-7000 adherents throughout Egypt. The history of the followers of the Baha'i faith in Egypt dates back to the late 1800s, a period in which Egypt was dominated by the British occupation. At the time, the country was home to a wide array of ethnic and religious minorities and communities, and in the milieu of this diversity, the state provided Baha'is with a sort of recognition that they constitute a distinctive religious group that is separate and distinguishable from Islam. Their presence was generally tolerated by state institutions, and in the 1920s, a governmental religious tribunal reaffirmed their status as a unique religious minority while also highlighting that their teachings are considered a deviation from Islam. Overall, the Baha'is witnessed a period of relative peace and prosperity during Egypt's famed liberal age (1922-52) (Maghraoui 2006; Effendi 1974). 7 After the demise of the monarchy and the military takeover of 1952, the state's recognition of the Baha'i faith was withdrawn by virtue of the 1960 decree issued by Nasser. As such, their legal status as a recognized religious group was terminated under the Nasser government (Effendi 1974).
The Baha'is' ambiguous status under Egyptian law continued to prevail in the post-Nasser phase. Both the 1971 Constitution and the most recent 2014 Constitution nominally guarantee equal rights and religious freedoms to all Egyptians in one article, while also limiting these liberties to followers of the Abrahamic (Jewish, Christian, and Islamic) religions in another. Practically, the Baha'is retained a second-class legal status due to the persistent discrimination they faced from most state institutions. The fact that personal status law in Egypt is guided by religious rather than civil law means that Baha'is are excluded from this recognition. Consequently, all issues pertaining to their personal and family relations, such as inheritance, marriage, and divorce, are largely not officiated by the state. 8 Perhaps the most enduring and most infamous legal case relating to the situation of Baha'is in Egypt is the one concerning their ID cards. In the late 1990s, the state initiated a policy of computerizing personal records, and accordingly, all citizens were required to be issued new mechanized ID cards. As opposed to the old handwritten ID cards, in which Baha'is were often allowed to leave the religion slot blank or denote their religion as 'Baha'i', the new cards had a slot for the religion of each respective citizen which had to be filled automatically with a recognized religion. This left the Baha'is in an existential conundrum owing to the state's refusal to acknowledge their faith as a distinctive religion (Rieffer-Flanagan 2016). The dilemma was exacerbated further with a specific order issued by the Minister of Interior in 2004 instructing all relevant authorities to refrain from issuing cards with blank religion slots. 9 The immediate impact of this official indiscernibility on the lives of Baha'is was profound, as they literally couldn't deal with any state authority whatsoever, be it for the purpose of receiving basic services such as health, education, and so on, or even for livelihood matters such as employment, contractual dealings, and tax payments. In short, they were forced into existential oblivion by the state. 10 In 2006, some activists and rights-based groups filed a lawsuit against the newly enforced policy, which yielded an Egyptian Administrative Court ruling that Baha'is have the right to be legally registered by the state. After an elongated legal case that witnessed several revocations and appeals from both sides, in 2009 the Supreme Administrative Court eventually ruled in favor of the Baha'is' right to an ID, entailing a return to the prior status quo of a blank slot for religion in the identification card. Despite this positive development, the Baha'i faith remained unrecognized in Egypt, which means that all matters concerning their personal status are not yet officially acknowledged by the state. 11
5.1. Post-Revolutionary Realities . . . Protraction of Status Quo?
In the post-2013 era, the situation of Egyptian Baha'is remains ambiguous, to say the least. Notwithstanding the 2011 revolution, the 1960 decree still stands, which entails that the 2009 verdict is insufficient in terms of granting Baha'is the state's recognition as an official religion. In addition to the state-based discrimination they were exposed to, Baha'is were also subjected to a multitude of social and popular hostilities towards the end of Mubarak's reign. In line with the attacks witnessed by several religious and other minorities and communities in the wake of the increasing rise in the conservative discourse of numerous Salafist, MB, and other Islamist forces in the society after the revolution, the 2011-2013 period also saw instances of unprovoked violence against Baha'i individuals and homes. For example, in February 2011, some Baha'i homes in a locale in the Delta region were set on fire by unidentified perpetrators. Several reports alleged that a few state security officers were involved in the attack. "Baha'is are still prohibited from many basic freedoms, such as practicing their religious laws and constructing places of worship. Though Baha'i representatives lobbied during the constitutional drafting processes of 2014 to expand religious freedoms to their community, this did not occur" 12 .
In fact, a significant part of the challenges that the Baha'i community is exposed to stems from the constitutional vagueness regarding their status. When compared to the constitutions that were drafted after 2011 (in 2012 during the short-lived reign of the MB and then in 2014 at the time of the military-backed government), the 1971 constitution, which was the highest legal document in the country until Mubarak's removal in 2011, is considerably more progressive with regard to religious freedoms and minority rights. For instance, the clauses in the 1971 constitution stating the right of the person to practice religion freely were later omitted in the 2012 and 2014 versions ('Baha'i of Egypt' n.d.).
By and large, the 1971 constitution was fairly imbalanced: on the one hand, it contained very limited clauses on political rights and liberties and the division of powers within the state, while on the other, there were articles on personal and civil rights and liberties that were relatively progressive. This can be attributed to Sadat's tendency to portray an image of a country enjoying a decent level of social liberties and freedoms while also maintaining a firm grip over political power. The 1971 constitution also clearly stated that the incorporation of international covenants in the Egyptian legal system is vital. The end result was a relatively incoherent document which left both legislators and judges confused. 13 Yet several lawyers and human rights activists used the International Covenants on Civil and Political and on Economic and Social Rights, as incorporated via the 1971 constitution, as the basis for their appeals to free some workers accused of demonstrating against state authority and lobbying for strikes. A few other cases which witnessed litigations against the state on the basis of social and economic rights (the right to have a home, for example) were considerably successful, while others, such as those involving Shiites, were met with massive challenges due to social and political factors. 14
Egyptian Initiative for Personal Rights (EIPR)
One of the relevant entities that played a key role in promoting the cause of Baha'is and several other ethnic and religious communities and individuals is the Egyptian Initiative for Personal Rights (EIPR), which was founded in 2002 by human rights activist Hossam Bahgat as an Egyptian organization with the aim of protecting and further consolidating human rights. A vast number of the activists, scholars, and practitioners involved in different capacities with EIPR played vital roles in the 2011 revolution, and as such, it's safe to say that the organization was a key platform in the arena of social and political activism, both before and after 2011. In many ways, it emerged to fill the gaps that the traditional rights movement could not occupy. Egypt's human rights movement, which crystallized in the 1980s and 90s amidst a competition with the Islamist movement over the discourse of social activism during this phase, tended to focus on a certain spectrum of human rights violations that often included socioeconomic issues such as labor rights, basic needs (or lack thereof) of impoverished classes and communities, and women's rights. It also targeted violations of the personal wellbeing of citizens, which were usually manifested in cases of police brutality and other forms of state violence directed at civilians. EIPR, on the other hand, was more willing to engage with the controversial and sensitive issues that were likely to lead to frictions and confrontations with the state. The main activities of the organization consist of research and documentation, litigation, campaigns and lobbying, and fieldwork via its branch offices, which divide their work into the same three components mentioned above while focusing on their respective regions. The branch offices are operational in Alexandria, the Canal region (Suez, Ismailia, and Port Said), and Luxor, which covers Upper Egypt. 15 A sizable stifling factor obstructing the work of EIPR in the post-2013 phase was the massive scrutiny, pressure, and, most recently, police arrests directed at its key members. In this regard, the organization is not the only civil society entity exposed to such attacks, which come as a part of the methodical and consistent apprehension and targeting that the state security apparatuses practice against several civil society organizations, particularly the ones working on issues that could potentially be critical of the state, as is the case with EIPR. "In July 2014, when the new civil society law was still being drafted, we got to know about it through the leaks published in Al-Ahram. There was no process of community dialogue or transparency on the part of the government whatsoever". 16 Therefore, the resultant draft was quite problematic, as it mainly sanctioned the security-driven state policies when it comes to the funding of NGOs and other CS actors and reflected the sizable level of control and limitations that the state was intent on applying to all civil society organizations. One of the tools that the law employed to ensure a scope of surveillance over the sources of funding of NGOs was a committee called 'Lagnet Al-Fohous', or the Inspection Committee, which was composed of civilian and security state employees and mandated with overseeing the financial inflows coming into any NGO operational in the country. 17 According to the EIPR members who worked on the Baha'i file, the Baha'i dilemma is a case in point as far as the Egyptian state's conception of citizenship is concerned.
The differential treatment that Egyptian Baha'is have received over the years tends to show that the state's policies and practices towards certain religious minorities are by no means unbiased or equitable.
The religious freedoms portfolio is one of the most important files tackled by EIPR. When someone reports a case, it is assessed based on its placement within the strategic priorities of the organization. The victim/case has to be representative of a bigger issue relating to community-based human rights violations and not just a personal grievance or disagreement on an individual level . . . EIPR's work wasn't only focused on Egypt, but also expanded to the MENA region at large. When we received information regarding a certain group or minority being exposed to human rights violations, we would try to approach them. 18 Due to the relatively limited human and financial resources of an entity such as EIPR, it was pivotal for them to set certain criteria for the selection of the cases they would work on. These included the frequency of recurrence, the geographical/regional scope, and the scale of the violation(s) at hand. Multifaceted aspects of the cases handled had to be managed carefully as well, including, for example, the media exposure (or lack thereof) that a certain case receives. In the case of the Baha'is, increasing publicity seemed to correlate with a higher degree of public scrutiny and targeting. In fact, most of the cases of setting Baha'is' homes on fire were reported during the period of media hype revolving around their conundrum, which, in turn, forced groups like EIPR to curb their media activities relating to the Baha'i case.
EIPR was engaged in two court cases concerning Baha'is. The first one aimed at granting them the right to denote their religion in the ID cards as Baha'is, and it was lost. The second case is the one mentioned earlier, which ended in the Administrative Court ruling in favor of the Baha'is issuing their own ID cards with a blank slot for religion. Interestingly, despite the fact that the memorandum that the legal team drafted to argue for the importance of allowing Baha'is to leave the slot for religion blank was actually based on a pro-rights rationale advocating for religious freedoms, the speech that the team delivered at the court hearing itself wasn't necessarily so.
"We thought the judge was going to be conservative so the final approach we adopted was based on the hypothetical argument that he wouldn't want a Bahai to marry his daughter without knowing his actual belief, hence it is important to differentiate them in the ID. It was framed as a way to protect Muslim houses from Baha'i infiltration and also, from a security standpoint, ensure that the state is able to oversee the actions of a group of the inhabitants who dwell within it. Somehow, it worked." 19 Here, it is important to note the role played by law and its dialectical relationship with the society. In the Baha'i case, the law was utilized as an access point rather than a protective or an equitable mechanism that enables individuals to gain their rights. Laws are not created in a vacuum; they are the contextual outcome of the socioeconomic, cultural, and political circumstances prevailing at the time of their creation. Therefore, the pragmatic approach adopted by the lawyers in this court hearing is a case in point when it comes to the tactics deployed by human rights defenders in different societies, especially where the legal framework is not necessarily conceived as supportive or favorable for certain groups or minorities.
Bread and Freedom: A Party of the Revolution?
During the Mubarak period, most opposition political parties were by and large considered cosmetic instruments willing to be utilized by Mubarak's regime as pawns while offering a façade of pluralism and political participation. They had actually earned the label of 'cartoon' parties, which was widely used in the polity to describe their ineptitude and ineffectiveness in the face of the regime. However, after Mubarak's ousting, most of the limitations previously imposed by the regime on the creation and activation of political parties were lifted, and as a result, a lot of the individuals and groups who participated in the 2011 Uprising embarked on the establishment of new parties. The realm of political parties is thus worthy of close scrutiny if one is to navigate the impact of the 2011 transformations on the discourse and policies relating to citizenship rights and religious freedoms. In doing so, the Bread and Freedom Party (BFP) is one of the prime candidates for such analysis, given the strong correlation between the creation and further development of the party and the events of the 2011 revolution, and the fact that it stands out as one of the few secular political groupings that actively engaged with a host of citizenship-rights-related issues. 20 In 2012, the project of creating the Bread and Freedom party was initiated as an offshoot of the Popular Alliance party, an umbrella party established right after the demise of Mubarak's regime in 2011 in order to amalgamate the forces and currents of the political left at the time. 21 Unlike the relatively more seasoned leaders and members of the Popular Alliance, who were perceived to be less progressive, or more willing to accommodate some of the policies of the state regarding a variety of social, economic, and political issues, the BFP was mainly composed of a bundle of revolutionary youth who were, according to the leader of the party's politburo, keen on taking a more emphatic stance towards most of these issues, especially with regard to citizenship rights and religious freedoms.
"Citizenship rights should be granted to anyone who inhabits this country . . . People should have equal rights and hence it is vital for this struggle to be materialized and fought on everyday basis because it is not merely a legal case or two to be won. We still believe that the current state structure can be reformed from within . . . We think that citizenship rights should be earned and that state institutions, such as the judiciary, are actually regressive. Anyone calling for citizenship or equal religious rights is likely to be persecuted, especially if it is against the will of the dominant powers within the state and the society, be it the Islamists from 2011-2013 or the security and military apparatuses afterwards. As such, the state has bestialized and therefore it's only via social and political struggle that we can change that." 22 A similar take on the wholistic nature of the struggle for citizenship and the fact that it surpasses a mere set of legal battles was also echoed by the focal point of the citizenship rights portfolio of the BFP. "Litigation is an important component of the battle for more equitable citizenship rights but it's by no means the only one. The democratic movement and the CSOs need to keep on looking for entry points to infiltrate and influence state structures vis-à-vis their approach towards citizenship issues". 23 Indeed, several elements of the sectarianism and patriarchy in the state have been embedded within its institutions since their foundation, so it's a multifaceted and long-term battle that is likely to last for decades.
One of the main challenges facing the BFP is to attain a sort of balancing act between the focus on issues of social and economic rights, on the one hand, and civil and political ones, on the other. Being a leftist party with a communist tradition, and given the plethora of atrocities and violations witnessed in the socioeconomic arena in a country like Egypt, especially in the labor sector, which represents the main constituency of a leftist party like the BFP, achieving such a balance is not a straightforward feat, particularly in light of the party's relatively small size and limited resources. "For example, some of our supporters in conservative pockets in Upper Egypt think that we are too liberal because we tend to focus on women and citizenship issues more than workers and farmers. This, of course, isn't quite accurate because we exert our utmost effort to tackle both sets of issues which, more often than not, intertwine at many instances" 24 . This duality is shown in many of the activities that the BFP has undertaken with the syndicates, farmers' unions, and female workers in several factories, where it was clear that the struggle for both sets of rights is closely interlinked. The following section offers a brief overview of the condition of citizenship rights of two of Egypt's most sizable religious groupings, while also highlighting the interventions utilized by the BFP in the midst of its efforts to gain a foothold in the struggle for religious freedoms post 2011.
Coptic Christians
The rise of the Islamists, the oppression of Mubarak's politics, and the general decline in the political forces and groups adopting a leftist agenda meant that a sizable segment of the potential target audience of the leftist camp became increasingly alienated from the leftist current as a whole in the 1980s and 90s. With the increase in terrorist attacks against Coptic Christians during Mubarak's rule in the 1990s and 2000s, and their subsequent targeting in the aftermath of the Islamists' empowerment from 2011-2013, it became clear that the existing representations of the voice and concerns of Coptic Christians in the public sphere were lacking, to say the least (Hamzawy 2014). Throughout Mubarak's reign, the Coptic Church attempted to monopolize the representation of Copts vis-à-vis the state and, more or less, it managed to do so with relative ease.
The massive popularity of Pope Shenouda III, who led the Orthodox Church throughout Mubarak's rule and who was also on good terms with the Mubarak regime thanks to his accommodationist and diplomatic approach towards the state, together with the fact that Mubarak's regime coopted and catapulted the Coptic Church into being considered the sole representative of all Coptic Christians, reemphasized the sectarian nature of this relationship. Despite the high hopes associated with the 2011 revolution regarding the potential of restructuring this state-church dichotomy, the military-backed government that took over in 2013 ensured a return to the status quo of the state-church relationship that had prevailed under Mubarak. Instead of expanding the civic code to be the prime legal framework to which all Egyptians (Muslims and Christians alike) are held accountable, it seemed that, post 2013, the state was keen on maintaining the status quo that was prevalent during Mubarak's era concerning its relationship with the Coptic Church.
In the meantime, various political parties, including BFP, were also quite eager to play an active role in restructuring this dichotomy in a way that allowed for Coptic Christians to be agents of change in the polity and further their own interests with a sense of ownership instead of solely relying on the Church to do so on their behalf.
"The violence and targeting that a lot of Coptic Christians were exposed to in the aftermath of the January 25 revolution led to the resurfacing of a lot of the debates surrounding citizenship rights and religious freedoms in the country. These issues, along with the question of Women's rights (or lack thereof) constitute the pillars of any policy or discourse pertaining to citizenship rights in Egypt today. Hence, the left which has not been very active in these issues because they were supposedly already on the surface of the public debate, had to reengage itself with them again in the post-revolutionary phase. The idea of the Supreme Council of the Armed Forces (SCAF) adopting the extremist Islamists' rhetoric and allegedly mass-murdering Christians in the events of Maspero 25 in order to silence them was really horrifying on top of it being incomprehensible as well." 26 In many ways, the BFP attempted to take a more emphatic stance against these episodes of violence that targeted the Christian community in the country. In fact, an integral part of the raison d'etre of BFP (as opposed to other old-school or mainstream 'opposition' parties) is based on this notion of adopting an unequivocal position on issues of sectarianism with little room for compromise. "When [the core group that eventually founded BFP] was still involved with the Popular Alliance, we were keen on being in the field battling against issues of sectarian discrimination. Our members visited Upper Egypt when several locales there were being targeted by extremists in order to show solidarity with the people there and also document the scale of the violations they were exposed to". 27 However, there is yet a long way to go regarding any genuine mobilization of viable segments of the Coptic Christian community in the direction of tangible social or political movements calling for more citizenship rights, let alone the direction of leftist parties per se. This can be attributed to a bundle of factors, including the traumatic impact of the 2011-13 period on the majority of the Coptic Christian community and the resultant allegiance that most of it has pledged-via the Coptic Church-to the state institutions, particularly the military and security apparatuses. This adherence to the state, coupled with the historical mistrust towards the left, which was largely in the making since the time of Sadat and more clearly during Mubarak, entailed that the appeal of leftist ideologies and leaders is quite limited within the Coptic Christian community. BFP members recognize this challenge and are aware that a lot more needs to be done in order to showcase that the revolutionary discourse does not actually contradict the interests of the Christian community, but on the contrary, is actually wholly sympathetic and supportive to the demands of the Christian community as far as equitable citizenship rights and religious freedoms are concerned.
The Shiite Question
Exact figures for the Shiite population in Egypt vary greatly, and there is no official number, given that the state does not include sectarian data in the periodical census. Some estimates state that in the year 2017 their population was around 1,000,000 (Shi'a of Egypt n.d.). 28 Despite the fact that Egypt is usually considered a predominantly Sunni society, various aspects of the Shiite doctrine and practices remain deeply embedded in the Egyptian community. In fact, both the country's capital, which was built around 970 A.D., and its most prominent religious institution, Al Azhar, came into being at the hands of the Fatimids, who were the first Pan-Islamic Shiite Caliphate rooted in North Africa. The Fatimids ruled Egypt for about 200 years and arguably had the biggest impact on the social habits, belief systems, and cultural practices of the Egyptians vis-à-vis the other non-native Muslim rulers that reigned over the country. A lot of the cultural facets prevalent in Egypt today can be traced back to the Fatimids, including the immense reverence of the House (descendants) of Prophet Muhammad, the presence of patron saints who are venerated in pretty much every major city or town throughout the country, and the abundance of festivities still celebrated in Egypt today in commemoration of Shiite events such as Ashura and the birthdays of Prophet Muhammad and his family members, such as his grandson, Al Hussein, whose shrine is considered one of the most visited religious sites in the heart of Cairo.
In spite of these historical and cultural features, the Shiite population remained marginalized for most of Egypt's medieval and modern history. Successive ruling authorities tended to play the Shiite minority as a political card at times of turmoil and instability. The most recent set of episodes relating to discrimination against Shiites came in the aftermath of the Iranian Revolution in 1979, when Egypt's President Sadat decided to host the ousted Shah of Iran, which juxtaposed the country directly against the newly established Islamic Republic at the time. This, coupled with the fact that the state has become increasingly reliant on Gulf Cooperation Council (GCC) countries for economic and political support ever since the Open-Door policies of 1974, meant that a harsh tone and a firm stance on any manifestations of Shiite rituals or festivities were deemed expedient in order to appease the ultra-Sunni doctrines dominant within the GCC countries. These policies also aimed at easing some of the fears emanating from the GCC concerning the increasing wave of Shiite spread, which was already on display within other Arab countries with sizable and influential Shiite communities such as Iraq, Lebanon, and most recently, Yemen.
In the post-revolutionary phase, and just like most other minorities, the Shiite voices calling for recognition and rights were becoming increasingly audible. Yet the constant rise of the Islamist forces within the state and society from 2011 to 2013 meant that incidents of targeting and persecuting the Shiite community were also on the rise. Most of these forces were predominantly Salafist or pro-Salafist and MB, entailing that they adopted a strictly orthodox interpretation of Sunni Islam and perceived non-Orthodox Muslims, such as the different Shiite sects and Alawites, among others, as deviators from the core of Islam. This wave of state-society intolerance of the Shiites also affected institutions such as Al Azhar, which, despite its historical affiliation with the Shiite doctrine and the fact that it acknowledges the Shiite school of thought in some of its curricula, joined in this wave of antagonism against Shiites by declaring in 2013 that Shiite practices actually stand against the tenets of proper Islam.
"The Grand Imam of Al-Azhar, Ahmed Al-Tayyeb, has used television appearances to implore his audience to beware of Shi'a proselytizers. Moreover, the Ministry of Religious Endowments runs mosques in Egypt in accordance with Sunni doctrine and does not recognize Shi'a mosques or rituals. In May 2015 a Shi'a dentist from Daqahlia governorate received a six-month prison sentence for contempt of religion after authorities found in his home books and other items supposedly used to perform Shi'a religious rituals. A week later, Shi'a cleric Taher al-Hashimy was arrested following a raid on his apartment where books and other items were confiscated by security forces." 29 As an institution, Al-Azhar was attempting to assert its own power as the 'official' representative of Sunni Islam in Egypt vis-à-vis Salafists, MB, and other conservative forces, while also consolidating its credentials as the protector and keeper of Sunni Islam, especially in the battle of legitimacy of Egyptian state institutions post 2011 and the geopolitical context in the region in light of the rivalry between KSA and Iran.
This overwhelming state of hostility against Shiites eventually led to a spike in the incidents of aggression and violence against their communities and households. One of the most publicized attacks took place in June 2013, a few days before the mass protests on June 30, 2013, when a mob led by ultra-Salafists launched an attack on a group of Shiites celebrating a religious ceremony at a private house in a village in Giza. "Though four men were killed, including a prominent Shi'a figure, Sheikh Hassan Shehata, and other Shi'a houses were also set on fire, the police allegedly failed to take action to halt the attacks". 30 The incident actually came after a period of antagonistic sermons by local Salafi preachers in the communal mosques of the village where the attack happened.
The BFP was probably one of the few parties that managed to engage with the debate relating to the Shiite community in Egypt. Despite the religious sensitivity of the matter and the sizable social and political price to be paid by the party as a result, its core members remained adamant about tackling that file, given its priority as a clear breach of the citizenship rights of Shiite Egyptians. The party managed to attract some Shiite youth into its ranks, most of whom had converted to (or embraced) Shiism due to what they perceived to be the appalling nature of the rhetoric and policies of ultra-conservative groups such as the Salafists and the MB. The fact that the ideas and allegiances of Shiites in a country like Egypt pose a form of minority resistance to the overwhelming majority also makes them somewhat appealing to many leftists, who usually tend to support the struggle of smaller social and political groups against the hegemonic state. For the most part, and because of the considerably high stakes involved, embracing Shiism in Egyptian society has become a political and social commitment in addition to being a religious and theological one.
Alexandria
Historically, Alexandria was one of the most multicultural and open communities in modern Egypt. As the country's main port, and with a long and ancient tradition of coexistence with and close proximity to southern Mediterranean communities of Greeks, Italians, and Turks, among others, the city was, in many ways, a cornerstone of regional and international economic and cross-cultural exchange up until the mid-20th Century. This picture changed gradually after 1952, and by the 1970s, a significant part of this internationalisation was non-existent due to a multitude of socioeconomic and political factors.
Today, Alexandria is characterized by a bundle of socioeconomic, cultural, and topographical features that distinguish it from Cairo, features that subsequently shaped the dynamics of Egypt's second-largest city's experiences with the January 25 Uprising and the mobilizational events that ensued. As opposed to Cairo's Tahrir Square, which served as a mega-hub and a gathering point for the various social and political groupings participating in the mobilization, Alexandrian protesters had to come up with alternatives, as there was no parallel structure in their city. "We used to roam around the streets and cross paths with people from different walks of life, age-groups and, of course, political affiliations. This allowed for a constant space for dialogue and, often, conflict with different segments of the Alexandrian society . . . The experience here was thus relatively different. It gave us the opportunity to be exposed to and the expertise to engage with different viewpoints unlike Tahrir where the majority was those who wanted to be there". 31 The rather intimate setting of Alexandria's mobilizations also entailed that most of the social and political groups interested in a particular cause related to the revolution would eventually get to know each other through common activities and experiences. This facilitated coordination between the various forces irrespective of their ideological platforms, as manifested in the Alexandrian secular and democratic forces' ability to create unified fronts that combined social and political forces from different platforms, jointly organized common activities and participated in some electoral events as a collective.
A facet that also distinguishes Alexandria from Cairo is that it is relatively smaller in geographical area. Neighborhoods and communities are within close proximity of each other and can be differentiated from one another on the basis of socioeconomic class, educational level, and predominant cultural and political affiliations. This makes the targeting of certain groups and communities much more focused, as it enables political actors to clearly zero in on their target audiences and, accordingly, devise their communication tactics and strategy in line with the typology of the prospective stakeholders. This has also led to the city being comparatively condensed, with specific constituencies that are often at odds with each other. "In Alexandria, entry points are clearer vis-à-vis Cairo . . . We are more concentrated here. You will find high percentages of religious extremism as well as strong support for the secular ideology and parties. This was shown in the electoral events that preceded 2013 in which the Salafist current was allowed to run freely and amassed a sizable chunk of the votes" 32 . In the meantime, the Alexandria Governorate also harbored one of the most active anti-Islamist oppositions before 2013, reflected in the considerable percentage of opposition votes when the draft constitution of 2012 was put to vote. 33 When compared to other regions throughout the country, Alexandria saw one of the highest turnouts against the constitutional referendum of 2012.
Just like in Cairo and the other governorates, the capacity of some leftist activists to work simultaneously on several fronts of sociopolitical and economic struggle against the predominant authorities remains limited: "Some people do not conceive the importance of working simultaneously on different angles and themes. Workers, for instance, only conceive citizenship through equal economic rights, not equal access to opportunities regardless of religion, sect or gender. Those who are burdened by citizenship concepts other than inequality of economic benefits are usually upper and middle bourgeoisie. For instance, although a female Christian worker could be experiencing multiple tiers of discrimination, she would still focus on the economic aspects due to her conviction that if she would engage with issues that relate to religious freedoms, for example, then she would be backing out on her major battle." 34 Due to the entrenched centrality of Cairo in most of the social and political structures in the country, including political parties and other groupings, the 'brains' have predominantly come from Cairo. This means that the agendas, platforms, and sometimes even the priorities of action of most of the mobilization taking place were directed from Cairo. Several revolutionary parties attempted to adopt a different model based on decentralization, but that experience is yet to be assessed. Due to such factors of close proximity and relatively small area, it was also more difficult for the human rights and democratic movements to tackle certain issues, especially the ones that received general social attention and caused controversial debates within society, such as homosexuality or the rights of the Baha'i community, as opposed to causes perceived to be less controversial, like the discrimination against Coptic Christians in some workplaces.
Conclusions
The concept of citizenship has been used by a multitude of social and political forces in the aftermath of the Egyptian uprising, sometimes in conflicting ways. On the one hand, the state has, at various junctures, attempted to monopolize the process of interpreting and operationalizing the concept, mostly on its own terms, in order to highlight notions that relate to 'nationalism' rather than citizenship, such as the responsibilities of 'citizens' and their perceived loyalty to their homeland. On the other hand, a plethora of the social and political forces and movements that actively participated in the Egyptian uprising pushed for an agenda of 'citizenship rights' that advocates equality between the subjects of the state regardless of religion, ethnicity, creed, color, and so on, and calls on the state to uphold its role as a guarantor of equal rights for all citizens. As shown in this paper, there is also a growing body of CSOs engaged in an array of projects revolving around the theme of citizenship rights, many of which have gained greater momentum post January 25.
This state-society dichotomy has shaped the features of the discourse on citizenship and the rights associated with it throughout Egypt's modern history. Since Mubarak's ousting, there have been several shifts in the dominant power relations within the polity, which influenced the evolution of citizenship rights in Egypt. Subsequently, the rights discourses of political actors, be they state institutions such as the military and the relevant ministries or the informal political movements and formal political parties, were inconsistent at times. At certain points during the transition, the exercise of some citizenship rights was actualized, but these rights were also subjected to attacks and restrictions at other times. More often than not, citizenship rights that require revisiting certain religious and cultural norms to support their application, such as religious freedoms, have been limited due to the tendency of the dominant political forces to use populist rhetoric and attract larger numbers of supporters among more powerful societal groups.
The ethnographic approach adopted in this writing offers a deeper look into the trials and tribulations of some of the social and political movements associated with various issues of citizenship rights, against the backdrop of the January 25 Revolution. In pursuing a civil/political agenda, the experiences of many of these social and political actors represent an example of practicing active citizenship by working on the rights associated with it. However, as highlighted by some of them, these experiences are not without limitations. The groups involved in such activities remain drawn from a certain socioeconomic class, predominantly the upper-middle and well-educated segments of society; for the most part, they cannot be considered part of a grassroots movement claiming its rights. Hence, a sizable portion of the debates and deliberations on issues of citizenship rights remains limited to elite circles of activists, academics and parts of the intelligentsia involved or concerned with these issues.
A plethora of the social and political activists engaged with citizenship rights have embarked on multiple struggles aiming at raising awareness of some of these rights among the popular echelons of society, while also working on creating, amending, or abolishing laws related to the social status of several religious minorities. Some of them believe that even if the strategic objective is to overthrow the entire legal system, or radically change it, the micro-level struggle is still pivotal. This means that short-term tactical approaches, as well as strategic ones, are both vital for this kind of confrontation between the sociopolitical forces, on the one hand, and the relevant state authorities, on the other. Thus, the importance of the litigation advocated by democratic and human rights movements, and its ramifications for furthering the cause of religious freedoms, for example, cannot be overstated. The findings of the fieldwork conducted for this study contribute to our understanding of citizenship and religion by showcasing the challenges that face some of these social and political activists and practitioners as they attempt to actualize their role on the ground. In doing so, the study highlights that the battle for citizenship rights in Egypt is likely to be a long-term and multifaceted one, involving structural changes within the power dynamics of state-society relations.
The increasing suppression that a few of the human rights and political movements discussed in this paper, such as EIPR and BFP, were exposed to post 30 June 2013 shows that the state is intent on limiting the potential of sociopolitical pressure emanating from the society with regard to citizenship rights and religious freedoms. This is coupled with a clear tendency to pursue the policies of the pre-2011 era concerning the confiscation and monopolization of the discourse and policies relating to the role of religion in the society. In doing so, the state reinvigorates the top-down approach adopted since 1952 concerning the actualization of the relevant spectrum of citizenship rights.
Funding: This research was partially funded by ACSS.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Arab Council of Social Sciences.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Notes
1. This principle is evidently present in all of the country's constitutions from 1923 onwards; however, the application of certain laws, such as the one for building places of worship, sometimes reflected discrepancy and, at times, discrimination against Coptic Christians.
2. The term 'dhimmi' refers to the Islamic jurisprudential description of non-Muslims living within the boundaries of Islamic societies. The legal status of the majority of non-Muslim communities living under the various Islamic caliphates was defined by this term, which implied that they were granted state protection in return for paying 'Jizya', a special tax. With the advent of the predominantly secular nation-state projects in most Muslim societies in the early 20th Century, the term 'dhimmi' became rather controversial, as it was perceived by seculars and liberals as a discriminatory concept that does not grant equal rights to all citizens. As such, its usage began to be limited and mostly associated with the Islamic schools of thought.
3. Figures like Nabil Morqos, Akram Ismail, Bahaa Ezz-El Arab, Mona Ezzat and Amr Abdelrahman were all engaged in a wide variety of schemes concerning citizenship rights at various junctures before and after the 2011 Revolution. Morqos is an Egyptian intellectual and development expert who has been engaged in the field of civil society for over five decades. Ezz-El Arab is a legal expert who was in charge of the litigation arm of the Egyptian Initiative for Personal Rights (EIPR) at the time of writing this paper; he also participated actively in the January 25 Uprising. Eidarous is the focal point of the civil and political rights portfolio in the Bread and Freedom Party.
4. For example, whereas Nabil Morqos' focus was mostly on the socioeconomic rights of citizens and the essentiality of highlighting this theme in any scrutiny of citizenship rights in a country like Egypt, other activists and practitioners, such as Bahaa Ezz-El Arab and Elham Eidarous, emphasized cases of civil and political rights abuses as the most striking forms of infringement on citizenship rights.
5. Articles #1 and 6 of the Egyptian Constitution of 2014.
6. The law was based on the 1971 constitution's assertion that Shariaa is a source of legislation. It was then modified to allow only the public prosecutor to decide the validity of claims, instead of granting individuals the right to raise cases in court directly. Another case in point here is Law #98 (concerning religious blasphemy), a largely ambiguous law that is mostly used against non-Muslims for 'insulting' Islam, an extremely vague notion that is not clearly stipulated in the law itself and is mainly left to the interpretation of judges. The number of writers and intellectuals who were put on trial as a result of this law expanded after 2011 and continued to increase even after 2013. Sadat's bid to portray himself and his regime as the protectors of Islamic piety and values was also reflected in the novel inclusion of the term 'Shariaa' into the preamble of the 1971 constitution. Article #2, which previously read "Islam is the state's religion and Arabic is its official language", was modified to "Islam is the state's religion; Arabic is its official language and the principles of Islamic Shariaa are a main source of legislation".
14. Interview with Amr Abdelrahman, Head of Civil & Political Rights Program, EIPR, Cairo, April 2016.
15. Ibid.
16. Ibid.
17. After episodes of local and international opposition to the new draft law, it was eventually scrapped and reintroduced in a modified and reformed format in 2019. The new draft affords more flexibility for CSOs in terms of funding and also removed the representation of security apparatus personnel from the Committee.
18. Interview with Bahaa Ezz El-Arab, Cairo, January 2016.
19. Ibid.
20. In the aftermath of the Jan 25 revolution, several other secular political parties were also created with the aim of fostering the socioeconomic and political objectives of the January 25 movement, representing a variety of ideological platforms. These include parties such as the centrist Egyptian Social Democratic Party (ESDP), the Baradei-led Dostour (Constitution) Party and the social-liberal Egypt's Freedom Party.
21. Up until the time of this writing, and in spite of its structural presence and ongoing activities, the BFP was not yet recognized as an official party by the state due to its inability to collect the required threshold of 5000 signatures from 10 different Egyptian governorates. This caused the party to be exposed to numerous cases of harassment by state security apparatuses, especially in the aftermath of the June 2013 movement, whereby some of its members were subject to pretrial detention.
22. Interview with Akram Ismail, BFP Politburo, Cairo, June 2016.
23. Interview with Elham Eidarous, focal point of the citizenship rights portfolio, BFP, Cairo, Jan 2017.
24. Interview with Akram Ismail, BFP Politburo, Cairo, June 2016.
25. In 2012, more than 20 Coptic Christian demonstrators were killed in confrontations with military troops in the neighborhood of Maspero on the Nile Corniche. The Military later denied any responsibility for the deaths of the demonstrators and blamed them on 'third party' elements who had an interest in agitating sectarian conflict by antagonizing the situation between Coptic Christians and the SCAF.
Three-Dimensional Glacier Changes in Geladandong Peak Region in the Central Tibetan Plateau
In this study, contour lines from topographic maps at a 1:100,000 scale (mapped in 1968), Landsat MSS/TM/OLI images, ASTER images and SPOT 6-7 stereo image pairs were used to study changes in glacier length, area and surface elevation. The results can be summarized in three conclusions: (1) During the period from 1973 to 2013, glaciers retreated by 412 ± 32 m at a mean retreat rate of 10.3 ± 0.8 m·year−1, a relative retreat of 5.6 ± 0.4%. The glacier area shrank by 7.5 ± 3.4%, a larger relative change than that in glacier length. In the periods 1968–2000, 2000–2005 and 2000–2013, the glacier surface elevation changes were −7.7 ± 1.4 m (−0.24 ± 0.04 m·year−1), −1.9 ± 1.5 m (−0.38 ± 0.25 m·year−1) and −5.0 ± 1.4 m (−0.38 ± 0.11 m·year−1), respectively. The changes in glacier area and thickness exhibited similar trends, both showing a markedly accelerated reduction after 2000. (2) Eleven glaciers were identified as surging glaciers. Changes in the mass balance of surging glaciers were stronger than in non-surging glaciers between 1968 and 2013, whereas changes in area of surging glaciers were weaker than in non-surging glaciers. (3) Increasing temperature was the major cause of glacier thinning and area shrinkage. The increase in precipitation inhibited glacial ablation to a certain extent but did not change the overall shrinkage of the glacial area or the reduction in glacier thickness.
Introduction
The Intergovernmental Panel on Climate Change (IPCC) summarizes the changes in glacial area and mass balance in 19 regions of the world. It was demonstrated that the change rate of the global glacial area was −0.01%·year−1 to −1.8%·year−1 and the mass loss was −50 ± 7 Gt·year−1 to 0 ± 1 Gt·year−1 from 1940–2010. The glacial area in Canada, the western United States, central Europe and low-latitude areas shrank the fastest, while Alaska, Greenland and the areas near the Arctic Circle north of Canada experienced the fastest mass loss. The variation of the glacial area is not synchronized with that of ice volume [1]. Results from China's Qilian Mountains indicate greater thinning of some glaciers with less area shrinkage, which raises questions about the traditional method that uses changes in glacial area to predict changes in ice reserves [2]. That study showed that in the Tuanjiefeng Peak region of the Qilian Mountains, the ice volume calculated using the traditional statistical method was underestimated by 17% [2]. Therefore, a re-evaluation of the methods used to calculate ice reserves is necessary, which requires exploration of the three-dimensional changes of glacier length, area, surface elevation and volume, as well as the correlations between these characteristics.

SRTM DEM

The Shuttle Radar Topography Mission (SRTM) DEMs were acquired on 11–22 February 2000 and include C band and X band. The SRTM C band DEM is provided by the USGS and the X band DEM is provided by Deutsches Zentrum für Luft- und Raumfahrt (DLR, https://download.geoservice.dlr.de/SRTM_XSAR/). The SRTM C band DEM includes SRTM 1 arc-second and SRTM 3 arc-second products, with corresponding resolutions of 30 m and 90 m, respectively. The SRTM DEM has been widely used in studies of glacial surface elevation and ice volume change. Results showed that the RMSE of SRTM 1 arc-second data on the north-eastern flank of the Tien Shan Mountains was ±10 m compared with differential GPS elevations [19]. In addition, there is a 2.8 m difference between SRTM 3 arc-second data and the Geoscience Laser Altimeter System (GLAS) in the Tuanjiefeng Peak region of the Qilian Mountains [2].
In our study, the SRTM 1 arc-second C band DEMs were used for determining changes in the surface elevation and ice volume of glaciers in the Geladandong Peak region, while the SRTM X band data were used to correct the penetration of the SRTM C band, because the X band covered only 1/20 of our study area. Two tiles of C band SRTM 1 arc-second Global data (SRTM1, n33_e90_1arcv3 and n33_e90_1arc_v3) and six tiles of X band SRTM (E0910000N330000, E0910000N331500, E0910000N334500, E0900000N330000, E0900000N331500, E0900000N334500) with voids filled were used. It should be noted that all the void holes of the original SRTM were small, with the largest no more than 50 pixels, and the void areas were excluded when comparing with the other DEMs. The World Geodetic System 1984 (WGS84) coordinate system and the 1996 Earth Gravitational Model (EGM96, https://en.m.wikipedia.org/wiki/EGM96) reference were used for the SRTM DEMs.
SPOT 6-7 and ASTER Stereo Image Pair
ASTER images are one of the important data sources in studies of glacier surface elevation change and have been widely applied to China's Altai Mountains and the Bangong Lake region of the Tibetan Plateau [6,7]. To extract the DEM of the main Geladandong Peak, the ASTER image obtained on 8 December 2005 was used. Because of extensive cloud cover in the summer images, it was difficult to obtain high-quality ASTER images. In addition, SPOT 6-7 stereo image pairs obtained on 6 October 2013 and 18 November 2014 were purchased, covering an area of 2000 km². Seasonal snow was present to a small extent on both the ASTER and SPOT images, but the flow trajectories and crevasses on the glaciers were clearly visible, especially on the glacier tongues.
The SPOT 6-7 satellites were launched starting on 9 September 2012. The spatial resolutions are 1.5 m (panchromatic) and 6 m (multispectral), and three-line-array stereo imagery is acquired in the panchromatic band. The data format of SPOT 6-7 is DIMAP V2, and the accompanying RPC file provides a rational function model, a generic sensor model that was applied for image ortho-rectification.
Contour Lines of 1968
Contour lines from four topographic maps of 1968, based on aerial photography at a scale of 1:100,000, were provided by the National Geomatics Center of China. It should be noted that these contour data used the Xi'an 1980 coordinate system with the Yellow Sea 1985 elevation datum, converted from the Beijing 1954 coordinate system by the National Geomatics Center of China. According to the industry standards for basic geographic information products regulated by the National Administration of Surveying, Mapping and Geoinformation, the precision should be 3–5 m in flat areas and 8–14 m in mountainous areas [20]. DEM data derived from topographic maps have been widely used in studies of glacial surface elevation and ice reserve changes in western China, such as the Altai Mountains, the Bangong Lake region of the Tibetan Plateau, the Qilian Mountains and Gongga Mountain [2,6–8].
GLAS and Reference Points
The Geoscience Laser Altimeter System (GLAS) is carried on the Ice, Cloud, and land Elevation Satellite (ICESat) for monitoring land and water surface topography. The diameter of the spot projected by the laser pulse on the Earth's surface is 70 m; the horizontal positioning precision is 20 cm and the elevation precision is 13.8 cm. GLAS data are often used in studies of glacial surface elevation change, such as in the west Kunlun Mountains region [3]. However, due to the lack of repeated observation data for the study area, they were only used to evaluate the accuracy of the DEMs. GLAS data were obtained from the National Aeronautics and Space Administration (NASA) and are divided into several product types (https://icesat.gsfc.nasa.gov/icesat/). GLAS/ICESat Level 2 Global Land Surface Altimetry (GLA14) products acquired on 18 March 2008 were used in this study.
Reference points were measured by the National Geomatics Center of China using GPS. There were 39 points in this study, of which six had both the WGS84 coordinate system and the Xi'an 1980 coordinate system, while others had only the WGS84 coordinate system. These points were mainly used for the evaluation of DEM accuracy, while the six points with both the WGS84 coordinate system and Xi'an 1980 coordinate system were used for coordinate system transformation.
Glacier Boundaries and Lengths Acquisition
There are several well-known methods for the delineation of debris-free glaciers based on Landsat TM/ETM data [21–27], of which the band ratio threshold method has higher precision than the others [24,27]. There were no debris-covered glaciers in the study area, so the ratio threshold method was applied to the extraction of the bare ice area from TM and OLI images, using a band ratio of TM3/TM5 ≥ 2.1 as the threshold for the bare-ice glacier boundary. This algorithm was implemented in the Interactive Data Language (IDL). As this method can effectively extract glacier boundaries, it has been widely applied [24–27]. For Landsat MSS data, glaciers appear white on band 3, 2, 1 composite images, so visual interpretation and manual digitization were used; to ensure data quality, this step was conducted under the guidance of experienced glaciologists. At this point, the glacier boundaries for the five time steps were only outlines, which needed to be split into individual glaciers by ridgelines. The ridgelines provided by China's second glacier inventory program have been applied to the division of glaciers throughout western China [28,29]. In our study, the ridgeline and glacier polyline files were merged into a new polyline file using ArcGIS software (10.1.0, Environmental Systems Research Institute, Redlands, CA, USA); topology was then built to generate glacier polygons with the "clean" command, so that each glacier formed a single polygon with a topological structure.
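The band-ratio step is straightforward to reproduce. The study implemented it in IDL; a minimal Python/NumPy sketch, assuming the two bands have already been read in as arrays (function and variable names are illustrative), might look like this:

```python
import numpy as np

def glacier_mask(tm3, tm5, threshold=2.1):
    """Band-ratio glacier mapping: pixels with TM3/TM5 >= threshold
    (2.1 in this study) are classified as bare glacier ice."""
    tm3 = np.asarray(tm3, dtype=float)
    tm5 = np.asarray(tm5, dtype=float)
    # Avoid division by zero outside the swath or in dark pixels
    ratio = np.divide(tm3, tm5, out=np.zeros_like(tm3), where=tm5 > 0)
    return ratio >= threshold
```

The resulting binary mask can then be vectorized to polygons and split by the ridgelines as described above.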
At present, there are two commonly used methods for assessing the uncertainty in glacier boundary extraction: (i) a method based on the resolution and co-registration errors of the remote sensing images [30]; and (ii) a method based on a buffer of a specific width [26,31]. Here, the second method, which is more suitable for this study, was selected. A buffer with a width equal to half a pixel was applied to expand the extracted glacier boundary using ArcGIS 10.1, and the original glacier boundary was compared with the buffered boundary to obtain the uncertainty percentage of the glacial area. Specifically, the half-pixel buffer calculation was performed on the glacier boundaries of 1973, 1988, 2000, 2006 and 2013. Although automatic extraction methods for glacier length exist, their operation is complicated [32], so manual digitization of the middle flowlines was applied for length generation. Here, the glacier length was taken as the maximum length from the highest-elevation point to the terminus, with elevations referenced to the SRTM DEM. Thirty-three glacier lengths were digitized in our study, as shown in Figure 1. The accuracy of the changes in glacier length depended on the accuracy of the delineation of the glacier termini, which was determined, above all, by the accuracy of image co-registration. The co-registration accuracy of all pairs of images was better than half a pixel; therefore, the accuracy of the glacier length calculation was taken as half a pixel, namely 28.5 m for imagery obtained in 1973 and 15 m in all other years.
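As a rough sketch of the half-pixel buffer method (the study used ArcGIS; the example below uses the shapely library and a toy outline):

```python
from shapely.geometry import Polygon

def area_uncertainty_pct(outline, pixel_size):
    """Half-pixel buffer method: expand the digitized outline by half a
    pixel and report the relative area increase as the uncertainty."""
    buffered = outline.buffer(pixel_size / 2.0)
    return 100.0 * (buffered.area - outline.area) / outline.area

# Illustrative use: a 1 km square outline with 30 m Landsat pixels
square = Polygon([(0, 0), (1000, 0), (1000, 1000), (0, 1000)])
print(area_uncertainty_pct(square, 30.0))  # roughly 6% for this shape
```

For a given area, long and convoluted outlines yield larger uncertainty percentages, which is why the calculation is repeated for every epoch's boundaries.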
DEM of 1968
DEM1968, in the Xi'an 1980 coordinate system with the Yellow Sea 1985 vertical datum, differed from the SRTM DEM in the WGS84 coordinate system, which could cause an error of more than 10% in the study of glacial surface elevation changes [2,33]. Therefore, it was necessary to transform the coordinate system of the topographic-map DEM into the same coordinate system as SRTM. The whole transformation process consisted of two parts using the seven-parameter transformation model: the horizontal transformation from the Xi'an 1980 coordinate system (XI'AN80) to the WGS84 coordinate system, and the transformation of the WGS84 geodetic height to the normal height.
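A minimal NumPy sketch of the seven-parameter (Bursa-Wolf) step, formalized in the formula below, is given here; the actual parameter values were fitted from the reference points and are not reproduced, so the arguments are placeholders:

```python
import numpy as np

def bursa_wolf(xyz, dxyz, rot, scale_ppm):
    """Seven-parameter (Bursa-Wolf) Helmert transformation using the
    linearized small-angle rotation matrix.

    xyz       : source geocentric Cartesian coordinates (metres)
    dxyz      : translations (dX, dY, dZ) in metres
    rot       : rotation angles (eX, eY, eZ) in radians, assumed small
    scale_ppm : scale factor in parts per million
    """
    ex, ey, ez = rot
    r = np.array([[1.0,  ez, -ey],
                  [-ez, 1.0,  ex],
                  [ ey, -ex, 1.0]])
    m = scale_ppm * 1e-6
    return np.asarray(dxyz) + (1.0 + m) * (r @ np.asarray(xyz))
```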
The XI'AN80 geodetic coordinates were transformed to XI'AN80 rectangular coordinates, which were then transformed to WGS84 based on the seven-parameter model. This method has been applied to studies of glacial surface elevation change several times [2,8]. Three reference points were used for the coordinate transformation based on the Bursa-Wolf seven-parameter model. The formula, in its standard Bursa-Wolf form, is as follows:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} + (1+m) \begin{bmatrix} 1 & \varepsilon_Z & -\varepsilon_Y \\ -\varepsilon_Z & 1 & \varepsilon_X \\ \varepsilon_Y & -\varepsilon_X & 1 \end{bmatrix} \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where (X1, Y1, Z1) and (X2, Y2, Z2) are the source and target rectangular coordinates, (ΔX, ΔY, ΔZ) are the translations, m is the scale factor, and εX, εY, εZ are the rotation angles.

DEM2005 was generated from the stereo image pair of the ASTER 3N and 3B bands. The reference image used to extract the horizontal coordinates of the ground control points (GCPs) was the panchromatic band of the Landsat OLI image of 15 m resolution taken on 9 August 2013, and the heights of the GCPs were taken from SRTM, which was the reference DEM. Six GCPs were chosen and 45 tie points were extracted. The resolution of DEM2005 was resampled to 20 m.
DEM2013 was generated from the SPOT 6-7 stereo image pairs of 2013/2014, based on the panchromatic band A and B stereo pairs. The 3N band of the ortho-rectified ASTER image taken on 8 December 2005 was used as the reference image for choosing the two-dimensional horizontal coordinates of the GCPs; the heights of the GCPs were also taken from the SRTM DEM. Six GCPs were chosen and more than 100 tie points were adopted for each image. The resolution of DEM2013 was resampled to 20 m, the same as DEM2005.
A thin layer of seasonal snow, present on both the SPOT and ASTER images, reduced contrast and affected DEM extraction, resulting in data gaps in areas of weak contrast. In addition, a small part of each image was covered by thin clouds, which also affected DEM extraction: elevations under cloud may have been overestimated, whereas those in cloud shadow may have been underestimated. Therefore, post-processing of the extracted DEMs was performed to correct artefacts in the areas affected by snow cover and clouds.
DEM Error Assessment
DEM error comes from the source images and the interpolation models; however, it is difficult to quantify the error propagation. In our study, the DEMs were compared with check points for error assessment. Thirty-nine GPS points and 2109 GLAS laser points in the WGS84 coordinate system in the off-glacier area were employed to evaluate the errors of DEM1968, DEM2005, SRTM and DEM2013, with the distribution of all check points shown in Figure 2. However, due to the large differences in coverage between the DEMs, the GPS and GLAS points covered by each DEM were not the same. Since the SPOT data intersected only one GPS point, on its boundary, GPS points were not used to evaluate the error of the SPOT DEM; instead, GLAS points were used for the assessment of DEM2013.
By comparing DEM1968, SRTM and DEM2005 with the GPS points, the differences were found to be −24 ± 10.8 m (mean ± RMSE), −4 ± 4.5 m and −4 ± 6.8 m, respectively (Table 2). The accuracy of the SRTM data was thus the highest, followed by DEM2005 and DEM1968. Comparing DEM1968, SRTM, DEM2005 and DEM2013 with the GLAS points gave differences of 8.7 ± 15.4 m, 0.1 ± 4.7 m, −9.9 ± 18.4 m and 5.0 ± 13.2 m, respectively; the accuracies of SRTM and DEM2013 were clearly higher than those of DEM1968 and DEM2005.
Here, the systematic bias of DEM1968, DEM2005, SRTM and DEM2013 over bare land was significant when compared with the GPS points, as has been noted several times in previous studies [34–37]. It is mainly caused by DEM co-registration error and resolution [35,37,38], and it shows some relation to altitude, slope, aspect, curvature and other terrain factors [34,38,39]. These DEMs should be adjusted to GPS elevations before studying elevation changes. Unfortunately, not enough GPS points were covered by DEM1968, DEM2005 and DEM2013 for a direct adjustment. Thus, the adjustment was implemented in two steps: first, the SRTM elevation was adjusted to the GPS points; second, DEM1968, DEM2005 and DEM2013 were adjusted to the adjusted SRTM.
No clear relation between the SRTM−GPS differences and slope, aspect or curvature was found in this study, but a significant linear trend (α < 0.05) between the SRTM−GPS difference and altitude was found (Figure 3). A linear equation of the form Y = aX + b, with coefficients fitted from the trend in Figure 3, was used for the SRTM adjustment,
where Y is the adjustment value subtracted from SRTM and X is the SRTM elevation. After adjustment, the elevation difference between SRTM and GPS was −1.0 ± 4.1 m; although most of the bias was corrected, a 1 m bias remained, which was taken as the error of the SRTM DEM. However, the bias of SRTM in the glacier area differed from that in the off-glacier area due to the penetration of the SRTM C band into snow and ice [38]. The penetration depth in dry and cold firn is up to 10 m, while it is 1–2 m on exposed ice [40]. The best way to correct the penetration is to compare the SRTM C band with the X band, which shows almost no penetration [38]. In our study, Landsat TM images showed that seasonal snow on 11 and 20 February 2000 (the acquisition dates of SRTM) covered the entire glacier area and part of the off-glacier area, which caused a bias between SRTM X and SRTM C in the off-glacier area. The difference between SRTM X and SRTM C showed a significant linear trend (α < 0.05) with altitude both on- and off-glacier (Figure 4), but with different slopes, implying that the thickness of the seasonal snow increased by about 3 m per kilometer of altitude. This seasonal snow thickness had to be removed before correcting the penetration depth of SRTM C. Finally, a linear equation of the same form, where Y is the value added to SRTM C and X is the SRTM C elevation, was used to correct the penetration in snow and ice. Overall, about 3 m of SRTM C penetration in the glacier area was corrected.
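A sketch of the altitude-dependent adjustment, with the linear coefficients fitted from the off-glacier check points as in Figure 3 (function and variable names are illustrative):

```python
import numpy as np

def altitude_dependent_adjustment(dem, point_elev, point_diff):
    """Fit the linear trend of the DEM-minus-GPS differences against
    altitude and subtract it from the whole DEM.

    point_elev : DEM elevations sampled at the check points
    point_diff : DEM minus GPS elevation at the same points
    """
    a, b = np.polyfit(point_elev, point_diff, deg=1)  # Y = a*X + b
    return dem - (a * dem + b)
```

The same pattern applies to the C-band penetration correction, except that the fitted value is added to SRTM C over the glacier area rather than subtracted.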
When correcting DEM1968, DEM2005 and DEM2013, the co-registration error produces a deviation of the elevation difference as a function of aspect, while coarse resolution produces a deviation as a function of maximum curvature [35,38]. The relationship between dh/tan(slope) (the elevation difference divided by the tangent of the slope) and aspect across the entire aspect range is described by a sinusoidal function [35,38]. However, a non-constant co-registration shift between two DEMs results in an incomplete sine wave within the whole aspect range (360 degrees). The solution would be to divide the DEMs according to the shift value, but the boundary is difficult to define; the same problem was found in the Qilian Mountains [2]. Thus, it was difficult to correct the bias using the method developed by Nuth and Kääb (2011).
Gardelle and others (2012) found a linear bias of the elevation difference as a function of maximum curvature that could be corrected by a linear function [38]. In view of this, DEM1968, DEM2005 and DEM2013 were corrected in two steps: first, linear equations corrected the deviation of the elevation difference with maximum curvature; second, the remaining deviation was fitted to aspect using polynomials. This method was also applied in a glacier change study in the Qilian Mountains [2].
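The two-step correction can be sketched as follows (a simplified version assuming the difference map, maximum-curvature and aspect grids are NaN-free NumPy arrays of equal shape):

```python
import numpy as np

def correct_dh(dh, max_curv, aspect, aspect_fit=True):
    """Two-step bias correction of an elevation-difference map dh.

    Step 1: remove the linear trend of dh with maximum curvature.
    Step 2: remove a 2nd-order polynomial trend of the residual with
            aspect (skipped when no aspect dependence is present).
    """
    a, b = np.polyfit(max_curv.ravel(), dh.ravel(), deg=1)
    dh = dh - (a * max_curv + b)
    if aspect_fit:
        p = np.polyfit(aspect.ravel(), dh.ravel(), deg=2)
        dh = dh - np.polyval(p, aspect)
    return dh
```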
First, three linear equations shown in Figure 5 were employed to correct the deviation of DEM1968, DEM2005 and DEM2013 relative to SRTM as a function of maximum curvature; all R-squared values were above 0.9 and the significance level (α) was below 0.01. Second, a second-order polynomial dependence on aspect was found for SRTM−DEM1968 and DEM2013−SRTM, with two similar equations shown in Figure 6A,C. There was no strong relationship between DEM2005−SRTM and aspect (Figure 6B), implying that its bias was already corrected well by the maximum-curvature-dependent correction, so no aspect-dependent correction was applied to it. As can be seen, the deviation of DEM1968, DEM2005 and DEM2013 relative to SRTM improved significantly after correction (Figure 6). The values of SRTM−DEM1968, DEM2005−SRTM and DEM2013−SRTM were 0.18 ± 12.5 m, 0.86 ± 16.6 m and 0.06 ± 3.7 m, respectively. Following the law of propagation of error, the total error was calculated as

$$\sigma = \sqrt{\sigma_1^2 + \sigma_2^2}$$
where σ is the error of DEM1968, DEM2005 and DEM2013, σ1 is the error of SRTM (1 m), and σ2 is the value of SRTM−DEM1968, DEM2005−SRTM and DEM2013−SRTM, respectively. Thus, the errors of DEM1968, DEM2005 and DEM2013 were 1 m, 1.3 m and 1 m, respectively. Before comparing the DEMs, we removed the no-data areas in the ASTER and SPOT DEMs and excluded overestimated and underestimated areas using a threshold of ±150 m. After that, the DEM from ASTER covered 70% of the glacier area, while the DEMs from SPOT 6-7 covered only 50%.
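Once the DEMs are corrected and co-registered, the differencing and outlier screening reduce to the following (a minimal sketch mirroring the ±150 m threshold above):

```python
import numpy as np

def elevation_difference(dem_new, dem_old, max_abs=150.0):
    """Difference two co-registered DEM arrays; NaN marks rejected
    pixels with |dh| above the threshold (gross blunders)."""
    dh = np.asarray(dem_new, float) - np.asarray(dem_old, float)
    return np.where(np.abs(dh) > max_abs, np.nan, dh)
```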
Area Change
Generally, the annual rates of relative area change were smaller for larger glaciers throughout the assessment period (Table 3). However, the largest glaciers, those of more than 10 km², did not exhibit the slowest shrinkage. This differs from the Nyainqentanglha Range, the Yili river catchment and the Tarim Interior River basin, where glaciers larger than 10 km² had the smallest shrinkage rates [31,41,42]. The reason is analyzed below together with the elevation and length changes. The observation that area changes depend on glacier size, with larger glaciers shrinking at slower percentage rates, has been made in most mountain regions of the world [1].
Length Change
Judging from the length changes of 33 glaciers from 1973 to 2013, glaciers retreated by an average of 412 ± 32 m. The mean retreat rate was 10.3 ± 0.8 m·year−1 and the relative retreat was 5.6 ± 0.4%, which was lower than the area shrinkage rate. The annual retreat in the periods 1973–1988, 1988–2000, 2000–2006 and 2006–2013 was 5.4 ± 2.1 m·year−1, 9.8 ± 1.8 m·year−1, 17.0 ± 3.5 m·year−1 and 15.8 ± 3.0 m·year−1, respectively, suggesting an accelerating retreat up to 2006, after which the retreat slowed slightly. The trend is the same as that of the glacial area changes.
Overall, 29 glaciers retreated and 4 glaciers advanced from 1973 to 2013. The greatest reduction was seen in the glacier Gangjiaquba (5K444B0064) (Figure 1), covering an area of 36.56 ± 1.1 km², which retreated by 3240 ± 32 m (an annual recession of 81 ± 0.8 m·year−1) from 1973 to 2013 (Figure 8). Meanwhile, the area of this glacier changed by −18.3 ± 3.5%, much more than the shrinkage of glaciers of the same size class. The biggest advancing glacier was 5K451F0012, with an area of 9.05 ± 0.28 km², which advanced by 892.3 ± 32.2 m. However, comparing the length changes over different periods showed that the changes were very complicated, with some glaciers retreating and advancing alternately. Ten glaciers (5K444B0039, 5K451F0012, 5K451F0030 (the north branch of Jianggudiru), 5K451F0036, 5K451F0047, 5Z213A0004, 5Z213A0007, 5Z213B0001, 5Z213B0015 (Zengpusong Glacier) and 5Z213B0016) advanced at some point: six glaciers were advancing during 1973 to 2000, two during 2000 to 2006 and only one after 2006.
Another two glaciers of more than 10 km² (5Z221H0011 and 5K451F0040) exhibited particularly strong retreat. Glacier 5Z221H0011, covering an area of 16.13 ± 0.5 km², retreated 2092 ± 32 m (an annual recession of 52.3 ± 0.8 m·year−1) and shrank by 24 ± 3.1% in area from 1973 to 2013. Glacier 5K451F0040, covering an area of 17.35 ± 0.53 km², retreated 1244 ± 32 m (31 ± 0.8 m·year−1) and shrank by 11 ± 2.4% in area over the same period. Both changed much more than glaciers of the same size class. This suggests that the particularly high retreat rates of these three glaciers partly explain the large mean changes of glaciers greater than 10 km². The reason for this rapid retreat and shrinkage is unclear and will require further analysis in the context of glacier geometry and surface velocities.
Surface Elevation Changes
As there was a large non-overlapping invalid-data zone in DEM2005 and DEM2013, the elevation difference between these two DEMs is not shown in this study.
Changes of Advancing/Surging Glaciers
The length and surface elevation change analyses showed that there were 10 advancing and another 6 potentially surging glaciers in the Geladandong Peak region from 1973 to 2013. The mean area of these glaciers was 22.6 ± 0.7 km²; ten of them were larger than 10 km² and the others were close to 10 km² (Table S1). The area shrinkage rate of these advancing/surging glaciers was much lower than that of non-advancing glaciers larger than 10 km²: the advancing/surging glaciers shrank at a rate of 0.13%·year−1, while glaciers with areas of more than 10 km² overall shrank by 0.17%·year−1 (Table 3).
Previous studies showed that the downward displacement of ice and heavy crevassing (especially newly developed crevasses) on the glacier tongue are the signs used to identify surging glaciers [43,44]. Here, we compared the images before and after advance and displacement events to find crevasse development. The results showed that 11 glaciers were identified as surging glaciers (Table 4); many new crevasses developed on all of them (Figures S1–S15). Five glaciers surged after 2000, three during 1988–2000 and one during 1973–1988. As a result, the areas of several glaciers increased (Table S1). Glaciers 5K444B0039, 5K451F0047, 5Z213A0007 and 5Z213B0001 were normal advancing glaciers, responding to positive glacier mass balance controlled by climate forcing; this kind of advance was also found on Franz Josef Glacier in New Zealand and during the Younger Dryas in north Greenland [45,46]. Although the thickness of glacier 5Z213B0014 increased in the tongue area, it did not advance between 1973 and 2013 and there was no heavy crevassing on it, suggesting that it is not a surging glacier; the assessment of glacier surface elevation changes showed that its thickness increased in the 2000–2013 period (Table 5). Usually, there is a time lag in the response of glacier length to changes in mass [47]. Glacier 5Z213A0007 illustrates this lag: it thickened by 15.7 ± 1.3 m from 1968 to 2000 and advanced by 450 ± 15 m after 2006, a roughly 6-year lag of the length change behind the elevation change.
Discussion
In our study, ASTER and SPOT 6-7 data were used to measure changes in glacier surface elevation. Seasonal snow was present on parts of the images and its depth was not known in the absence of field measurements. The field measurements of the mass balance of Xiao Dongkemadi glacier, close to our study region, showed a cumulative mass balance of −1584 mm w.e. (water equivalent; equal to a −1.76 m change in ice thickness) in the 2008–2012 period, with only 25% of snowfall occurring in winter [48]. The presence of seasonal snow could have resulted in a thickness uncertainty of ±0.11 m. Results in areas covered by shadows were excluded, and the influence of shadow on the surface elevation change results is not discussed.
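The quoted conversion from water equivalent to ice thickness follows from assuming an ice density of roughly 900 kg·m−3:

$$\Delta h_{\mathrm{ice}} = \frac{b_{\mathrm{w.e.}}}{\rho_{\mathrm{ice}}/\rho_{\mathrm{water}}} = \frac{-1.584\ \mathrm{m\ w.e.}}{0.9} \approx -1.76\ \mathrm{m}$$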
The results showed that surging or advancing glaciers covered an area of 361.27 ± 11.2 km², accounting for 42% of the total glacier area (Table S1). We acknowledge that a more in-depth study is required to establish whether these glaciers were of surging type or whether their advance was climatically driven. Surging glaciers respond to climate change in a different way from non-surging glaciers and should be treated separately when assessing the impacts of climate change [49].
Here, we calculated the changes in length, area and surface elevation of the non-surging glaciers separately from the surging/advancing glaciers. The combined area of the non-surging glaciers declined by 9.1 ± 2.8% (0.23 ± 0.07%·year−1) from 1973 to 2013. The retreat rate of the non-surging glaciers was 7.0 ± 0.8 m·year−1 from 1973 to 2013 and their surface lowering rate was 0.23 ± 0.12 m·year−1 from 1968 to 2013.
Areas of several such glaciers increased while their surface elevation declined during the surging periods (Tables S1 and 5). The changes in length were less pronounced than the changes in area, and many such glaciers exhibited rapid retreat following surges. Changes in the surface elevation of the surging glaciers were stronger than in the non-surging glaciers, probably because ice mass is displaced down-glacier at 10–1000 times the normal velocity during a surge [50,51], pushing more ice towards lower altitudes where ablation is more rapid. Previous studies suggested that changes in surface elevation are the most relevant measure of glacial responses to climate variability; we add that in glacierized regions containing surging glaciers, the mass balance changes of such glaciers are stronger than those of other glaciers [4,49].
The reasons for the rapid recession of the three glaciers (5K444B0064, 5Z221H0011 and 5K451F0040) were different. Glacier 5K451F0040 separated into two almost equally sized glaciers between 1973 and 2013 (Figure 10); the upper branch shrank much faster than the lower branch, although there were no significant differences in aspect, slope or elevation. Glacier 5Z221H0011 was a long valley glacier with a flatter and longer snout than the others, a geometry that results in slower ice movement towards the glacier terminus [47]; other researchers have reported similar results, with valley glaciers on the southern slope of the central Main Caucasus ridge shrinking faster than other glaciers [52]. Glacier 5K444B0064 was a surging glacier that advanced in 2014, judging from the SPOT 6-7 images of 2013 and 2014. Compared with other regions in western China, the thinning of glaciers in the Geladandong Peak region is greater than in the Tuanjiefeng Peak region of the Qilian Mountains, on the north face of Tuomuer Peak in the Tien Shan Mountains and in the Bangong Lake basin of the Tibetan Plateau, while the area change of glaciers in this region is smaller than in those regions [2,6,42,53,54]. This suggests that the response of glacier area to climate change lags behind the response of glacier thickness. Studies have shown that the larger a glacier is, the longer its response lag to climate change [47]. The mean glacier area in the Geladandong Peak region is 6.0 km², much larger than that in the Tuanjiefeng Peak region of the Qilian Mountains (1.7 km²). Therefore, the shrinkage of glacier area in the Geladandong Peak region was smaller than in the Tuanjiefeng Peak region, although the thickness reduction was larger. In addition, this lag was also the cause of the rate of area reduction decreasing with increasing glacier size.
Studies have shown that air temperature and annual precipitation are the major drivers of glacier changes, affecting the accumulation and ablation of glaciers [1]. According to recent data from the Tuotuohe meteorological station in the Geladandong Peak region, both temperature and precipitation increased from 1957 to 2014 (Figure 11). Temperature increased by 0.3 °C per decade and precipitation by 12.7 mm per decade. In addition, the decadal means of temperature and precipitation showed significant upward trends. After 2000, the increases in temperature and precipitation became more pronounced.
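The quoted decadal rates can be reproduced from an annual station series with an ordinary least-squares fit. Below is a minimal sketch; the synthetic series stands in for the Tuotuohe records, which are not reproduced here, so the printed numbers are illustrative only.

```python
import numpy as np

# Hypothetical annual series standing in for the Tuotuohe station records
years = np.arange(1957, 2015)
rng = np.random.default_rng(0)
temperature = -4.0 + 0.03 * (years - 1957) + rng.normal(0.0, 0.4, years.size)     # deg C
precipitation = 280.0 + 1.27 * (years - 1957) + rng.normal(0.0, 30.0, years.size)  # mm

def decadal_trend(years, series):
    """Least-squares slope of the series, scaled to change per decade."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope * 10.0

print(f"temperature trend:   {decadal_trend(years, temperature):+.2f} deg C per decade")
print(f"precipitation trend: {decadal_trend(years, precipitation):+.1f} mm per decade")
```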
The surface-elevation changes of the glaciers exhibited the same trend as the area changes over the period from the 1970s to 2013, and this trend is related to the trend in climate. Rising temperature was the major cause of glacier thinning and area shrinkage. In the late 1980s, increasing temperature and decreasing precipitation intensified glacial ablation and weakened accumulation, which aggravated glacier thinning and area shrinkage after 2000. The increase in precipitation over the past 10 years did not weaken the trends of shrinking glacial area and decreasing glacier thickness.
Conclusions
In this study, the response of glaciers in the Geladandong Peak region in the central Tibetan Plateau to climate change was revealed based on multi-temporal variations in glacier length, area and surface elevation. The conclusions are as follows:

(1) From 1973 to 2013, glaciers retreated 412 ± 32 m on average, with a mean retreat rate of 10.3 ± 0.8 m·year−1 and a relative retreat rate of 5.6 ± 0.4%. The glacier area decreased, with an overall shrinkage of 7.5 ± 3.4%, which was greater than the decrease in length. The glacial area shrank by 2.8 ± 2.1% (0.18%·year−1), 1.9 ± 1.4% (0.16%·year−1), 1.4 ± 1.4% (0.23%·year−1) and 1.6 ± 1.4% (0.

(3) Data provided by meteorological stations showed that temperature increased by 0.3 °C per decade and precipitation by 12.7 mm per decade from 1957 to 2014. The shrinkage of the glacial area was primarily due to increasing temperature (mainly in summer). Increasing temperature was evidently the major cause of glacier thinning and area shrinkage. The increase in precipitation inhibited glacial ablation to a certain extent, but it did not change the shrinking of the glacial area or the reduction in glacier thickness.
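The headline rates in conclusion (1) follow directly from the reported length change and the 40-year observation window; a minimal arithmetic check:

```python
# Values taken from the text above
retreat_m = 412.0            # mean terminus retreat, m
retreat_err_m = 32.0         # reported uncertainty, m
span_years = 2013 - 1973     # 40-year observation window

rate = retreat_m / span_years          # mean retreat rate, m/year
rate_err = retreat_err_m / span_years

print(f"mean retreat rate: {rate:.1f} +/- {rate_err:.1f} m/year")  # 10.3 +/- 0.8
```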
Correlation between histopathology and frozen study of ovarian carcinoma
Introduction: To compare frozen section results with definitive histopathological results of ovarian tumors diagnosed intraoperatively at Saveetha Medical College and Hospital, Chennai. Materials and Methods: In this study we compared the results of 30 cases of frozen histology with the histopathological diagnosis at the Department of Pathology, Saveetha Medical College and Hospital, Chennai, during July 2017-July 2018. Results: A total of 30 cases were studied, correlating the histopathological and frozen diagnoses of ovarian carcinoma. The diagnoses of 28 cases were concordant, whereas those of 2 cases were discordant. Conclusion: The frozen section is a very accurate method and provides rapid results. Of the 30 cases, 2 were discordant, which might have resulted from sampling errors, technical problems or intraoperative error. Appropriate measures should be taken to reduce error rates.
Introduction
The frozen section procedure is a pathological laboratory procedure for fast microscopic analysis of a specimen; 1 its technical name is cryosection. Studies of the accuracy of frozen section diagnosis concluded that, for tumors that were clearly either benign or malignant, the accuracy of the frozen section was good, as later confirmed by regular biopsy. By contrast, where the frozen section diagnosis was a borderline tumor, the diagnosis was less accurate. 2 The frozen section is used to guide intraoperative or perioperative patient management because it provides a rapid diagnosis, allowing more efficient management of the patient. 3 Ovarian cancer is one of the most common cancers in women, especially those aged over 60 years.

Ovarian cancer mostly goes undetected until it has spread within the pelvis and abdomen. At this late stage, ovarian cancer is more difficult to treat, but if it is detected at an early stage, when the disease is confined to the ovary, it is more likely to be treated successfully. 4 The type of ovarian cancer is determined by the type of cell from which the cancer began. The WHO has classified ovarian tumours into four categories:
1. Epithelial tumours - the most common type of ovarian tumour
2. Germ cell tumours - comprising 10-20% of ovarian tumours
3. Sex cord-stromal tumours - comprising about 5% of ovarian tumours
4. Others
The cryostat is the instrument used to freeze the tissue and to cut the frozen tissue for microscopic sections. Freezing the tissue sample converts its water to ice; 5 the firm ice within the tissue acts as an embedding medium for cutting. 6 Periodic review of the correlation between the frozen section diagnosis and the final diagnosis is useful for identifying potential causes of errors, so that measures can be implemented to help prevent similar occurrences. 7 Strict guidelines should therefore be followed to prevent these errors.
Methods and materials
The study was carried out in the Frozen Section and Histopathology Division of the Department of Pathology, Saveetha Medical College and Hospitals, Chennai, from July 2017 to July 2018. A total of 30 cases were included. Fresh tissue was sent to the frozen section room, where the specimens were dissected and inspected. 8 Optimal cutting temperature (OCT) compound was used to prepare blocks for sectioning on the cryostat, after which sections were stained with hematoxylin-eosin. The frozen section diagnoses were immediately reported to the concerned clinicians. 9 The non-frozen tissues were then sent to the histopathology laboratory, fixed in 10% formalin solution, processed for routine paraffin sections, stained with hematoxylin-eosin the next day, and reported. 10 The impressions from frozen histology and histopathology were compared, and the accuracy and specificity of frozen section reporting were determined relative to routine histopathology reporting. 11 In total, the histopathological and frozen section diagnoses of 30 cases were compared.
Discussion
Histopathological examination of all 30 ovarian specimens revealed 66.66% benign tumours and 33.34% malignant tumours. The frozen section diagnoses comprised 60% benign tumours and 40% malignant tumours.

The overall accuracy rate of frozen section analysis was 93.33%, with a failure rate of 6.67%. The discordant results could have occurred due to sampling errors.
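The accuracy figure is simply the proportion of concordant frozen-section and paraffin-section diagnoses. A short sketch of the computation, using the counts reported in this study:

```python
# Concordance between frozen-section and final histopathological diagnoses
total_cases = 30
concordant = 28
discordant = total_cases - concordant

accuracy = 100.0 * concordant / total_cases      # 93.33%
failure_rate = 100.0 * discordant / total_cases  # 6.67%

print(f"accuracy: {accuracy:.2f}%  failure rate: {failure_rate:.2f}%")
```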
These findings are in concordance with those of Chandramouleeswari K. et al. 12,3 and Shrestha S. et al., 2 who reported accuracy rates of 92% and 94.6%, respectively. However, the studies of Junn-Liang et al. 13 and Farah-Klibi F. et al. 14 showed slightly higher accuracy rates of 97.7% and 97.5%, respectively, with relatively fewer discordant results.

In one case, a benign ovarian tumor reported on frozen section turned out to be an ovarian fibroma on conventional paraffin section. 15 In another case, a tumor reported as a benign serous cystofibroma on frozen section turned out to be a serous borderline tumor on paraffin section.

Such discordant results can sometimes be observed. 5 These erroneous diagnoses were attributable to the pathologist and may have resulted from the method of freezing, the type of procedure, the type of lesion, and similar factors.
Appropriate measures and strict guidelines would help to reduce the failure rates.
Conclusion
Intraoperative frozen section diagnosis appears to be an accurate technique for the histopathological diagnosis of ovarian tumours.
The results can be used to guide surgery. Frozen diagnosis can provide the rapid, reliable, cost-effective information necessary for optimal patient care. 16 Evaluation of the frozen section diagnosis against the histopathological diagnosis should be carried out regularly for more efficient management of ovarian tumors.

The diagnostic accuracy of the frozen section, as an important source of information during surgical procedures, matters not only for the management of surgical patients but also as a measure of quality control in surgical pathology. 17 To reduce error rates and improve frozen section diagnosis, continuous monitoring in the pathology department should be carried out on a regular basis to attain better results. 18 This correlation between the histopathological diagnosis and the frozen section diagnosis is very useful for identifying the tumours.
Source of Funding
None.
Conflict of Interest
None.
Efficient Model-Free Reinforcement Learning Using Gaussian Process
Efficient reinforcement learning usually takes advantage of demonstrations or a good exploration strategy. By applying posterior sampling in model-free RL under the GP hypothesis, we propose the Gaussian Process Posterior Sampling Temporal Difference (GPPSTD) algorithm for continuous state spaces, giving theoretical justifications and empirical results. We also provide theoretical and empirical evidence that varied demonstrations can lower the expected uncertainty and benefit posterior-sampling exploration. In this way, we combine the demonstration and exploration processes to achieve more efficient reinforcement learning.
Introduction
Over the past years, reinforcement learning (RL) has achieved great success in tasks such as Atari games (Mnih et al., 2015), Go (Silver et al., 2016), robot control (Levine et al., 2016), and high-level decisions (Silver et al., 2013). But in general, conventional RL approaches can hardly obtain good performance before a large number of experiences has been collected. Therefore, two types of methods have been proposed to realize sample-efficient learning: leveraging human demonstration (e.g., inverse RL (Ng et al., 2000)) and designing better exploration strategies. Although the literature has plenty of interesting studies on either one, there seems to be a lack of work combining them, to the best of our knowledge. In this paper we propose a new model-free exploration strategy that leverages all kinds of demonstrations (even unsuccessful ones) to improve learning efficiency.
Existing works on learning from demonstration are mainly focused on inferring the underlying reward function (in IRL) or imitating expert demonstrations (Ng et al., 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016; Hester et al., 2017). Hence, most methods can only exploit demonstrations that are optimal. However, truly optimal demonstrations are hard to obtain in practice, since humans often perform suboptimal behaviors. Therefore, mediocre and unsuccessful demonstrations have long been neglected or even expelled in RL. In this paper, we show how to make use of seemingly useless demonstrations in the exploration process to improve sample efficiency.
Speaking of efficient exploration strategies, an agent is expected to balance exploring poorly understood state-action pairs, to perform better in the future, against exploiting existing knowledge, to perform better now. The exploration-versus-exploitation problem also has two families of methods: model-based and model-free. Model-based means the agent explicitly models the Markov decision process (MDP) environment and then plans over the model; in contrast, model-free methods maintain no such environment model. Typical model-free exploration approaches include ε-greedy (Sutton & Barto, 1998), optimistic initialization (Ross et al., 2011), and more sophisticated ones such as noisy networks (Fortunato et al., 2017) and curiosity (Pathak et al., 2017). These model-free exploration strategies are usually capable of handling large-scale real problems; however, they do not have theoretical guarantees. The model-based explorations are more systematic and thus often have theoretical bounds, such as Optimism in the Face of Uncertainty (OFU) (Jaksch et al., 2010) and Posterior Sampling Reinforcement Learning (PSRL) (Osband et al., 2013). Despite their beautiful theoretical guarantees, the model-based methods suffer from significant computational complexity when the state-action space is large, and are hence usually not suitable for large-scale real problems.

How can we combine the advantages of both demonstration and exploration strategy to gain even more efficient learning for RL? In this paper, we propose a model-free RL exploration algorithm, GPPSTD, using posterior sampling on a joint Gaussian value function, and provide theoretical analysis of its efficiency. We also make use of varied demonstrations to decrease the expected uncertainty of the Q-value model, and then leverage this advantage when implementing posterior sampling on Q values to gain more efficient exploration.

In summary, our contributions include: • showing that posterior sampling based on model-free … (Ziebart et al., 2008; Audiffren et al., 2015), Repeated IRL (Amin et al., 2017), etc. But IRL can be intractable when the problem scale is large. Earlier imitation learning means behavior cloning, which can fail when the agent encounters untrained states. Later representative IL algorithms include Data Aggregation (DAgger) (Ross et al., 2011) and Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016). However, these works focus on imitating optimal demonstrations, regarding mediocre and failed demonstrations as unusable, and never consider the exploration problem after imitation.

As for the exploration problem, two intuitive methods, ε-greedy (Sutton & Barto, 1998) and optimistic initialization (Grześ & Kudenko, 2009), are the earliest ways to tackle it. ε-greedy explores with probability ε. Optimistic initialization initializes all Q values to r_max/(1 − γ), making the RL agent visit each state at least some number of times. The model-based method Optimism in the Face of Uncertainty (OFU) assigns each state-action pair a biased estimate of future value and selects the action with the highest estimate (Jaksch et al., 2010). Posterior sampling methods have been proposed since Strens (2000), involving sampling a set of values from the posterior estimate and selecting the action with the maximal sampled value. PSRL, proposed by Osband et al. (2013), performs posterior sampling on the MDP: in every episode, PSRL samples an MDP, runs a model-based planning algorithm, and acts as if the resulting policy were the true optimal policy. For finite-horizon algorithms, a regret bound of O(HS√(AT)) is achieved by PSRL (Osband et al., 2013), and O(H√(SAT)) by GPSRL. It is notable that these methods are all model-based with finite state-action spaces, which can be a considerable limitation in applications.

However, since PSRL is a model-based algorithm, it suffers from significant computational complexity for planning when the state and action spaces are large. Therefore, in this paper we build a model of the value function based on a Gaussian process (GP), making the method model-free, in order to achieve both exploration efficiency and tractable computational complexity.

Previous model-free algorithms using GPs in RL have also been proposed. GP-SARSA (Engel et al., 2005) used a GP to update the posterior estimate of the value function by the temporal difference method. iGP-SARSA proposed informative exploration but lacks theoretical analysis (Chung et al., 2013). GPQ, for both online and batch settings, aims at learning a Q function that actually converges as T → ∞ (Chowdhary et al., 2014), but lacks efficient exploration. DGPQ employed delayed updates of the Q function to achieve PAC-MDP guarantees (Grande et al., 2014), but still lacks efficient exploration.

For regret bounds under the GP hypothesis, Srinivas et al. (2012) used GPs to analyze regret bounds via information gain in bandit problems, while posterior sampling using GPs and the related analysis of regret bounds had not yet been explored; these are discussed in this paper.
Theoretical Analysis
In this section, we show how to choose demonstrations to achieve lower expected estimation variance, analyze the related bounds of posterior sampling in RL under the GP hypothesis for both deterministic and non-deterministic MDPs, and finally relate the choice of demonstrations to posterior sampling for efficiency improvement.
Expectation of variance conditioned on data in GP
We choose a joint Gaussian distribution on the value function (more specifically, a Gaussian process, GP) because GPs provide a principled, practical, probabilistic approach to learning in kernel machines (Rasmussen & Williams, 2006).

We assume that the values in the value function are jointly normally distributed. Under the GP assumption, the posterior distribution is given by

f* | X, f, X* ∼ N( K(X*, X) K(X, X)^{−1} f,  K(X*, X*) − K(X*, X) K(X, X)^{−1} K(X, X*) ),

where f is the vector of values at the state set X, and we wish to obtain the value estimate f* at the new observations X*. Here f and X come from history, or what we call experiences. We define p(x) as the distribution of test points, i.e., the states which occur in RL. In the framework of RL, x is a single state, and its visiting distribution p(x) is determined by the current policy µ and the MDP (Markov decision process).
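For concreteness, the standard GP posterior can be computed in a few lines. The sketch below is an illustrative implementation (not the authors' code) of the posterior mean and variance at test points X* given training pairs (X, f), under an RBF kernel with a small noise term for numerical stability:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return var * np.exp(-0.5 * sq_dists / length**2)

def gp_posterior(X, f, X_star, noise=1e-2):
    """Posterior mean and pointwise variance of f(X_star) given (X, f)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    alpha = np.linalg.solve(K, f)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)
```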
We begin with a theorem that is intuitively obvious but has not been proved before.

Theorem 1. When a set (X, f) is used to estimate f(x*) in a GP, the expectation of the variance at test points x* with distribution p(x), conditioned on an arbitrary training set (X', f'), is no less than that conditioned on a training set X sampled from the distribution p(x), provided the sample set is large enough that the approximation error can be ignored (Rasmussen & Williams, 2006).

We consider the expectation of the posterior variance over the distribution p(x), given an arbitrary training set X'. Since the term ∫ K(x*, x*) p(x*) dx* does not depend on X', we focus on the subtracted part. If i does not equal j, the corresponding cross-term integral is 0. For each i, we focus on φ_i(X')^T K(X', X')^{−1} φ_i(X'). We use the numerical approximation of eigenfunctions (Rasmussen & Williams, 2006), in which, when each x_l is sampled from p(x), each u_i and λ_i^mat is an eigenvector and eigenvalue of the matrix K(X, X). Given an arbitrary set X' and a sampled set X, although we do not know φ(x) exactly, we can use X to estimate the values of the eigenfunctions on X'; as n → ∞, all the above estimates can be regarded as asymptotically unbiased, and here we suppose n is large enough to ignore the approximation error, so the approximate equations can be treated as equations. Applying an eigendecomposition to the symmetric non-negative definite matrix K(X', X') and using the Cauchy-Schwarz inequality, the result is largest when λ_j^mat equals λ_i^mat, and the lowest expectation of the overall conditional variance is attained for the training set sampled from p(x). Moreover, if the kernel contains noise, K(X, X) + σ²I is still a symmetric non-negative definite matrix, so the eigenvalues in the previous analysis are all shifted by σ² while the eigenvectors remain the same; obviously the conclusion remains unchanged.

Notice that, during the learning process of RL, if the agent has not yet learned to perform perfectly, the states it encounters under the present policy will not be those of highest true value. So non-perfect demonstrations are necessary to lower the expectation of uncertainty during exploration.
BayesRegret of GP-based Posterior Sampling
Consider an MDP M = {S, A, R_M, P_M, H, ρ}, where µ is the policy function over states, and the value function is

V^{µ,M}(s) = E[ Σ_t γ^t R_M(s_t, µ(s_t)) | s_0 = s ],

where γ is the discount rate and satisfies 0 < γ ≤ 1.

We assume that, given an MDP M, V^{µ_M, M}(s) is jointly normal on the set of states S with the optimal policy µ_M in M; this constitutes the model assumption of our model-free method.

Define the expected cumulative reward of the kth episode as S_k^{µ,M}. The regret of every episode is random, due to the unknown true MDP M*, the learning algorithm π, the sampling of M_k in the present episode, and the previous samplings through the history H_{k1}. Notice that in our algorithm we do not directly sample M_k from the posterior distribution φ(·|H_{k1}); we use the posterior distribution of the values to realize our sampling. For convenience, however, we use the sampled M_k to refer to our way of sampling in practice.

The Bayesian regret is defined accordingly; it is actually the same as the regret defined in prior work. Since we have a different definition of the value function, we use other notation to avoid confusion.

We separate this BayesRegret by episodes, conditioning each episode k on the previous history H_{k1} and then taking the expectation again, in order to obtain the expectation over M*. We discuss the relation between the BayesRegret and the conditional regret, which differs from the method of previous work.

It is obvious that each H_{k1} contains the previous history, and the histories are not independent; so when we take the expectation over a series of H_{k1}, we actually take the expectation over the whole history H.

Since we will use the stochastic properties of M_k to analyze the bound, another thing to notice is that, since H is actually produced by every sampled M_k, taking the expectation over H does not disturb the distribution of M_k. So if we can bound the conditional regret (as described above) for every possible M_k from its distribution, then taking the expectation also bounds the BayesRegret.

Theorem 2. Let M* be the true MDP, with deterministic transitions, drawn according to the prior φ, with values under the GP hypothesis. Then the regret of GPPSTD is bounded: BayesRegret(T, π_GPPSTD, φ) = Õ(√(HT)).
Proof
Decomposition: First we focus on the difference we can observe under the policy µ_k that the agent actually follows, i.e., S_k^{µ_k, M_k} − S_k^{µ_k, M*} in (12).

Referring to the previous definition (9), E_{M*}[· | H_{k1}] means taking the expectation over M* ∼ φ(·|H_{k1}). Recall our assumption on V in the definitions. Although V^{µ_k, M*} does not satisfy a joint normal distribution, since its policy is not optimal for its MDP, M_k is still sampled from the posterior distribution of M*, which means that, given the history H_{k1}, the posterior samples R_k, P_k and the unknown true R*, P* are identically distributed. So the expectation of (13) (over M_k while performing posterior sampling) is zero-mean, and the quantity is a sum of a series of jointly normal variables. We focus on the variance next.

Lemma 1 (Transformation of Joint Normal Variables). If X ∼ N_p(µ, Σ) and A is an l × p matrix with rank(A) = l, then AX ∼ N_l(Aµ, AΣA^T). To calculate the sum of the components, let A be a row vector filled with 1s.

So we have proved that, given the history H_{k1}, the difference is normally distributed with expectation 0 and variance ≤ H² max_(k)σ², where max_(k)σ² is the maximal variance over the states in episode k. Now back to the first difference in (12).

Recall that S^{µ, M_k} is a sum of jointly normal variables, so, similarly to the previous analysis, each difference conditioned on H_{k1} has zero mean and variance ≤ 4H² max σ², by analyzing the covariance as in the previous part.

So, noticing the independence of sampling between episodes, as analyzed before (where E_H means taking the expectation over H), set δ = 1/T and let max σ² be the maximal variance over all states in all episodes (for a worst-case bound); then with probability 1 − 1/T the regret is at most 2√(2 max σ² (HT + H) log T).

In general cases (such as RBF and Matérn kernels), σ² is bounded (only in a few cases, such as dot-product kernels on infinite spaces, can the covariance not be bounded, while most continuous spaces in RL have borders), so this is a sub-linear bound, which means the agent actually learns the real MDP in the end. Notice that we use max σ² only for a brief worst-case bound, while the true regret is related to the individual variances and covariances of the states. This result is better than previous posterior-sampling analyses (PSRL achieves √(HSAT) empirically but H√(SAT) theoretically). As the GP acquires more information about the environment during exploration, the variance decays, so the bound can actually be even better.
NON-DETERMINISTIC MDP
The true MDP is M* = {S, A, R_M, P_M, H, ρ} ∼ φ; the other notation is the same as in Section 3.2.1, except that P_M is a stochastic transition in M and ρ is the distribution of initial states.

Since the transitions are not deterministic and the states are continuous, the cumulative reward can depend on the values of countless states. Since we have no assumptions on the stochastic transition function, which would be necessary for regret analysis in a non-deterministic environment, we focus on the cumulative estimation error for any single state during the learning process.

We show that the cumulative error (CumError) also leads to convergence of the estimation, as described below. The proof of Theorem 3 is given in Appendix A.

Demonstrations for Posterior Sampling

Now back to our reason for making use of demonstrations. Consider the expected variance over all states with distribution p(s) of our estimate of the value function, where p(s) is determined by the posterior distribution of the value function and the present policy. The analyses in Sections 3.2.1 and 3.2.2 use max σ² only for a worst-case bound, while the real regret is determined by each individual σ². So if we obtain a lower expected variance, a lower regret is achieved with high probability by Markov's inequality: P(σ² ≥ a) ≤ E[σ²]/a. That is, for the same parameter a, the lower the expectation, the lower the probability that σ² is larger than a.

The above analysis requires that we use a sample set X drawn from the distribution p(x) as demonstrations, while in fact we do not know the exact p(x). As a compromise, we can improve the efficiency of the learning process using demonstrations that contain situations similar to the present episode, which is rational from intuition and also produces better results in practice, as shown in Section 6.
Gaussian Process Temporal Difference
GPTD was first introduced by Engel et al. (2003) and then improved by Engel et al. (2005). We briefly explain its overall framework here, since our algorithm is closely related to it.

GPTD proposes a generative model for the sequence of rewards corresponding to the trajectory x_1, x_2, · · · , x_t:

R(x_i) = V(x_i) − γ V(x_{i+1}) + N(x_i, x_{i+1}),   (15)

where R is the reward process observed in experience, V is the value Gaussian process, and N is a noise process.

We may rewrite (15) in vector form using (16) as

R_{t−1} = H_t V_t + N_t.   (17)

In order to complete the probabilistic generative model connecting reward observations and values, we impose a Gaussian prior over V, i.e., V ∼ N(0, k(·, ·)), in which k is the kernel chosen to reflect our prior beliefs concerning the correlations between the values. We also define N_t ∼ N(0, Σ_t), with Σ_t = σ² H_t H_t^T, where σ is the observation noise level (Engel et al., 2005).

Since both the value prior and the observation noise are Gaussian, the posterior distribution of the value conditioned on the observation sequence r_{t−1} = (r_0, · · · , r_{t−1})^T is also Gaussian and is given by

V̂_t(x) = k_t(x)^T α_t,   p_t(x) = k(x, x) − k_t(x)^T C_t k_t(x),   (18)

with α_t = H_t^T (H_t K_t H_t^T + Σ_t)^{−1} r_{t−1} and C_t = H_t^T (H_t K_t H_t^T + Σ_t)^{−1} H_t.
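A compact sketch of this posterior, following the structure of Engel et al. (2005): given the trajectory kernel matrix K_t, the temporal-difference matrix H_t, and the observed rewards, the value posterior at a query point is Gaussian. The function names and the standalone form below are ours, not taken from that paper:

```python
import numpy as np

def td_matrix(t, gamma):
    """Temporal-difference matrix H_t with rows [..., 1, -gamma, ...]."""
    H = np.zeros((t - 1, t))
    for i in range(t - 1):
        H[i, i], H[i, i + 1] = 1.0, -gamma
    return H

def gptd_posterior(K, k_star, k_ss, rewards, gamma, sigma=0.1):
    """Posterior mean and variance of V at a query point under the GPTD model.

    K: t x t kernel matrix of visited states; k_star: kernel vector between
    the query point and visited states; k_ss: prior variance at the query.
    """
    H = td_matrix(K.shape[0], gamma)
    G = H @ K @ H.T + sigma**2 * (H @ H.T)   # noise Sigma_t = sigma^2 H_t H_t^T
    w = np.linalg.solve(G, rewards)
    mean = k_star @ H.T @ w
    var = k_ss - k_star @ H.T @ np.linalg.solve(G, H @ k_star)
    return mean, var
```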
GPPSTD
Now we are ready to present the Gaussian Process Posterior Sampling Temporal Difference (GPPSTD) algorithm, described in Algorithm 1. We adopt the GPTD framework to obtain the posterior Q-value distribution of a state-action pair conditioned on all reward experiences, by Equation (18). We note that, similarly to the GP-SARSA method (Engel et al., 2005), we treat the state-action pair as x_t, and therefore model the Q value of state-action pairs rather than the V value of states in the GP. We also use an episodic algorithm with fixed episode length, as required by the analysis.

As analyzed before, we only update the GP model after an episode ends. Posterior sampling should depend on the joint distribution of all the state-action pairs in one episode, but during exploration the agent does not know exactly which state-action pairs it will encounter in the following steps within the episode. We overcome this problem by using the conditional distribution of joint variables, as in the analysis below.

We apply the posterior sampling method via a = arg max_a Q_sampled(s_t, a). Denote the already-sampled Q_i = Q(s_i, a_i) (i = 1, 2, · · · , t). In a single episode, when Q_1, Q_2, · · · , Q_{t−1} have been sampled, the posterior Q_t and all previous Qs are jointly Gaussian, in which Q(s_t, ·) stands for the Q values of all possible actions at s_t, µ(s_t, ·) stands for their posterior means, and Σ_xx, Σ_{x*x}, Σ_{x*x*} stand for the posterior covariance matrices given by the GP. Using standard multivariate Gaussian conditional results, we obtain the posterior sample (20); by subtracting µ_t in (20), we have each conditional noise (21). So at each timestep t, we perform action selection by sampling a noise from the conditional distribution, adding it to the posterior mean of Q, and choosing the best action according to the noised Q. At the end of the episode we use the collected observation sequence to update our GP model by updating K_t, α_t, C_t in (18) (for more detail, we refer readers to Engel et al. 2005).

Algorithm 1 GPPSTD
  Initialize GP model M
  repeat
    Initialize the initial state s_1 and the episode memory
    for timestep t = 1 to H do
      Obtain µ(s_t, ·), Σ from M using (18)
      Sample n(s_t, ·) according to (21)
      Perform a = arg max_a (µ(s_t, a) + n(s_t, a))
      Observe s_{t+1}, r
      Memory.add((s_{t−1}, a_{t−1}, r, s_t, a_t))
    end for
    GPTD.Update(M, Memory)
  until the convergence requirement on M is satisfied

This exploration is bounded by Theorem 2 (in deterministic environments) or Theorem 3 (in non-deterministic environments).
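The conditional sampling step in Algorithm 1 is the standard multivariate Gaussian conditioning formula. A sketch in our own notation, assuming the covariance blocks have already been read off from the GP posterior:

```python
import numpy as np

def conditional_noise(mu_t, Sigma_xx, Sigma_xs, Sigma_ss, q_prev, mu_prev, rng):
    """Sample Q(s_t, .) given already-sampled Q_1..Q_{t-1}; return it minus its mean.

    Sigma_xx: covariance among previous Qs; Sigma_xs: cross-covariance with
    Q(s_t, .); Sigma_ss: covariance over actions at s_t.
    """
    A = np.linalg.solve(Sigma_xx, Sigma_xs)          # Sigma_xx^{-1} Sigma_xs
    cond_mean = mu_t + A.T @ (q_prev - mu_prev)
    cond_cov = Sigma_ss - Sigma_xs.T @ A
    sample = rng.multivariate_normal(cond_mean, cond_cov)
    return sample - mu_t                             # conditional noise n(s_t, .)
```

The agent then acts with a = arg max over actions of µ(s_t, a) + n(s_t, a), matching the action-selection line of Algorithm 1.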
It is worth mentioning that, because our policy remains unchanged during one episode, it achieves deep exploration (Russo et al., 2017).
Pretrain
Now let us see how we can make use of varied demonstrations to make GPPSTD more efficient. The way we pretrain the GP model M is exactly the same as training. For RL, the "test"-point distribution p(x) is the experience collected in the environment, which is determined by the agent's current knowledge (in our case, the value) and its exploration strategy. According to the analysis in Section 3.1, a training set sampled from p(x) gives the lowest expected uncertainty, which helps keep the GPPSTD algorithm from meaningless exploration, resulting in the efficiency bound of Section 3.2.

Intuitively, we can regard the varied-demonstration pretraining as a sketch overview of the Q value over the state-action space, and this sketch helps the RL agent explore smartly. Although we simply pretrain on the data with the training method, we note that it is extremely hard for the agent to obtain this sketch alone, since a large proportion of the space cannot be accessed by the RL agent itself for lack of systematic information, especially at the beginning of training.
Gaussian Process and Bayesian Neural Network
Now we are ready to discuss the general relationship between GPs and Bayesian neural networks, extending our ideas to BNNs. Neal (1996) showed that a neural network with suitable priors over its weights converges to a Gaussian process in the limit of infinite width. So, based on this earlier work, we can expect that our theory of efficient exploration and of making use of demonstrations in RL could extend to Bayesian deep networks. Related work was done by Kamyar Azizzadenesheli (2018), who proposed the Bayesian Deep Q-Network (BDQN), a practical Thompson-sampling-based RL algorithm using Bayesian regression to estimate the posterior over Q-functions; it has achieved impressive results but lacks theoretical analysis. We think this paper could provide a possible theoretical justification for BDQN, while making use of demonstrations remains future work.
Experiments
Our empirical experiment is done on the CartPole task, a classic control problem in OpenAI Gym (Brockman et al., 2016). The task is to push a cart left or right to balance a pole on the cart. At each timestep, the RL algorithm receives a 4-dimensional state, takes one of two actions (left or right), and receives a reward of 1 if the pole's deviation angle from the vertical line is within a range; if not, the episode ends. The maximum length of an episode is 200 steps, and we can view the steps after failure as having reward 0, thereby making it a fixed-length task.

Firstly, we compare the performance of the GPPSTD algorithm, GPTD using ε-greedy, and deep Q-learning using ε-greedy on CartPole in Fig. 2. We choose the squared exponential kernel k(x_i, x_j) = c × exp(−(1/2) d(x_i/l, x_j/l)²) for the GPPSTD and GPTD methods, with length scale l = [0.1, 0.02, 0.1, 0.02, 0.001] and variance c = 10. Since we regard the state-action pair as x in the GP, our length scale is a 5-dimensional vector. Because we believe there are no value correlations across actions, we give the action dimension a length scale of 0.001, which in turn makes k(x_i, x_j) ≈ 0 when the actions differ. Fig. 2 shows that GPPSTD significantly outperforms the other two algorithms, demonstrating that GPPSTD's exploration process is both efficient and robust, since the ε-greedy methods fluctuate considerably relative to GPPSTD. We also see that a GP may be a better model than a neural network for this task.
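The anisotropic kernel used here can be written out directly; note how the tiny length scale on the action coordinate drives cross-action correlations to numerically zero, as the text intends. A sketch with the stated hyperparameters:

```python
import numpy as np

def se_kernel(x_i, x_j, lengthscales, variance=10.0):
    """Squared-exponential kernel with per-dimension length scales."""
    d = (np.asarray(x_i) - np.asarray(x_j)) / np.asarray(lengthscales)
    return variance * np.exp(-0.5 * float(d @ d))

ls = [0.1, 0.02, 0.1, 0.02, 0.001]   # 4 state dims + 1 action dim (from the text)
x = np.zeros(5)
same_action = se_kernel([0.1, 0, 0, 0, 0], x, ls)   # ~6.07: correlated
diff_action = se_kernel([0.1, 0, 0, 0, 1], x, ls)   # ~0: actions decorrelated
print(same_action, diff_action)
```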
In the second experiment, we show that, when combined with demonstrations, GPPSTD achieves even better results. In the optimal-demonstration pretraining setting we use 10 episodes of optimal demonstration (200-score episodes), while in the varied-pretraining setting 5 episodes of optimal demonstration and 5 episodes of unsuccessful demonstration (scores between 10 and 60) are used for pretraining. As shown in Fig. 1, varied pretraining outperforms optimal pretraining and no pretraining. We notice that optimal pretraining suffers from fluctuating performance compared to varied pretraining, which confirms our belief: optimal demonstrations alone cannot provide the agent with information outside the optimal trajectory, which leads to higher variance of the estimates, whereas varied demonstrations give lower estimation variance during exploration and thus better regret, as in our analysis in Section 3.2. Moreover, as shown in Fig. 1, varied pretraining has the lowest action uncertainty (measured by posterior variance) at the beginning, reflecting our expected-uncertainty analysis in Section 3.1.
Conclusions
In this paper, we discuss how to make use of varied demonstrations to improve exploration efficiency in RL and give a statistical proof from the GP point of view. Equally important, we propose a new algorithm, GPPSTD, which implements a model-free method in continuous spaces with efficient exploration by posterior sampling under the GP hypothesis, and which also behaves impressively in practice. Both methods aim at efficient exploration in RL; more impressively, combining them can further improve efficiency from a Bayesian view. The properties of Gaussian processes have been discussed in order to extend these methods to neural networks, and we expect faster computation and even better results when applying our model-free posterior sampling methods to Bayesian neural networks.
Application of axiomatic formal theory to the Abraham--Minkowski controversy
We treat continuum electrodynamics as an axiomatic formal theory based on the macroscopic Maxwell--Minkowski equations applied to a thermodynamically closed system consisting of an antireflection-coated block of a simple linear dielectric material situated in free-space that is illuminated by a quasimonochromatic field. We prove that valid theorems of the formal theory of Maxwellian continuum electrodynamics are inconsistent with conservation laws for the inviscid incoherent flow of non-interacting particles (photons) in the continuum limit (light field) in the absence of external forces, pressures, or constraints. We also show that valid theorems of Maxwellian continuum electrodynamics are contradicted by the refractive index-independent Lorentz factor of von Laue's application of Einstein's special relativity to a dielectric medium. Obviously, the fundamental physical principles in the vacuum are not affected. However, the extant theoretical treatments of electrodynamics, special relativity, and energy--momentum conservation must be regarded as being mutually inconsistent in a simple linear dielectric in which the effective speed of light is $c/n$. Having proven that the established applications of fundamental physical principles to dielectric materials lead to mutually inconsistent theories, we derive, from first principles, a mutually consistent, alternative theoretical treatment of electrodynamics, special relativity, and energy--momentum conservation in an isotropic, homogeneous, linear dielectric-filled, flat, non-Minkowski, continuous material spacetime.
A. Physical Setting
The foundations of the energy-momentum tensor and the associated tensor-based spacetime conservation theory come to electrodynamics from classical continuum dynamics, where the divergence theorem is applied to a Taylor series expansion of the density field of an intrinsic property (e.g., mass, particle number) of an unimpeded, inviscid, incoherent flow of non-interacting particles (molecules, dust, etc.) in the continuum limit (a fluid or particle-number field, for example) in an otherwise empty volume [1]. Although the continuous formulation of conservation principles was originally the provenance of fluid mechanics (continuum dynamics), the energy and momentum conservation properties of light propagating in the vacuum were long ago cast in the energy-momentum tensor formalism, in which the light field (a flow of non-interacting photons in the continuum limit) plays the role of the continuous fluid [2]. However, extending the tensor-based theory of energy-momentum conservation of the continuous light field in the vacuum to propagation of the light field in a linear dielectric medium, also in the continuum limit, has proven to be persistently problematic, as exemplified by the more-than-century-old Abraham-Minkowski momentum controversy. The origin story of the Abraham-Minkowski controversy is that the Minkowski energy-momentum tensor is not diagonally symmetric. Motivated by the need to preserve the principle of conservation of angular momentum, Abraham symmetrized the energy-momentum tensor by an ad hoc redefinition of the linear momentum density as the Poynting energy flux vector divided by c². The issue of the lack of symmetry in the Minkowski energy-momentum tensor has since been overtaken by substantive problems with conservation of linear momentum for both the Minkowski and Abraham energy-momentum tensors. The modern resolution of the Abraham-Minkowski momentum controversy is to decide that the electromagnetic energy, the electromagnetic angular momentum, and the electromagnetic linear momentum are features of an electromagnetic subsystem; the addition of a phenomenological material subsystem completes the total system. Although this "modern" resolution [20,33] has been around for some 50 years, it still is not working out quite right.

A gradient-index antireflection-coated right rectangular block of a simple (absorption-negligible, isotropic, homogeneous, lowest-order dispersive) linear dielectric material, with refractive index n, situated in free space and illuminated at normal incidence by a finite quasimonochromatic field with center frequency ω_p, is a thermodynamically closed system that consists of the field, the dielectric material, and any other pertinent subsystems. Integrated over all space Σ, the Abraham and Minkowski (linear) momentums are not constant in time as a finite electromagnetic pulse enters the dielectric from the vacuum [6-10]. The fact that the electromagnetic momentum is not conserved means that a portion of the incident electromagnetic momentum is being transferred to some other subsystem, and the dielectric material is the only other identifiable component of the system. When the field is entirely within an antireflection-coated block of a homogeneous simple linear medium, the Abraham momentum is less than the incident (vacuum) momentum by a nominal factor of n. The Minkowski momentum is greater than the Abraham momentum by a factor of n² and is therefore greater than the incident (vacuum) momentum by a factor of n. The fact that neither the Abraham momentum nor the Minkowski momentum is globally conserved is well known in the scientific literature [6]. Nevertheless, many researchers [9,13] justifiably claim that the Minkowski (linear) momentum is conserved or "almost" conserved based on the four-divergence of the Minkowski energy-momentum tensor being negligible, which corresponds to a local conservation law [34], ∂_β T^{iβ} = 0, see Eq. (3.7). But a factor of n is not a perturbation, and any claim [9,13] that the Minkowski momentum (or the Minkowski tensor) is conserved or "almost" conserved is manifestly false based on the global conservation condition [2,34], P^i(t) = P^i(t_0), see Eq. (3.8). Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] present both facts from the extant literature in their comprehensive review article, showing that the Minkowski momentum is greater than the incident momentum by a factor of n in their Sec. 9A and noting that the Minkowski momentum is almost conserved in their Sec. 8C. This unremarked contradiction between local and global conservation laws [34] appears to be one of many factors that contribute to the extraordinary longevity of the Abraham-Minkowski controversy.
In times gone by, the necessity for a medium in which light waves propagate proved the existence of the ether, whose properties, including ether drag and partial ether drag, were determined from the observed characteristics of light propagation in the vacuum. Likewise, the necessity for global conservation of linear momentum in a thermodynamically closed field-plus-matter system proves the existence of a material momentum associated with the movement of matter in a microscopic picture of the dielectric substructure [7-10]. We note that such a microscopic subsystem is adverse to the continuum limit that is a foundational concept of continuum electrodynamics. Nevertheless, the assumption of identifiable electromagnetic and material component subsystem momentums is commonly made and assumed to be settled physics in the scientific literature [7-10]. However, the nature of the material subsystem is elusive and controversial.

While the physical origin and characteristics of the phenomenological material momentum have been debated for many decades, experimental demonstrations of a material momentum have been variously inconsistent, contradictory, and refuted [6,25-30]. Likewise, theoretical treatments of the Abraham-Minkowski controversy [7-24] have not provided a consistent physical solution. In 2007, Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] reviewed the state of the field and concluded that the issue had been settled for some time: "When the appropriate accompanying energy-momentum tensor for the material medium is also considered, experimental predictions of the various proposed tensors will always be the same, and the preferred form is therefore effectively a matter of personal choice." Three years later, Barnett [17] offered a more restrictive resolution of the Abraham-Minkowski debate by contending that the total momentum G_total for the medium and the field is composed of either the Minkowski canonical momentum G_M or the Abraham kinetic momentum G_A, supplemented by the appropriate canonical material momentum G_can or kinetic material momentum G_kin. Although the Barnett [17] and Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] theories are based on fundamental principles, Barnett's restriction to two simultaneous physically motivated electromagnetic momentums is contradicted by the mathematical tautology that underlies the analysis of Pfeifer et al., and vice versa.
In order to present a concrete example for discussion, we use the common Barnett [17] model for the field and matter components of the total energy-momentum tensor. The material momentums are constructed such that the total (electromagnetic plus material) linear momentum [10,17]

G_total = G_incident = G_em + G_material   (1.3a)

G_total = G_A + G_kin = G_M + G_can   (1.3b)

of a finite pulse of the continuous light field in a continuous simple linear dielectric material with no other consequential subsystems is constant in time, as required by global conservation [1,2,9,34] for a continuous fluid. Clearly, G_kin = (1 − 1/n) G_incident and G_can = (1 − n) G_incident when the electromagnetic pulse is fully within an antireflection-coated, isotropic, homogeneous, transparent linear dielectric medium. The material energy is constructed such that the total (electromagnetic plus material) energy is equal to the incident energy [9],

U_total = U_incident = U_em + U_material   (1.4a)

U_total = U_A + U_kin = U_M + U_can   (1.4b)

and is constant in time as the light pulse propagates from the vacuum into the antireflection-coated material. The total energy-momentum tensor is the sum of the Abraham energy-momentum tensor and a kinetic material energy-momentum tensor; it is likewise the sum of the Minkowski energy-momentum tensor and a canonical material energy-momentum tensor. That is [9],

T^{αβ}_total = T^{αβ}_incident = T^{αβ}_em + T^{αβ}_material   (1.5a)

T^{αβ}_total = T^{αβ}_A + T^{αβ}_kin = T^{αβ}_M + T^{αβ}_can.   (1.5b)

Obviously, Brevik's [31] admonitions against the application of conservation laws to subsystems do not apply to conservation of the total energy, the total linear momentum, the total angular momentum, or the total energy-momentum tensor of the complete and closed field-plus-matter system that is considered here and in Refs. [22-24]. According to the Scientific Method, a scientific hypothesis must result in a unique, testable prediction of physical quantities. There are many examples in the experimental record in which the interpretation of momentum experiments is unrestricted, with experiments that prove the Minkowski electromagnetic momentum later being shown to prove the Abraham electromagnetic momentum, and vice versa [6]. Likewise, experiments that disprove the Minkowski momentum are later re-analyzed to confirm the Minkowski momentum, and similarly for the Abraham momentum [6]. The non-uniqueness of the electromagnetic momentum and of the material momentum for light in a dielectric is contrary to Popper's criterion of falsifiability and constitutes a violation of the Scientific Method.
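The bookkeeping of Eqs. (1.3)-(1.5) is easy to make concrete. The sketch below (with purely illustrative numbers) evaluates the Abraham and Minkowski electromagnetic momenta of a pulse of incident energy U in a medium of index n, together with the kinetic and canonical material momenta quoted above, and checks that each pairing reproduces the incident momentum:

```python
c = 2.998e8  # speed of light, m/s

def momentum_bookkeeping(U, n):
    """Momentum ledger for a pulse fully inside an AR-coated dielectric block."""
    G_inc = U / c                     # incident (vacuum) momentum
    G_abraham = G_inc / n             # Abraham electromagnetic momentum
    G_minkowski = n * G_inc           # Minkowski electromagnetic momentum
    G_kin = (1.0 - 1.0 / n) * G_inc   # kinetic material momentum
    G_can = (1.0 - n) * G_inc         # canonical material momentum
    return G_inc, G_abraham + G_kin, G_minkowski + G_can

U, n = 1.0e-3, 1.5  # 1 mJ pulse, glass-like index (illustrative values)
G_inc, total_kinetic, total_canonical = momentum_bookkeeping(U, n)
print(G_inc, total_kinetic, total_canonical)
# all three are equal by construction: global conservation of the total
# momentum holds in either decomposition
```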
The prior work appears to present a very complex situation, because some experiments support the Abraham definition of momentum and other experiments support the Minkowski momentum formula. This might suggest that the Scientific Method needs to be malleable in order to accommodate the experimentally "proven" non-unique, incommensurate momentums. However, appeal to complexity is a fallacious application of the Scientific Method. The problem is that electromagnetic momentum is not measured directly; instead, the force due to optical pressure on a mirror inserted in a fluid dielectric is measured and related to the change in electromagnetic momentum [6,9,35]. Using the Abraham, Minkowski, or another momentum formula to relate the measured quantity (force) to the momentum creates a circular, self-proving theory; see Sec. 7.
We are accustomed to having well-designed experiments either prove or disprove scientific hypotheses. However, experiments cannot, in either principle or practice, provide a definitive resolution of the Abraham-Minkowski controversy [31] because the Scientific Method has been abrogated by the subsystem separation. In order to be theoretically definitive, the present work is based almost entirely on the unique, unseparated, globally conserved total (field plus matter plus other) momentum or the corresponding total momentum density of a thermodynamically closed system with appropriate system boundaries.
In a thermodynamically closed system, the total momentum, like the total energy, is a known quantity that is uniquely determined from the initial conditions by being constant in time. Then the total energy density and the total linear momentum density components of the total energy-momentum tensor are uniquely decided by time independence of the spatially integrated total energy and total momentum densities (global conservation law) [22][23][24]. The construction of the unique total (electromagnetic plus material plus other) energy-momentum tensor [22][23][24] from these total energy density and total momentum density components gets us congruent with the Scientific Method and definitively resolves the Abraham-Minkowski controversy. That is the generally accepted resolution of the Abraham-Minkowski controversy. Except that there are problems here, too. Applying the four-divergence operator to the unique total energymomentum tensor, one obtains spacetime conservation laws in the form of continuity equations for the total energy and the total linear momentum [9,[22][23][24]. It is easily argued, on physical grounds, that the conservation law for the total energy that is obtained by applying the four-divergence operator to the total energy-momentum tensor cannot be incommensurate with the Poynting theorem that is systematically derived from the Maxwell-Minkowski field equations for a simple linear dielectric. It is also easily proven mathematically that i) the energy conservation law, derived from the four-divergence of the total energy-momentum tensor, is self-inconsistent because its two non-zero terms depend on different powers of the refractive index and ii) the conservation law for the total energy that is derived from the four-divergence of the total energy-momentum tensor is undeniably incommensurate with the Poynting theorem [23]. The contradiction of sound physical arguments by mathematical reality seems to also contribute to the longevity of the dispute.
In summary, we have "fixed" the Abraham-Minkowski momentum conservation problem in the prescribed manner [9,17] only to have contradictions appear in a different form elsewhere. The current author [22][23][24] made the ansatz of a Ravndal [36] refractive index-dependent material four-divergence operator and demonstrated consistency between the field equations and total energymomentum conservation laws, at the cost of apparent problems with special relativity and the Fresnel relations. Moveable contradictions with multiple resolutions are characteristic of an inconsistent theoretical foundation with physically motivated ad hoc patches. A comprehensive approach to the basic theories and the relations between field theory, conservation laws, special relativity, spacetime, and boundary conditions for a quasimonochromatic field propagating in a simple linear di-electric medium is absolutely required.
B. Procedure
Classical continuum electrodynamics can be treated mathematically as a formal system in which the macroscopic Maxwell-Minkowski equations and the constitutive relations are the axioms. Theorems are derived from the axioms using algebra and calculus. In this article, we derive theorems of the formal theory of Maxwell-Minkowski continuum electrodynamics from explicitly stated axioms using substitutions of explicitly defined quantities.
Steps are kept small so that all readers should be quite satisfied that there are no implicit axioms and no manifest deficiencies in the derivations. Noting that spacetime conservation laws [1,2,9,34] (reviewed in Sec. 3) and the macroscopic Maxwell field equations for a linear dielectric medium are distinct laws of physics, we show that valid theorems of Maxwellian continuum electrodynamics are proven false by the spacetime conservation laws for an inviscid, incoherent flow of non-interacting particles (photons) in the continuum limit (light field) through the continuous dielectric medium, in the absence of external forces, pressures, constraints, or other impediments (unimpeded flow).
When a valid theorem of an axiomatic formal theory is proven false by an accepted standard then it is proven that one or more axioms of the formal theory are false. Alternatively, the standard with which the theorem is being compared is false or both the theory and the standard can be simultaneously false. Therefore, the extant theoretical treatments of spacetime energy-momentum conservation and Maxwellian continuum electrodynamics must be regarded as being mutually inconsistent in a region of space in which the effective speed of light is c/n.
Nevertheless, there is a generally accepted "fix" in which a physically motivated material momentum and a physically motivated material energy-momentum tensor are added to the rigorously derived Maxwellian electromagnetic quantities [9,17]. The total field-plus-matter linear momentum is constant in time so that the modified momentum and energy-momentum tensor are proven true by global conservation principles. Then, we prove that the physically motivated fix is false because the four-divergence of the total (electromagnetic plus material) energy-momentum tensor is self-inconsistent, violates Poynting's theorem, and violates spacetime conservation laws.
Obviously, we cannot attach any credence to a contradiction between fundamental physical laws (Maxwell's equations and spacetime conservation laws) because the contradiction would have been discovered by now; unless such a contradiction was found but not recognized. For over 50 years, global conservation principles have been used to justify a phenomenological material momentum that remediates the contradiction between the tensor energy-momentum continuity equation and the global conservation law. Assuming that fixing the problem results in a correct theory, most researchers have failed to notice that the corrected continuity equation now violates the local conservation law. The current author [23] noticed and further patched the energy-momentum theory to make the continuity equation consistent with both the local and the global conservation laws [34], but at the expense of a contradiction with Einstein-Laue special relativity in a dielectric. The patches to the theory simply move the contradictions around, as is characteristic of an underlying theory that is self-inconsistent.
Having proven the original version and the phenomenologically patched version of Maxwellian continuum electrodynamics to be manifestly false in a simple linear dielectric medium, we follow custom and propose an alternative theory. In the continuum limit, the electromagnetic field and the dielectric medium are continuous at all length scales. We define an isotropic, homogeneous, linear dielectric-filled, flat, non-Minkowski, continuous material spacetime S_d(x̄⁰, x, y, z) and we derive a new non-Maxwellian continuum electrodynamics from Lagrangian field theory in this non-Minkowski spacetime.
The basis functions of Maxwellian electrodynamics in Minkowski spacetime are [exp(−i((ω/c)x⁰ − (ω/c)k̂₀·r)) + c.c.]/2. Here, x⁰ = ct is the time-like coordinate of Minkowski spacetime and k̂₀ is a unit vector in the direction of propagation. In the non-Minkowski spacetime S_d(x̄⁰, x, y, z), the basis functions of propagating electromagnetic fields, [exp(−i((nω/c)x̄⁰ − (nω/c)k̂₀·r)) + c.c.]/2, are what we would expect for monochromatic, or quasimonochromatic, fields propagating at speed c/n in a linear dielectric. Consequently, there is a fundamental difference in approach between an organically continuum-based non-perturbative electrodynamics and the macroscopically averaged effects of microscopic fields interacting perturbatively with a collection of individual particles in the vacuum. Furthermore, the assumptions, approximations, limits, and averages that are implicit in the macroscopic model are not reversible. We cannot re-discretize or un-average a continuous dielectric with macroscopic refractive index n any more than we can ascertain the velocity of a particle in an ideal gas with temperature T.
We apply Lagrangian field theory in the isotropic, homogeneous, linear dielectric-filled, flat, non-Minkowski, continuous material spacetime S_d(x̄⁰, x, y, z) to systematically derive equations of motion for macroscopic fields in a simple linear dielectric. These equations of motion are the axioms of a new formal theory of continuum electrodynamics. Although the new continuum electrodynamics and Maxwellian continuum electrodynamics are disjoint, there is sufficient commonality between the new equations of motion and the macroscopic Maxwell-Minkowski field equations in a dielectric that the extensive theoretical and experimental work that is nearly correctly described by the macroscopic Maxwell-Minkowski equations has an equivalent formulation in the new theory. More interesting is the work that we can do with the new formalism of continuum electrodynamics that is improperly posed in the standard Maxwell theory of continuum electrodynamics. These cases will typically involve the invariance or tensor properties of the set of coupled equations of motion for the macroscopic fields. This interpretation is borne out in our common experience: the macroscopic Maxwell-Minkowski equations produce exceedingly accurate experimentally verified predictions of simple phenomena, but fail to render a unique, uncontroversial, verifiable prediction of energy-momentum conservation for electromagnetic fields propagating in a simple linear dielectric medium. An analogy can be made to Newtonian dynamics that accurately described all known dynamical phenomena until confronted with Lorentz length contraction and time dilation, the Michelson-Morley [37] experiment, and Einstein's relativity.
Spacetime conservation laws are distinct from the energy-like and momentum-like evolution theorems that are systematically derived from the macroscopic Maxwell-Minkowski field equations, although the evolution theorems are sometimes incorrectly referred to as the macroscopic electromagnetic energy and momentum conservation laws. In continuum dynamics, spacetime conservation laws are derived for the case of an unimpeded (no external forces, pressures, or constraints), inviscid, incoherent flow of non-interacting particles (dust, molecules of a fluid, etc.) in the continuum limit in an otherwise empty volume [1]. Applying the divergence theorem to a Taylor series expansion of a density field of a conserved property (mass, particle number, etc.) in an empty Minkowski spacetime results in a continuity equation (conservation law) for the conserved property [1]. We show that applying the same derivation procedure to a non-empty, linear dielectric-filled, isotropic, homogeneous, flat, non-Minkowski, continuous material spacetime S_d(x̄⁰, x, y, z) produces a continuity equation for a conserved property in a simple linear dielectric that requires differentiation with respect to the independent time-like variable x̄⁰ instead of x⁰.
We construct the diagonally symmetric, traceless, total energy-momentum tensor as an element of a valid theorem of the new formal theory of continuum electrodynamics. We show that the spatial integrals of the total energy density and the total momentum density are constant in time as the field propagates from the vacuum into the medium and that theorems of the new formal theory correspond to continuity equations for the conserved properties in a dielectric-filled spacetime. The tensor properties of the new formulation of continuum electrodynamics constitute a definitive resolution of the Abraham-Minkowski dilemma.
The unique total energy-momentum tensor that is derived by the new formal theory is entirely electromagnetic in nature. Consequently, there is no need to assume any "splitting" of the total energy, the total momentum, or the total angular momentum into field and matter subsystems in order to satisfy the spacetime conservation laws for a simple linear dielectric: i) In the case of quasimonochromatic optical radiation incident on a stationary homogeneous simple linear dielectric draped with a gradient-index antireflection coating, the surface forces are negligible. Then, the total energy, the total linear momentum, and the total angular momentum are purely electromagnetic and the dielectric remains internally and externally stationary. ii) In the absence of an antireflection coating, the dielectric block, as a whole, acquires (material) momentum due to the optically induced surface pressure that is associated with Fresnel reflection and this must be treated by boundary conditions, not by a hypothetical, unobservable, internal material momentum. The dielectric remains internally stationary.
In this work, Greek indices belong to {0, 1, 2, 3} and lower-case Roman indices from the middle of the alphabet belong to {1, 2, 3}.
Coordinates (x 1 , x 2 , x 3 ) correspond to (x, y, z), as usual. The Einstein summation convention in which repeated indices are summed over is employed.
II. MAXWELLIAN CONTINUUM ELECTRODYNAMICS
There are several representations of the macroscopic Maxwell equations. Alternative formulations that are associated with Ampère, Chu, Lorentz, Minkowski, Peierls, and others [8,11,18,21,23,33] are sometimes used to emphasize various features of classical electrodynamics in ponderable matter.
In order to present a familiar and concrete narrative, we start with the common Minkowski representation of the macroscopic Maxwell field equations [38-41],

∇ × E + (1/c) ∂B/∂t = 0  (2.1a)

∇ × H − (1/c) ∂D/∂t = J_f/c  (2.1b)

∇ · B = 0  (2.1c)

∇ · D = ρ_f.  (2.1d)

The macroscopic Minkowski fields, E(r, t), D(r, t), B(r, t), and H(r, t), are functions of position r and time t. Here, J_f(r, t) is the free charge current density and ρ_f(r, t) is the free charge density. Equations (2.1) are the axioms of Maxwell-Minkowski electrodynamics for macroscopic fields in matter.
The axioms, Eqs. (2.1), can be operated upon using standard algebra and calculus to derive theorems. For example, substituting the temporal derivative of the Gauss law, Eq. (2.1d), into the divergence of the Maxwell-Ampère law, Eq. (2.1b), we obtain the continuity equation for free charges. We take the scalar product of Eq. (2.1a) with H and the scalar product of Eq. (2.1b) with E and subtract the results to produce an energy continuity equation, the Poynting theorem, that is also a theorem of the formal theory of macroscopic Maxwell-Minkowski continuum electrodynamics.
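For concreteness, the endpoints of these two manipulations can be displayed. The following is a compact restatement using the reconstructed Eqs. (2.1) with time-independent linear constitutive relations; no original equation numbers are assumed for these intermediate results:

```latex
% Divergence of (2.1b) plus (1/c) times the time derivative of (2.1d):
% \nabla\cdot(\nabla\times\mathbf{H}) = 0 leaves the free-charge continuity equation
\frac{\partial\rho_f}{\partial t} + \nabla\cdot\mathbf{J}_f = 0 .
% H dotted into (2.1a) minus E dotted into (2.1b), with
% \nabla\cdot(\mathbf{E}\times\mathbf{H}) = \mathbf{H}\cdot(\nabla\times\mathbf{E})
%   - \mathbf{E}\cdot(\nabla\times\mathbf{H}), gives the Poynting theorem
\frac{1}{c}\frac{\partial}{\partial t}
\left(\frac{\mathbf{E}\cdot\mathbf{D}+\mathbf{B}\cdot\mathbf{H}}{2}\right)
 + \nabla\cdot(\mathbf{E}\times\mathbf{H})
 = -\frac{1}{c}\,\mathbf{E}\cdot\mathbf{J}_f .
```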
Adding the vector product of B with Eq. (2.1b), the vector product of D with Eq. (2.1a), the product of Eq. (2.1c) with −H, and the product of Eq. (2.1d) with −E, we obtain the momentum continuity equation

(1/c) ∂(D×B)_i/∂t − ∂_j [E_iD_j + H_iB_j − ½δ_ij(E·D + H·B)] = −(ρ_f E + (1/c) J_f×B)_i  (2.4)

that is also a valid theorem of Maxwellian continuum electrodynamics.
In the absence of charges and charge currents, the energy and momentum continuity equations reduce to

(1/c) ∂/∂t [(E·D + B·H)/2] + ∇·(E×H) = 0  (2.5a)

and

(1/c) ∂(D×B)_i/∂t − ∂_j [E_iD_j + H_iB_j − ½δ_ij(E·D + H·B)] = 0.  (2.5b)
In order to examine the importance of charges and charge currents to the existing momentum conservation theory, let us consider a different derivation of Eq. (2.5b). The momentum density p imparted to free charges can be calculated by postulating the Lorentz force density [38-41]

dp/dt = f_L = ρ_f E + (1/c) J_f × B  (2.6)

as a physical law. Now, eliminate the sources in favor of the fields using the Gauss law, Eq. (2.1d), to eliminate ρ_f and using the Maxwell-Ampère law, Eq. (2.1b), to eliminate J_f. Then the momentum density p imparted to the free charges can be calculated as [38-41]

dp/dt = E(∇·D) + [∇×H − (1/c) ∂D/∂t] × B.  (2.7)

Substituting the calculus identity

(1/c)(∂D/∂t) × B = (1/c) ∂(D×B)/∂t − (1/c) D × (∂B/∂t),  (2.8)

Faraday's law, and Gauss's law into Eq. (2.7) yields the continuity equation

(1/c) ∂(D×B)_i/∂t − ∂_j [E_iD_j + H_iB_j − ½δ_ij(E·D + H·B)] = −f_L,i.  (2.9)

This result is equal to Eq. (2.4) and is also a valid theorem of Maxwell-Minkowski electrodynamics. Dropping the free charges and free charge currents, one reproduces the momentum continuity equation for a neutral medium, Eq. (2.5b). Textbook derivations [38-41] of the electromagnetic momentum continuity equation typically begin with the Lorentz force law, as shown by the derivation, Eqs. (2.6)-(2.9). The derivation is simple and obvious. However, one starts with f_L = 0 in the absence of free charges and free charge currents. This might lead one to think that free charges and free charge currents are necessary to the theory. That opinion is clearly wrong: free charges and free charge currents are initial conditions, and setting them to zero does not invalidate the homogeneous theory. The momentum continuity equation can be derived for a neutral dielectric medium by either of the two procedures by setting ρ_f = 0 and J_f = 0 at any point in the derivations.
We can also substitute homogeneous Maxwell-Minkowski equations into the calculus identity, Eq. (2.8), and derive an identical result. Whatever course we chart, we can study the physics of a free charge-free and free charge current-free medium and add the free charges and free charge currents to the theory when they are part of the specific system being studied. We also note that the postulated Lorentz force density, Eq. (2.6), is derived in Eq. (2.4). At the moment, we should not get distracted by peripheral issues.
A quasimonochromatic field is an arbitrarily long, finite, constant amplitude, unchirped pulse (square, rectangular, or top-hat pulse) with a short (relative to the pulse length) smooth turn-on transition and a short smooth turn-off transition. In order to be concise and avoid an unnecessarily complicated presentation, we adopt the plane-wave limit in which the amplitude of the field is spatially constant over an arbitrarily large cross-sectional area of the propagating field and then smoothly decreases at least quadratically in the transverse spatial distance. The phase front is constant across the transverse cross-section of the propagating field. The plane-wave limit is a useful concept that allows us to treat the dynamics by a one-dimensional model as long as the well-known characteristics are applied consistent with the well-known limits; see, for instance, Sec. 7.1 of Ref. [40] or Chap. 16 of Ref. [41]. It should be noted that the plane-wave limit is distinct from the assumption of infinite plane waves that have infinite energy.
The permittivity and permeability of a linear medium are functions of the frequency of light as indicated by the Kramers-Kronig relations. If the center frequency ω p of the quasimonochromatic field is far from any material resonances then absorption can be treated as negligible and dispersion can be treated in the lowest order of approximation by using the permittivity ε(ω p ) and permeability µ(ω p ) that correspond to the center frequency of the quasimonochromatic field. Many authors treat dispersion in lowest order in addressing electromagnetic momentum issues [14] while other authors retain additional orders [7,10,11,16]. We consider the higher orders of dispersion to be negligible or perturbative in the parameter regime under consideration. If it is indeed necessary to retain higher orders of dispersion, we will still need the lowest-order theory in order to identify the source and magnitude of any differences. Consequently, it is not an error to correctly derive the lowest-order theory.
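Since the lowest-order treatment of dispersion is leaned on repeatedly below, it may help to display the approximation being invoked. The following is a sketch of the standard Taylor truncation about the carrier frequency, not a result taken from this paper:

```latex
\varepsilon(\omega) \;=\; \varepsilon(\omega_p)
 + \left.\frac{d\varepsilon}{d\omega}\right|_{\omega_p}\!(\omega-\omega_p) + \cdots
 \;\approx\; \varepsilon(\omega_p),
\qquad |\omega-\omega_p| \ll \omega_p ,
```

with the analogous truncation for μ(ω); the neglected derivative terms are the higher orders of dispersion that are treated as perturbative here.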
We consider a quasimonochromatic field with center frequency ω_p to be normally incident from the vacuum onto a finite block of a transparent isotropic homogeneous linear medium. The permittivity and permeability of the simple linear medium are characterized by real, time-independent, single-valued constants ε(ω_p) and μ(ω_p). Without loss of generality, we can treat the medium as being initially at rest with respect to the Laboratory Frame of Reference. With the medium at rest in the local frame of reference, the macroscopic electric and magnetic fields are related by the constitutive relations

D = ε(r, t₀) E,   B = μ(r, t₀) H,  (2.10)

where ε(r, t₀) is the electric permittivity and μ(r, t₀) is the magnetic permeability. Generally, we will treat the stationary block as a right rectangular block of finite size that is draped with a thin gradient-index antireflection coating, but is otherwise isotropic and homogeneous. The spatial variation of the material parameters, ε(r, t₀) and μ(r, t₀), typically consists of step functions (piecewise homogeneous block material) or Fermi distributions (piecewise homogeneous block material draped with a thin gradient-index antireflection coating). We adopt the plane-wave limit. Fig. 1 is a one-dimensional representation of the initial configuration of a quasimonochromatic field that is propagating toward a neutral (no free charges or free charge currents), gradient-index antireflection coated, arbitrarily long, stationary block of transparent, homogeneous, isotropic, linear material with refractive index n(r, t₀) = √(ε(r, t₀) μ(r, t₀)).
As the field enters the medium from the vacuum, the field imparts optically induced surface and volume forces to the material that act to accelerate the material, and/or portions of the material. The nature of the surface and volume forces (Fresnel, Lorentz, Helmholtz, Abraham, etc.), has been debated in the scientific literature for a very long time. However, it is the consequence of material motion on the optical characteristics of simple linear media that is important to discuss here.
Laue [42,43] applied the Einstein relativistic velocity sum rule to derive the speed of light in a transparent block of dielectric that is moving in the Laboratory Frame of Reference with velocity v. Laue's formula for the speed of light in the moving dielectric medium is

u = (c/n + v cos θ) / (1 + (v cos θ)/(nc)),  (2.11)

where n = √ε is the index of refraction in the rest frame and θ is the angle between the direction of light propagation and the direction in which the dielectric is moving [42]. Then n′ = c/u is the index of refraction in the moving frame. However, it takes an intense light field applied for a long time for a macroscopic material to be accelerated to relativistic speeds where the difference between n and n′ would be appreciable. Physically, Eqs. (2.10) are valid limiting cases and are usually very accurate. Ramos, Rubilar, and Obukhov [13] use conservation of the center of energy velocity and also conclude that taking the speed of light in the medium to be c/n "is an extremely accurate approximation indeed". Describing the theoretical viewpoint of physics, Rindler [44] states "a physical theory is an abstract mathematical model (much like Euclidian geometry) whose applications to the real world consist of correspondences between a subset of it and a subset of the real world". Experimentalists, developers, and other realists may disagree and want to include all potentially relevant aspects of the physical world. However, adding complexity introduces additional parameters that are not independently determinable, making it impossible to prove or disprove a particular model, e.g., the Abraham momentum or the Minkowski momentum.
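Expanding the reconstructed Eq. (2.11) to first order in v/c makes the claimed accuracy explicit; the result is the classic Fresnel drag coefficient:

```latex
u \;=\; \frac{c/n + v\cos\theta}{\,1 + \dfrac{v\cos\theta}{nc}\,}
 \;\approx\; \frac{c}{n} + \left(1-\frac{1}{n^2}\right) v\cos\theta
 + \mathcal{O}\!\left(\frac{v^2}{c^2}\right),
```

so the correction to c/n is first order in the material velocity and is negligible for the non-relativistic material motion considered in this work.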
By choosing to work in a regime in which higher-than-first-order dispersion is negligible and the motion of the medium is non-relativistic and also negligible, we have a rock-solid basis for our theory in terms of the constitutive relations, D = εE and B = μH.
At optical frequencies, the magnetic response is usually negligible and the large majority of work on the Abraham-Minkowski dilemma has been performed for dielectric media. In order to maintain contact with the prior work, we restrict ourselves to dielectric media and designate

D = n²E,   H = B  (2.14)

as axioms of our formal theory. These axioms are the same as the constitutive relations, Eqs. (2.10), but for the simpler case of a dielectric medium, with n = √ε and μ = 1, rather than the more general magneto-dielectric medium.
Using the axioms, Eqs. (2.14), to eliminate D in favor of E and H in favor of B, the energy continuity equation, Eq. (2.5a), can be written as

(1/c) ∂/∂t [(n²E² + B²)/2] + ∇·(E×B) = 0.  (2.15)

This equation has been derived as a mathematical identity of the macroscopic Maxwell equations for the specific case of a simple linear dielectric medium. We denote the macroscopic electromagnetic energy density u_e = (n²E² + B²)/2. The vector identity

[(∇×F)×F]_i = ∂_j (F_iF_j − ½δ_ij F²) − F_i(∇·F)

is derived by expanding the vector operators (see Sec. 6.8 of Ref. [40]) for a simple linear dielectric.
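A quick consistency check of the reconstructed Eq. (2.15) can be performed with a transverse pulse traveling in the +z direction, E = E(z − (c/n)t) x̂ and B = nE ŷ; the traveling-wave form is an assumption of the check, not of the theorem:

```latex
u_e = \frac{n^2E^2+B^2}{2} = n^2E^2, \qquad
\frac{1}{c}\frac{\partial u_e}{\partial t} = -\,n\,\frac{\partial E^2}{\partial z}, \qquad
\nabla\cdot(\mathbf{E}\times\mathbf{B}) = \frac{\partial}{\partial z}(nE^2)
 = n\,\frac{\partial E^2}{\partial z},
```

so the two terms cancel identically, as they must for a valid theorem of the field equations.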
Applied to the momentum density, the identity allows construction of another valid theorem of Maxwellian continuum electrodynamics [40],

(1/c) ∂(n² E×B)_i/∂t − ∂_j W_ij = f_M,i,   W_ij = n²E_iE_j + B_iB_j − ½δ_ij(n²E² + B²),  (2.24)

for a non-relativistic simple linear dielectric medium. The Minkowski force density is derived directly from the Maxwell-Minkowski equations for a dielectric. The Minkowski force density reduces to f_M = −(E²/2)∇n² in the plane-wave limit. As a matter of linear algebra, we can write, row-wise, the energy continuity equation, Eq. (2.15), and the three scalar differential equations that comprise the vector momentum continuity equation, Eq. (2.24), as a differential equation

∂_β T_M^{αβ} = f_M^α,  (2.27)

where ∂_β is the usual four-divergence operator defined by ∂_β = (∂/∂(ct), ∂/∂x, ∂/∂y, ∂/∂z),

f_M^α = (0, −(E²/2)∇n²)  (2.29)

is the Minkowski four-force density, and

T_M^{00} = (n²E² + B²)/2,   T_M^{0i} = (E×B)_i,   T_M^{i0} = n²(E×B)_i,   T_M^{ij} = −W_ij.  (2.30)

Equation (2.27) is a valid theorem of the formal theory of Maxwellian continuum electrodynamics for a simple linear dielectric medium in the non-relativistic limit. Obviously, the intent is to identify the four-by-four matrix, Eq. (2.30), with the Minkowski energy-momentum tensor.
Next, we use the formal theory of Maxwellian continuum electrodynamics to rigorously derive the Abraham energy-momentum theory [4] that was contemporaneous with the Minkowski theory [3]. We subtract a force density-like term

f_A = ((n² − 1)/c) ∂(E×B)/∂t  (2.31)

from both sides of Eq. (2.24) to obtain

(1/c) ∂(E×B)_i/∂t − ∂_j W_ij = f_M,i − f_A,i.  (2.32)

We combine, row-wise, the energy continuity equation, Eq. (2.15), with the three orthogonal components of the momentum continuity equation, Eq. (2.32), to obtain a new differential equation

∂_β T_A^{αβ} = f_M^α − f_A^α  (2.33)

that is also a valid theorem of the formal theory of Maxwellian continuum electrodynamics, where

T_A^{00} = (n²E² + B²)/2,   T_A^{0i} = T_A^{i0} = (E×B)_i,   T_A^{ij} = −W_ij  (2.34)

is a traceless diagonally symmetric four-by-four matrix and

f_A^α = (0, f_A)  (2.35)

is the Abraham four-force density. For historical reasons, the four-by-four matrix, Eq. (2.34), is known as the Abraham energy-momentum tensor. It is often claimed in the scientific literature that the Abraham four-force density, Eq. (2.35), is negligible or "almost" negligible because the time average of f_A, Eq. (2.31), is essentially zero due to the oscillating field [25,31,45-47]. See also page 205 of Ref. [21]. However, the force density-like term, Eq. (2.31), cannot "fluctuate out" because that would mean that the first term in Eq. (2.32) is also negligible; obviously, this term is necessary for electromagnetic fields to propagate through the medium.
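The algebra behind the subtraction that converts Eq. (2.24) into Eq. (2.32) can be made explicit; the following uses the reconstructed Eqs. (2.24) and (2.31) for a region of constant n:

```latex
\frac{1}{c}\frac{\partial}{\partial t}\big(n^2\,\mathbf{E}\times\mathbf{B}\big)
 - \underbrace{\frac{n^2-1}{c}\frac{\partial}{\partial t}\big(\mathbf{E}\times\mathbf{B}\big)}_{\mathbf{f}_A}
 \;=\; \frac{1}{c}\frac{\partial}{\partial t}\big(\mathbf{E}\times\mathbf{B}\big),
```

so the Minkowski momentum density n²(E×B)/c is exchanged for the Abraham momentum density (E×B)/c at the cost of moving the full-strength term f_A to the force side of the theorem, which is why f_A cannot simply be averaged away.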
In this section, the usual, well-known momentum continuity equations, Eqs. (2.27) and (2.33), have been derived from the macroscopic Maxwell-Minkowski equations. The axioms and conditions have been explicitly stated and the steps of the derivation have been kept small and explicitly documented in order to forestall any arguments about the validity of the derivation and results.
III. SPACETIME CONSERVATION LAWS
The fundamental physical principles of conservation of mass, conservation of linear momentum, conservation of angular momentum, and conservation of total (kinetic+potential) energy were well-established long before Maxwell and Laue. In continuum dynamics (fluid dynamics, for example) a continuity equation reflects the conservation of a scalar property of an unimpeded (no external forces, pressures, or constraints), inviscid, incoherent flow of non-interacting particles (dust, fluid, etc.) in the continuum limit in terms of the equality of the net rate of flux out of the otherwise empty volume and the time rate of change of the property density field [1]. For a conserved scalar property, the continuity equation of the generic property density ρ with velocity field u,

∂ρ/∂t + ∇·(ρu) = 0,  (3.1)

is derived by applying the divergence theorem to a Taylor series expansion of the property density field ρ and the scalar components of the property flux density field ρu for unimpeded non-relativistic flow of non-interacting particles in an otherwise empty volume [1]. For unimpeded flow of mass-bearing particles in a thermodynamically closed system, we have a conserved scalar property, the total mass, ∫_Σ ρ_m dv, that is obtained by integrating the mass density ρ_m over the total volume Σ. The corresponding continuity equation is

∂ρ_m/∂t + ∇·(ρ_m u) = 0.  (3.3)

We have a conserved vector quantity, the total momentum, ∫_Σ ρ_m u dv, belonging to the same thermodynamically closed system. The vector momentum continuity equation can be written in component form as

∂(ρ_m u_i)/∂t + ∂_j (ρ_m u_i u_j) = 0.  (3.4)

As a matter of linear algebra, we can write, row-wise, Eq. (3.3) and the three scalar differential equations that comprise the vector differential equation, Eq. (3.4), as a single differential equation

∂_β T^{αβ} = 0,  (3.5)

where

T^{00} = ρ_m c²,   T^{0i} = T^{i0} = ρ_m c u_i,   T^{ij} = ρ_m u_i u_j  (3.6)

is, by construction, a diagonally symmetric four-by-four matrix. The differential equation, Eq. (3.5), is a valid theorem of the formal theory of continuum dynamics (not continuum electrodynamics). Obviously, the intent is to identify the matrix, Eq. (3.6), with the dust energy-momentum tensor. The matrix, Eq. (3.6), has the following characteristics of flow through an otherwise empty volume for an unimpeded (no external forces, pressures, or constraints), inviscid, incoherent flow of non-interacting particles in the continuum limit [1,2,9]:

1) Continuity equations (local conservation laws) are generated by the four-divergence of the matrix [1,34],

∂_β T^{αβ} = 0.  (3.7)

2) The spatially integrated density

∫_Σ T^{α0} dv  (3.8)

is constant in time for each α.
3) The trace T^{αα} is proportional to the mass density ρ_m:

T^{αα} = T^{00} − Σ_i T^{ii} = ρ_m (c² − u²) ≈ ρ_m c².  (3.9)

4) The matrix is diagonally symmetric,

T^{αβ} = T^{βα}.  (3.10)

Symmetry putatively corresponds to conservation of angular momentum [14], although symmetry is not considered to be an absolute requirement for angular momentum conservation [2].
With the advent of relativity, conservation of mass became conservation of relativistic mass-energy E = (p·p c² + m²c⁴)^{1/2}, and these physical principles are known as the spacetime conservation laws and are properties of Minkowski spacetime. Mass-energy is simply the most well-known of the conserved properties: the discussion in this section applies, equally well, for the conservation of number/quantity and for the conservation of any intrinsic property of identical non-interacting particles in an unimpeded, inviscid, incoherent flow through empty space.
Although the material that is presented in this section is well-known, some experts have questioned the application of the "particle" conservation laws to electrodynamics. However, the fundamental basis of the conservation laws is Minkowski spacetime, not particle dynamics. Application of the spacetime conservation laws to the continuum limit of the flow of photons is demonstrated in the next section.
A. Vacuum
The energy and momentum conservation properties of a continuous light field propagating in the vacuum were long-ago cast in the energy-momentum tensor formalism of classical particle dynamics in the continuum limit in which the continuous light field plays the role of the continuous fluid [2]. Because the conservation properties of light in a dielectric remain contentious, we should reproduce the vacuum theory so that we can agree on the terminology, procedures, and principles.
The Maxwell equations for electromagnetic fields in the vacuum are

∇ × e + (1/c) ∂b/∂t = 0  (4.1a)

∇ × b − (1/c) ∂e/∂t = 0  (4.1b)

∇ · b = 0  (4.1c)

∇ · e = 0  (4.1d)

in terms of the microscopic electric and magnetic fields, e(r, t) and b(r, t). The microscopic Maxwell equations, Eqs. (4.1), can be systematically combined (like in Sec. 2) to form a scalar energy continuity equation and the components of a vector momentum continuity equation [40],

(1/c) ∂/∂t [(e² + b²)/2] + ∇·(e×b) = 0  (4.2)

(1/c) ∂(e×b)_i/∂t − ∂_j σ_ij = 0,   σ_ij = e_ie_j + b_ib_j − ½δ_ij(e² + b²).  (4.3)

These energy and momentum time-evolution equations can be combined, row-wise, to construct a differential equation

∂_β T^{αβ} = 0,  (4.5)

where

T^{00} = (e² + b²)/2,   T^{0i} = T^{i0} = (e×b)_i,   T^{ij} = −σ_ij  (4.6)

is the energy-momentum tensor for the electromagnetic field in free space. 1) By construction, the continuity equation, Eq. (4.5), is a valid theorem of Eqs. (4.1) that expresses local conservation of energy and momentum [34].
2) The Laue theorem [34] defines the conditions under which a local distribution of energy and momentum can be used to construct globally conserved quantities. We take the temporal constancy of ∫_Σ T^{α0} dv for each α as an operational condition for global conservation of energy and momentum for quasimonochromatic fields.
3) The matrix, Eq. (4.6), is traceless, corresponding to massless photons. 4) The matrix is diagonally symmetric, corresponding to conservation of angular momentum, although symmetry is not considered to be an expressly rigid requirement for angular momentum conservation [2]. 5) The angular momentum continuity condition, ∂_γ (x^α T^{βγ} − x^β T^{αγ}) = 0, is derived from Eq. (4.5) if the matrix, Eq. (4.6), is symmetric.
The amplitude and duration of the fields are not affected by propagation through the vacuum in the plane-wave limit, ensuring global conservation of electromagnetic energy and momentum. Clearly, the particle description of energy-momentum conservation can be applied to the light field in the plane-wave limit as the unimpeded, inviscid, incoherent flow of massless photons in the continuum limit through an otherwise empty volume.
B. Dielectric
Microscopically, a dielectric consists of tiny polarizable particles and host material embedded in the vacuum. In continuum electrodynamics, the properties of the medium are averaged and the material is continuous at all length scales. This is a second and distinct meaning of the word "continuum" in continuum electrodynamics because the light field is the continuum limit of the flow of photons in the sense of fluid mechanics or continuum dynamics.
The material is modeled as an arbitrarily large continuous isotropic homogeneous block of transparent linear dielectric that is draped with a gradient-index antireflection coating. In the limit that the gradient of the refractive index can be neglected, the Minkowski continuity equation, Eq. (2.27), becomes

∂_β T_M^{αβ} = 0,  (4.11)

which is the putative condition for local conservation of energy and linear momentum [34]. Consequently, it is frequently claimed in the scientific literature that the Minkowski linear momentum and the Minkowski energy-momentum tensor are (globally) conserved or nearly conserved [9,13]. The gradient nature of the Minkowski four-force density, Eq. (2.29), definitely supports that assertion. At this point, we would like to make it emphatically clear that this assertion is false.
Propagation of the electromagnetic field in a neutral transparent dielectric is given by the wave equation in the quasistationary limit [8],

(n²/c²) ∂²A/∂t² − ∇²A = 0,  (4.12)

where A is the vector potential and

E = −(1/c) ∂A/∂t  (4.13)

B = ∇×A.  (4.14)

For quasimonochromatic fields, we define the slowly varying amplitude of the electric field E₀(r, t) and the slowly varying amplitude of the magnetic field B₀(r, t) by E = E₀(exp(−i(ωt − k·r)) + c.c.) and B = B₀(exp(−i(ωt − k·r)) + c.c.).
Also, A = A₀(exp(−i(ωt − k·r)) + c.c.). Figure 1 is a graphical representation of the slowly varying amplitude of a quasimonochromatic field (plane-wave limit) in the vacuum traveling to the right at some time t₀ before entering a dielectric medium. The representation is in terms of the envelope of the vector potential, |A₀(t₀, z)|. A finite-difference time-domain solution of the wave equation in retarded time [48] allows us to illustrate the same field at some time t₁ after it has entered a linear isotropic homogeneous dielectric through a gradient-index antireflection coating, Fig. 2.
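The finite-difference calculation can be reproduced with a short script. The following is a minimal sketch, not the solver of Ref. [48]; the normalized units, grid resolution, pulse parameters, and the tanh index ramp standing in for the gradient-index antireflection coating are all illustrative assumptions:

```python
import numpy as np

# Minimal 1D leapfrog FDTD sketch for the scalar wave equation
#   d^2A/dt^2 = (c/n(z))^2 d^2A/dz^2
# with a quasimonochromatic pulse crossing a graded index step.
c = 1.0                      # speed of light (normalized units)
n_med = 1.5                  # refractive index of the dielectric
L, N = 400.0, 8000           # domain length and number of grid points
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]
dt = 0.5 * dz / c            # CFL-stable time step

# tanh index ramp centered at z = 200 (stand-in for the coating)
n = 1.0 + 0.5 * (n_med - 1.0) * (1.0 + np.tanh((z - 200.0) / 5.0))

# quasimonochromatic pulse: carrier times a smooth envelope, starting in vacuum
k0, z0, w = 2.0, 80.0, 30.0
def pulse(x):
    return np.exp(-((x - z0) / w) ** 2) * np.cos(k0 * x)

A_prev = pulse(z + c * dt)   # rightward-moving field: A(z, -dt) = f(z + c*dt)
A = pulse(z)
coef = (c * dt / (n * dz)) ** 2

for step in range(int(260.0 / (c * dt))):     # run until the pulse is inside the medium
    A_next = np.empty_like(A)
    A_next[1:-1] = (2 * A[1:-1] - A_prev[1:-1]
                    + coef[1:-1] * (A[2:] - 2 * A[1:-1] + A[:-2]))
    A_next[0] = A_next[-1] = 0.0              # crude fixed ends (pulse stays interior)
    A_prev, A = A, A_next

# Diagnostics: intensity-weighted rms width and peak amplitude inside the medium
inside = z > 220.0
w2 = A[inside] ** 2
zc = np.sum(z[inside] * w2) / np.sum(w2)
rms = np.sqrt(np.sum((z[inside] - zc) ** 2 * w2) / np.sum(w2))
print(f"rms width in medium: {rms:.2f} (vacuum rms ~ {w/2:.2f}; expect ~1/n compression)")
print(f"peak |A| in medium: {np.abs(A[inside]).max():.3f} (incident ~1; expect ~1/sqrt(n))")
```

With these parameters the printed diagnostics should show the pulse envelope compressed by roughly 1/n and the peak vector potential reduced by roughly 1/√n, anticipating the analysis below.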
According to the Maxwellian continuum electrodynamic theory, the nominal width of the pulse in the dielectric is the width of the incident pulse reduced by a factor of n due to the reduced speed of light in a dielectric. The numerical solution of the wave equation confirms this theoretical fact.
The Minkowski electromagnetic energy formula is

U_M = ∫_Σ (n²E² + B²)/2 dv.  (4.15)

Substituting the relations between the fields and vector potential into the energy formula, one obtains

U_M = 2φ (n²ω²/c²) ∫ |A₀|² dz,  (4.16)

where φ is the cross-sectional area of the field and comparisons have a per-unit-of-cross-sectional-area basis. This result is confirmed by numerical integration of the incident and refracted fields shown in Figs. 1 and 2. Using the fact that the width of the pulse is narrower in the medium by a factor of n in Eq. (4.16), one finds, by energy conservation, that the amplitude of the vector potential in the dielectric is smaller than the incident vector potential amplitude by a factor of √n. This is confirmed in the numerical solution of the wave equation by an examination of Figs. 1 and 2. Both theoretically and numerically it is shown that the amplitude of the vector potential in the dielectric is the amplitude of the incident vector potential divided by √n. Applying this result to Eqs. (4.13) and (4.14), one finds that the amplitude of the electric field in the dielectric, |E₀|_diel, is a factor of √n smaller than the amplitude of the electric field that is incident from the vacuum, |E₀|_inc. Meanwhile, the amplitude of the magnetic field in the dielectric is increased by a factor of √n from the amplitude of the incident field. For a quasimonochromatic field in the quasistationary limit,

|E₀|_diel = |E₀|_inc/√n,   |B₀|_diel = √n |B₀|_inc.  (4.17)

Applying the relations between the incident fields and the fields in the dielectric, Eqs. (4.17), we find that E×B has a constant amplitude across the antireflection-coated entry face of the dielectric. Then, E×B is multiplied by n²/c to get the Minkowski momentum density. The pulse is narrower in the dielectric than in the vacuum by a factor of n due to the reduced velocity of light in the dielectric. Substituting the relations between the incident fields and the fields in the dielectric into the Minkowski electromagnetic momentum formula

G_M = (n²/c) ∫_Σ E×B dv  (4.18)

and comparing the results with Eqs. (4.15) and (4.16), we find that the Minkowski electromagnetic energy is constant in time but the Minkowski electromagnetic momentum is greater than the incident momentum by a factor of n, strongly violating the global conservation condition, Eq. (3.8). It could not be otherwise because Eqs. (4.15) and (4.18) have the same quadratic dependence on the fields but differ by a factor of n in magnitude. Consequently, G_M(t₁) is very different from the incident momentum and T_M^{αβ} is not an approximation of the total energy-momentum tensor, contrary to the conservation implied by Eq. (4.11) and contrary to statements in the scientific literature [9,13].
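The bookkeeping behind the factor of n can be compressed into one line of arithmetic. Using the reconstructed Eqs. (4.15)-(4.18), cycle-averaged densities, an incident pulse of width w and amplitude |E₀|, and the width w/n in the medium:

```latex
U_M^{\mathrm{diel}}
 = \phi\,\frac{w}{n}\left(n^2|E_0|_{\mathrm{diel}}^2+|B_0|_{\mathrm{diel}}^2\right)
 = \phi\,\frac{w}{n}\cdot 2n|E_0|^2 = U_M^{\mathrm{inc}},
\qquad
G_M^{\mathrm{diel}}
 = \phi\,\frac{w}{n}\cdot\frac{n^2}{c}\,\langle|\mathbf{E}\times\mathbf{B}|\rangle
 = \phi\,\frac{w}{n}\cdot\frac{2n^2|E_0|^2}{c}
 = n\,G^{\mathrm{inc}} ,
```

with U_M^inc = 2φw|E₀|² and G^inc = U_M^inc/c.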
We have shown that energy-momentum relations that are systematically derived from the field equations using Maxwellian continuum electrodynamics are inconsistent with spacetime conservation laws if the gradient force is negligible as in Eq. (4.11). In order to avoid that fate, it has been the practice to make a physically motivated assumption that the macroscopic Maxwell-Minkowski equations describe an electromagnetic subsystem that is coupled to a material subsystem. We now examine this alternative, generally accepted as correct case [5-23], and show that it also leads to strong violation of spacetime conservation laws. According to the current Abraham-Minkowski resolution theory [5-23], the dynamics of the material sub-system are based on a material four-tensor T_matl^{αβ} such that

∂_β T_EM^{αβ} = f^α  (4.19)

∂_β T_matl^{αβ} = −f^α,  (4.20)

where f^α is the four-force density that couples the electromagnetic and material subsystems. We add Eqs. (4.19) and (4.20). Then the total energy-momentum tensor

T_total^{αβ} = T_EM^{αβ} + T_matl^{αβ}  (4.21)

obeys the local conservation law [34]

∂_β T_total^{αβ} = 0  (4.22)

in accordance with the spacetime conservation law Eq. (3.7). A wide variety of physical models have been employed in an effort to fully resolve the problem of momentum conservation in a dielectric [5-23]. Selected examples are discussed in the next subsection. Typically, one assumes a microscopic model of the material dynamics in a dielectric and applies an averaging technique to derive the macroscopic momentum of the material. The correctness of the results is assumed to be affirmed by the fundamental nature of the physical laws that are used as the basis of the analysis. Adding the electromagnetic and material tensors, one obtains the total energy-momentum tensor for the thermodynamically closed system, Eq. (4.21). The total linear momentum

G_total = ∫_Σ g_total dv  (4.23)

and the total energy

U_total = ∫_Σ u_total dv  (4.24)

are known quantities that can be related to the energy and momentum of the incident field in the vacuum because they are required to be constant in time by global conservation laws in the complete and closed system (unless one assumes inappropriate system boundaries). Using the corresponding total energy and total momentum densities, u_total = (n²E² + B²)/2 and g_total = (n/c) E×B, to populate the total energy-momentum tensor, we write [23]

T_total^{00} = (n²E² + B²)/2,   T_total^{0i} = T_total^{i0} = n(E×B)_i,  (4.25)

with the spatial components populated by the corresponding stress densities. The energy continuity equation

(1/c) ∂/∂t [(n²E² + B²)/2] + ∇·(n E×B) = 0  (4.26)

that is obtained for α = 0 is manifestly false because the two non-zero terms depend on different powers of n in addition to being incommensurate with the Poynting theorem. This result is based on the total (electromagnetic plus material) energy-momentum tensor, Eq. (4.25), and is therefore independent of the particular electromagnetic representation, Abraham, Minkowski, etc., that is used. Then, the macroscopic Maxwell field equations and the spacetime conservation laws are laws of physics that are proven to be contradictory in the case of a thermodynamically closed system consisting of an electromagnetic subsystem and a dielectric material subsystem [31]. Clearly, the prescribed method to resolve the Abraham-Minkowski momentum dilemma produces only another contradiction. It may be argued that pure induction without experimental support is not a method of theoretical physics.
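The failure asserted above can be exhibited with the same traveling-pulse substitution used earlier. With E = E(z − (c/n)t) x̂ and B = nE ŷ, the two non-zero terms of the reconstructed Eq. (4.26) scale with different powers of n:

```latex
\frac{1}{c}\frac{\partial}{\partial t}\,\frac{n^2E^2+B^2}{2} = -\,n\,\frac{\partial E^2}{\partial z},
\qquad
\nabla\cdot\big(n\,\mathbf{E}\times\mathbf{B}\big) = n^2\,\frac{\partial E^2}{\partial z}
\;\;\Longrightarrow\;\;
\text{sum} = n(n-1)\,\frac{\partial E^2}{\partial z} \neq 0 ,
```

whereas the Poynting theorem, Eq. (2.15), carries ∇·(E×B) = n ∂E²/∂z in the second slot and the two terms cancel identically.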
In our case, the energy-momentum evolution equations, Eqs. (2.27) and (2.33), are derived by formal theory directly from the laws of Maxwell-Minkowski continuum electrodynamics. Then the Minkowski momentum was shown to strongly violate global conservation laws. When these theorems are "fixed" by the addition of a physically motivated, but hypothetical, material energymomentum tensor as shown in Eq. (4.21), the contrived total energy-momentum tensor, Eq. (4.25), leads to violation of other conditions of the spacetime conservation laws as shown by Eq. (4.26).
Proof by mathematical contradiction is far stronger than an experimental demonstration. One might recall that the 1887 Michelson-Morley experiment [37] was initially interpreted to prove the existence of ether drag and was later deemed to support the absence of ether in the Einstein relativity theory. Likewise, the experiments that were originally viewed as support for the Abraham energy-momentum theory or the Minkowski theory, or both, will be shown in Sec. 7 to provide experimental justification for the new theory that is derived in Sec. 5.
C. Brief Survey of Prior Work
The century-long history of the Abraham-Minkowski controversy [3-23] is a search for some provable description of momentum and momentum conservation for electromagnetic fields in dielectric media. A wide variety of physical principles have been applied to establish the priority of one type of momentum over another, or to establish that the Abraham and Minkowski formulations are equally valid. The modern resolution of the Abraham-Minkowski momentum controversy is to adopt a scientific conformity in which the Minkowski momentum and the Abraham momentum are both correct forms of electromagnetic momentum with the understanding that neither is the total momentum [9,10,12,17]. Either the Minkowski momentum or the Abraham momentum can be used as the momentum of the electromagnetic field as long as that momentum is accompanied by the appropriate material momentum [9]. The material momentum is specific to a particular material and we will consider several well-known models that have appeared in the scientific literature in order to circumscribe the area of difficulty.
In a quasi-microscopic approach, the material momentum is often modeled as the aggregated kinematic momentum of individual particles of matter in the continuum limit. The total energy-momentum tensor is the sum of the electromagnetic energy-momentum tensor and the material energy-momentum tensor. In one example, Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] posit that the total energy-momentum tensor is the sum of the Abraham energy-momentum tensor, Eq. (2.34), and the dust energy-momentum tensor

T_dust^{αβ} = ρ₀ u^α u^β.  (4.27)

Here, ρ₀ is a constant mass density, v(r, t) is a velocity field, and u^α is the four-velocity (γ, γv). The dust tensor, Eq. (4.27), is usually applied to a thermodynamically closed system consisting of non-interacting, neutral, mass-bearing particles in an inviscid, incoherent, unimpeded flow such that

∂_β T_dust^{αβ} = 0  (4.28)

in the continuum limit. In the current context, however, the total tensor energy-momentum continuity equation is posited as [9]

∂_β T_total^{αβ} = ∂_β (T_A^{αβ} + T_dust^{αβ}) = 0.  (4.29)

Clearly, it is intended that the dust tensor is coupled to the Abraham electromagnetic tensor through the Abraham force density such that [30]

∂_β T_dust^{αβ} = −f_A^α.  (4.30)

Evaluating the α = 0 element of Eq. (4.29) gives the total energy continuity equation

∂/∂(ct) [ρ₀c² + ½(n²E² + B²)] + ∇·(ρ₀v) + ∇·(E×B) = 0.  (4.31)

Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] then use global conservation of momentum arguments to phenomenologically relate the material momentum density to the electromagnetic momentum density with the ansatz

ρ₀v = (n − 1) E×B.  (4.32)

The total energy and total momentum are both quadratic in the fields and must have the same dependence on the refractive index n. Substituting the ansatz into Eq. (4.31) gives

(1/c) ∂/∂t [ρ₀c² + ½(n²E² + B²)] + ∇·(n E×B) = 0.  (4.33)

We note that if the particle density ρ₀ is constant then Eq. (4.33) reproduces Eq. (4.26) and is manifestly false because the two non-zero terms would depend on different powers of the refractive index and because the equation would be incommensurate with Poynting's theorem. Although Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9] do not propose a time-dependent model for the particle density ρ₀, the two non-zero terms of the energy continuity equation will be incommensurate unless Eq. (4.33) becomes

(n/c) ∂/∂t [ρ₀c² + ½(n²E² + B²)] + ∇·(n E×B) = 0.  (4.34)

The corresponding tensor continuity equation for the total energy and the total momentum is false because the presence of the index-dependent material four-divergence operator [23,36] violates the conservation condition, Eq. (4.22), for α = 0. This result shows that the total energy and the total momentum being constant in time does not guarantee that the evolution equations for the total energy and the total momentum satisfy the local conservation law; in fact, just the opposite.
In an influential 1973 article, Gordon [20] uses a microscopic model of the dielectric in terms of electric dipoles. Assuming a dilute vapor in which the dipoles do not interact with each other or their host, Gordon writes the microscopic Lorentz dipole force on a particle with linear polarizability α as [7,20,33]

f_atom = α [(e·∇)e + (de/dt) × b]  (4.37)

in the vacuum, where e is the microscopic electric field and b is the microscopic magnetic field. The material momentum density is obtained by spatially averaging the force on a single dipole and integrating with respect to time. Then the material momentum density is

g_matl = N ∫ ⟨f_atom⟩ dt,  (4.38)

where N is the dipole density. The fields acting on the dipoles inside a dielectric are not the same as the fields in free space. For the purpose of presenting the prior work, we suffer, without proof, that the material momentum density is [20]

g_matl = ((n − 1)/c) E×B.  (4.39)

Gordon assumes that the total momentum density is the sum of the Abraham momentum density and the material momentum density. Making a transformation to retarded time [48], Gordon [20] derives

g_G = (n/c) E×B  (4.40)

for the total momentum density and then assumes a pseudo-momentum in order to force agreement with the Minkowski form of momentum. In the Gordon model, and similar models, the dipoles are free particles in the vacuum that are accelerated by the Lorentz dipole force at the leading edge of the quasimonochromatic field and travel at constant velocity until decelerated by the Lorentz dipole force at the trailing edge of the field. In a real dielectric, or a more complete theoretical model of a dielectric, the motion of the material dipoles will be considerably impeded by collisions, lattice strains, or other effects of the host material. Consequently, it is assumed that a traveling deformation of the material, rather than the unrestrained motion of dipoles, will contribute the requisite material momentum [9,20]. The Gordon linear momentum

G_G = (n/c) ∫_Σ E×B dv,  (4.41)

that is obtained by spatially integrating the Gordon momentum density, and the Minkowski momentum, Eq. (4.18), that is obtained by adding a hypothetical pseudo-momentum, conclude the derivation. Comparing Eq. (4.41) to Eq. (4.15), the Gordon momentum G_G is constant in time in the case of propagation of a quasimonochromatic field through a gradient-index antireflection-coated simple linear dielectric [20,22,23]. Then, we now have a plausible model for the material momentum.
There are several problems with the derivation presented in Ref. [20] in addition to the assumptions that are described above: i) In Eq. (4.37), the force density α(e·∇)e has been improperly retained because several other terms of the same order have been dropped in the dipole approximation [49]. Moreover, this small term is divided into two large terms of nearly equal magnitude and opposite sign and one of these terms is eliminated. ii) Temporal independence of the total linear momentum is only one of the four conditions of the energy-momentum conservation laws, Eqs. (3.7)-(3.10). iii) There is a factor of 2 error in the susceptibility used by Gordon. In the corrected version of the Gordon derivation, Milonni and Boyd [7] prove that the sum of the electromagnetic and material momentums is the Minkowski momentum, Eq. (1.2), which is not constant in time. Barnett and Loudon [10] and Barnett [17] present a model in which the Abraham momentum and the Minkowski momentum are both appropriate momentums for the field in a dielectric. Each of the classical electromagnetic momentums is accompanied by a material momentum, different in each case, and identified with either a canonical or kinetic phenomenology. The material momentum densities, g_matl^canonical and g_matl^kinetic, are defined implicitly by global conservation of total momentum, such that

G_total = ∫_Σ (g_A + g_matl^kinetic) dv = ∫_Σ (g_M + g_matl^canonical) dv,  (4.42)

where G_total = G_incident for a gradient-index antireflection coated simple linear dielectric block. Although providing a descriptive model for construction of the total linear momentum, the total linear momentum, G_total, is unique in a thermodynamically closed system because it is constant in time and it is a known quantity in terms of macroscopic fields, Eq. (4.23). As in the previous example, temporal independence of the total linear momentum is only one of the four conditions of the spacetime conservation laws, Eqs. (3.7)-(3.10). If we use either of the canonical or kinetic models of Eq. (4.42) for the total linear momentum, then the total energy-momentum tensor will be the same as Eq. (4.25). Applying the local conservation condition, Eq. (4.22), the four-divergence of the total energy-momentum tensor will produce a demonstrably false energy continuity equation, just as before, Eq. (4.26). Again, the two non-zero terms in Eq. (4.26) depend on different powers of the refractive index and Eq. (4.26) is incommensurate with the Poynting theorem. Ramos, Rubilar, and Obukhov [13] utilize a fully relativistic 4-dimensional tensor formalism to discuss the energy-momentum of a system that consists of an antireflection-coated rigid slab of dielectric with a final constant velocity v. Their total energy-momentum tensor is

T_total^{μν} = T_A^{μν} + ρ₀ u^μ u^ν,  (4.43)

where u^μ is the four-velocity (γ, γv). Ramos, Rubilar, and Obukhov [13] claim that the total energy-momentum tensor, Eq. (4.43), satisfies the energy-momentum balance equation

∂_μ T_total^{μν} = (1/c) F^{νλ} J_λ^ext,  (4.44)

that the energy-momentum tensor of the complete system is conserved, and that the system is thermodynamically closed if the four-current density J^ν_ext is zero. Then the total four-momentum of the whole system is globally conserved and is a time-independent quantity [13]. In order to test the validity of the total energy-momentum tensor, Eq. (4.43), we consider a quasimonochromatic field in the plane-wave limit to be normally incident on a simple linear dielectric through a gradient-index antireflection coating. Evaluating the ν = 0 element of Eq.
(4.44), we obtain

∂/∂(ct) [ρ₀c² + ½(n²E² + B²)] + ∇·(ρ₀v) + ∇·(E×B) = 0  (4.46)

by substitution from the total energy-momentum tensor, Eq. (4.43), using quantities from the usual antisymmetric electromagnetic field tensor F^{μν}. Now, Eq. (4.46) is the same as Eq. (4.31) that was derived previously using the model of Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9]. Substituting Eq. (4.32) into Eq. (4.46) and taking ρ₀ as constant in time [13], we write

(1/c) ∂/∂t [(n²E² + B²)/2] + ∇·(n E×B) = 0.  (4.48)

As before, Eq. (4.48) i) is incommensurate with the Poynting theorem and ii) is self-inconsistent because the two non-zero terms depend on different powers of the refractive index, thereby disproving the energy-momentum balance equation, Eq. (4.44), with J^ν_ext = 0. A fully microscopic model of the interaction of light with ponderable matter is unique, valid, and beyond our capabilities. The examples above are representative of the many diverse quasi-microscopic treatments of the Abraham-Minkowski controversy and there is no unique quasi-microscopic model [9]. There are many ways to couple and average the quasi-microscopic material properties with the electromagnetic properties that are systematically derived from the macroscopic Maxwell-Minkowski equations. The correctness of the procedure is rooted in the fundamental basis of the model and the derivation. In the end, the total linear momentum is the sum of the electromagnetic momentum and the material momentum. However, the spacetime conservation laws are not satisfied. We provided specific examples, using the models of Pfeifer, Nieminen, Heckenberg, and Rubinsztein-Dunlop [9], of Barnett [17], and of Ramos, Rubilar, and Obukhov [13], where the continuity equation for the total energy is proven to be false. More importantly, these are general results as shown in Sec. 4B. Any construction of the total energy-momentum tensor must be based on energy and momentum densities corresponding to the time-independent total energy, Eq. (4.24), and the time-independent total momentum, Eq. (4.23). Then the four-divergence of the total energy-momentum tensor that is constructed using Maxwellian continuum electrodynamics will always result in a provably false energy continuity equation, even if a phenomenological material energy-momentum tensor is assumed.
V. LAGRANGIAN FIELD DYNAMICS IN A DIELECTRIC-FILLED SPACETIME
At the fundamental microscopic level, dielectrics consist of tiny bits of host and polarizable matter, embedded in the vacuum, with interactions of various types. According to Lorentz, the seat of the electromagnetic field is empty space. If a light pulse is emitted from a point (x_a, y_a, z_a) at time t_a then spherical wavefronts are defined by

(x − x_a)² + (y − y_a)² + (z − z_a)² − c²(t − t_a)² = 0  (5.1)

in a flat four-dimensional Minkowski spacetime S_v(ct, x, y, z). Equation (5.1) underlies classical electrodynamics and its relationship to special relativity. Although light always travels at speed c [50], Eq. (5.1) is only valid at very short range before the light is scattered by the various microscopic features of the dielectric. While the microscopic picture is always valid, there are practical difficulties in treating all of the interactions as light traverses a dielectric.
In continuum electrodynamics, the dielectric is treated as continuous at all length scales and the macroscopic refractive index n is defined such that light travels with an effective speed of c/n. In an arbitrarily large simple linear dielectric medium with an isotropic homogeneous index of refraction n, spherical wavefronts from a point source at (x_a, y_a, z_a) and emitted at time t_a are defined by

(x − x_a)² + (y − y_a)² + (z − z_a)² − (c/n)²(t − t_a)² = 0.  (5.2)

At this point, we postulate Eq. (5.2), instead of Eq. (5.1), as the basis of a theory of continuum electrodynamics and derive the consequences for field theory, classical continuum electrodynamics, special relativity, spacetime, and experiments. We consider an arbitrarily large region of space to be filled with a simple linear isotropic homogeneous dielectric that is characterized by a linear refractive index n. For clarity and concision, we will work in a regime in which dispersion can be treated parametrically and is otherwise negligible such that n(ω_p) is a real time-independent constant for a transparent dielectric that is illuminated by a quasimonochromatic field of center frequency ω_p, as described in Sec. 2. We define an inertial reference frame S(x, y, z) with orthogonal axes, x, y, and z, and require that the origin of the reference frame is significantly inside the volume that is defined by the surface of the dielectric medium. We denote a time-like coordinate in the medium as x̄⁰ = ct/n. If a light pulse is emitted from the origin at time t_a = 0, then spherical wavefronts are defined by

x² + y² + z² − (x̄⁰)² = 0.  (5.3)

The basis functions, exp(−i(nω/c)(x̄⁰ − k̂₀·r)), define the null surface, x̄⁰ = k̂₀·r. Fig. 3 is a depiction of the intersection of the light cone with the x−x̄⁰ plane in the flat material spacetime S_d showing the null x̄⁰ = x. There will be a different material spacetime for each value of the refractive index, but the half-opening angle of the material light cone will always be α = π/4 in the corresponding material spacetime. The unit slope of the null in the x−x̄⁰ plane of the non-Minkowski material spacetime is related to the coordinate speed of light in a simple linear dielectric by

dx/dx̄⁰ = 1   ⟹   dx/dt = c/n.  (5.4)

This equation shows that the effective speed of light in a simple linear dielectric medium is attributable to renormalization of the time-like coordinate by n.
For a system of particles, the transformation of the position vector x_i of the i-th particle to J independent generalized coordinates q_j is

x_i = x_i(q_1, q_2, ..., q_J, τ),  (5.5)

where τ = t/n is the material time. Applying the chain rule, we obtain the virtual displacement

δx_i = Σ_j (∂x_i/∂q_j) δq_j  (5.6)

and the velocity

v_i = dx_i/dτ = Σ_j (∂x_i/∂q_j)(dq_j/dτ) + ∂x_i/∂τ  (5.7)

of the i-th particle in the new coordinate system. Substitution of Eq. (5.7) produces the identities

∂v_i/∂(dq_j/dτ) = ∂x_i/∂q_j,   (d/dτ)(∂x_i/∂q_j) = ∂v_i/∂q_j.  (5.11)

For a system of particles in equilibrium, the virtual work of the applied forces f_i vanishes and the virtual work on each particle vanishes, leading to the principle of virtual work

Σ_i f_i · δx_i = 0  (5.12)

and D'Alembert's principle

Σ_i (f_i − dp_i/dτ) · δx_i = 0.  (5.13)

Using Eqs. (5.7) and (5.11) and the kinetic energy T_i = ½ m_i (dx_i/dτ)² of the i-th particle, we can write D'Alembert's principle, Eq. (5.13), as

Σ_j [ (d/dτ)(∂T/∂(dq_j/dτ)) − ∂T/∂q_j − Q_j ] δq_j = 0  (5.15)

in terms of the generalized forces

Q_j = Σ_i f_i · ∂x_i/∂q_j.  (5.16)

If the generalized forces come from a generalized scalar potential function V [51], then we can write Lagrange equations of motion

(d/dτ)(∂L/∂(dq_j/dτ)) − ∂L/∂q_j = 0,  (5.17)

where L = T − V is the Lagrangian. The canonical momentum is therefore

p_j = ∂L/∂(dq_j/dτ)  (5.18)

in a linear medium. Comparable derivations for the vacuum case, τ → t, appear in Goldstein [51] and Marion [38], for example. This version of canonical momentum differs from the existing vacuum formula because the material time τ appears instead of the vacuum time t. The field theory [52] is based on a generalization of the discrete case in which the dynamics are derived from a Lagrangian density L. The generalization of the Lagrange equation, Eq. (5.17), for fields ψ in a linear medium is

(d/dx̄⁰)(∂L/∂(∂ψ/∂x̄⁰)) + Σ_i (d/dx^i)(∂L/∂(∂ψ/∂x^i)) − ∂L/∂ψ = 0.  (5.19)

This equation differs from the Lagrange equation for fields in the vacuum [52],

(d/dx⁰)(∂L/∂(∂ψ/∂x⁰)) + Σ_i (d/dx^i)(∂L/∂(∂ψ/∂x^i)) − ∂L/∂ψ = 0,  (5.20)

in that differentiation is performed with respect to the material time-like coordinate x̄⁰ instead of the vacuum coordinate x⁰. We take the Lagrangian density of the electromagnetic field in the medium to be

L = ½ [(∂A/∂x̄⁰)² − (∇×A)²].  (5.21)

Again, differentiation is performed with respect to the material time-like coordinate x̄⁰ instead of the vacuum coordinate x⁰. Furthermore, the Lagrangian density is explicitly quadratic in the macroscopic fields, corresponding to real eigenvalues and a conservative system. Equations (5.19) and (5.21) form the basis for a new canonical theory of macroscopic fields in a simple linear dielectric. The new theory has similarities in appearance to the macroscopic Maxwell equations, but it is disjoint from the Maxwell theory because it is based in a flat non-Minkowski material spacetime S_d(x̄⁰, x, y, z) instead of a vacuum Minkowski spacetime S_v(x⁰, x, y, z). Constructing the components of Eq. (5.19) with the vector potential A as the field, we have the variant Maxwell-Ampère law

∇×B − ∂(nE)/∂x̄⁰ = 0.  (5.29)

Here, Π is the canonical momentum field density whose components are Π_k = ∂L/∂(∂A_k/∂x̄⁰) = ∂A_k/∂x̄⁰ = −nE_k, and B = ∇×A. The definitions of the fields in terms of the vector potential produce

∇·B = 0  (5.30)

and a Faraday-like law

n∇×E + ∂B/∂x̄⁰ = −(∇n)×E.  (5.31)

The divergence of the variant Maxwell-Ampère law, Eq. (5.29), is

∂[∇·(nE)]/∂x̄⁰ = 0.  (5.32)

Integrating Eq. (5.32) with respect to the time-like coordinate yields a modified version of Gauss's law

n∇·E = −c₁ − E·∇n,  (5.33)

where −c₁ is a constant of integration. Based on the derivation of these equations, it is required that the source terms in Eqs. (5.31) and (5.33) that involve the gradient of the refractive index are, at most, perturbative, essentially limiting the theory to an isotropic homogeneous block of simple linear dielectric draped with a gradient-index antireflection coating or a piecewise homogeneous simple linear dielectric. We have not included free charges and a free-charge current because it is an unnecessary complication and because an inviscid incoherent flow of non-interacting charges in the continuum limit moving unimpeded through a continuous dielectric cannot be justified at the level of rigor that we are employing in the current work. This completes the set of first-order equations of motion for the macroscopic fields, Eqs.
(5.29)-(5.31) and (5.33). Consolidating the equations of motion and dropping the inhomogeneous source terms, we have the equations of motion for macroscopic electromagnetic fields in an isotropic homogeneous simple linear dielectric,

∇×B − ∂(nE)/∂x̄⁰ = 0  (5.34a)

∇×(nE) + ∂B/∂x̄⁰ = 0  (5.34b)

∇·(nE) = 0  (5.34c)

∇·B = 0,  (5.34d)

derived from field theory for quasimonochromatic electromagnetic fields in a linear dielectric-filled, flat, non-Minkowski continuous material spacetime.
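The isomorphism invoked in the next paragraph can be checked by direct substitution of x̄⁰ = ct/n into the reconstructed Eqs. (5.34), for constant n:

```latex
\nabla\times\mathbf{B}-\frac{\partial (n\mathbf{E})}{\partial \bar{x}^0}
 = \nabla\times\mathbf{B}-\frac{1}{c}\frac{\partial (n^2\mathbf{E})}{\partial t}
 = \nabla\times\mathbf{H}-\frac{1}{c}\frac{\partial \mathbf{D}}{\partial t},
\qquad
\nabla\times(n\mathbf{E})+\frac{\partial \mathbf{B}}{\partial \bar{x}^0}
 = n\left(\nabla\times\mathbf{E}+\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}\right),
```

so each equation of the new set is proportional, term by term, to the corresponding homogeneous Maxwell-Minkowski equation with D = n²E and H = B, even though the two sets sit in different spacetimes and have different invariance properties.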
Readers of this article may note that the macroscopic field equations, Eqs. (5.34), obviously violate special relativity because x̄⁰ is index-dependent. It is also obvious that Eqs. (5.34) cannot possibly violate special relativity because their form is isomorphic to an easily verified identity of the macroscopic Maxwell-Minkowski equations, Eqs. (2.1), which are known to satisfy special relativity. Readers should not unequivocally, uncritically, and untenably advocate one side or the other of this contradiction based solely on the appearance of Eqs. (5.34). It is shown in Ref. [54] that Eqs. (5.34) comply with Rosen dielectric special relativity. The applicability of the Fresnel relations to the system described by Eqs. (5.34) has been questioned and treated in Ref. [56]. Other issues like free charges, free charge currents, experimental verification, dispersion, plane-wave limit, material motion, etc. were addressed as they came up in the description.
We can never place a matter-based observer, no matter how small, in a continuous dielectric because the model dielectric is continuous at all length scales and will always be displaced. Consequently, the necessity to make non-optical measurements in a vacuum leads to the establishment of a Laboratory Frame of Reference. An observer that resides in the vacuum by virtue of displacing the terrestrial atmosphere outside of the dielectric, such as Fizeau [53], will measure the speed of light in a dielectric to be dependent on the velocity of the dielectric relative to the vacuum-based Laboratory Frame of Reference, Eq. (2.11), an effect that Fresnel attributed to ether drag.
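To make the Fizeau observation concrete, here is a small numeric sketch (the water parameters are illustrative and not taken from the source) comparing the exact Einstein velocity composition with the first-order Fresnel drag formula u ≈ c/n + v(1 − 1/n²), which is assumed here to be the content of the velocity-composition result referenced at Eq. (2.11).

```python
# Numeric sketch (illustrative values, not from the source): the lab-frame
# speed of light in a moving dielectric from the Einstein velocity sum
# rule, compared with the first-order Fresnel drag approximation.
c = 299_792_458.0  # m/s

def lab_speed(n, v):
    """Exact relativistic composition of c/n with the medium velocity v."""
    return (c / n + v) / (1.0 + v / (n * c))

def fresnel_drag(n, v):
    """First-order 'ether drag' approximation, c/n + v*(1 - 1/n**2)."""
    return c / n + v * (1.0 - 1.0 / n**2)

n, v = 1.33, 5.0  # water moving at 5 m/s, roughly the Fizeau configuration
print(lab_speed(n, v) - c / n)     # ~2.17 m/s of apparent drag
print(fresnel_drag(n, v) - c / n)  # first-order value, nearly identical
```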
In deriving the theory of dielectric special relativity, Laue considered a block of dielectric moving inertially in a Laboratory Frame of Reference and used the Einstein velocity sum rule to confirm Fresnel drag. That physical configuration is not correct for the special relativity that underlies the equations of motion for electromagnetic fields in a dielectric, Eqs. (5.34). The current author [54] derived a theory of dielectric special relativity for inertial reference frames translating at constant speed in an arbitrarily large region of space in which the speed of light is c/n in the local rest frame. This rigorous derivation of special relativity in an appropriate physical configuration confirms Rosen's [55] phenomenological derivation of an index-dependent theory of special relativity in a dielectric. The speed of light at the location of the observer in the dielectric is obviously independent of the motion of the dielectric; otherwise there would be an inaccessible preferred Laboratory Frame of Reference. The situation is different in the Laue theory; there, the vacuum-based Laboratory Frame of Reference is established first, and the motion of the dielectric in the preferred Laboratory Frame of Reference can be specified and measured.
Rosen [55] noted that there will be a different theory of relativity associated with a limiting speed in each material. In the rest frame of the material, the speed of light c_d will be different in different dielectric materials, and we can label different materials with the index i. Considering only isotropic, homogeneous linear dielectric materials in which the speed of light is inversely proportional to a real constant n_i, we obtain

γ_i = 1/√(1 − n_i²v²/c²)

as our material-specific Lorentz factor [54,55]. As discussed above, the configuration of the physical system that we are treating is different from the system that was employed by Laue [42,43]. In this article, the theory of quasimonochromatic radiation interacting with a simple linear dielectric has been discussed primarily in terms of an arbitrarily large isotropic homogeneous medium or a block of an isotropic, homogeneous, linear dielectric material draped with a gradient-index anti-reflection coating. At some point, we will be required to deal with the boundary conditions of piecewise homogeneous linear dielectric materials. Reflection and refraction are experimentally uncomplicated, and it would be unpleasant if the usual Fresnel formulas failed to work for the new theory. On the other hand, it is apparent that the usual derivation of the Fresnel relations [38][39][40][41] by application of Stokes' theorem and the divergence theorem to the Maxwell-Minkowski equations will not work when applied to the new field equations, Eqs. (5.34). Boundary conditions and the Fresnel relations are rigorously derived by conservation of energy and the application of Stokes' theorem to the wave equation in a separate publication [56].
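A short numeric sketch of the material-specific Lorentz factor reconstructed above; the values of n_i and v are illustrative only. The factor reduces to the familiar vacuum expression for n_i = 1 and diverges as v approaches the limiting speed c/n_i.

```python
# Sketch of the material-specific Lorentz factor,
# gamma_i = 1/sqrt(1 - n_i**2 v**2 / c**2); values are illustrative.
import math

c = 299_792_458.0

def material_gamma(n_i, v):
    beta_d = n_i * v / c  # speed measured against the limiting speed c/n_i
    if beta_d >= 1.0:
        raise ValueError("v must be below the limiting speed c/n_i")
    return 1.0 / math.sqrt(1.0 - beta_d**2)

for n_i in (1.0, 1.5, 2.0):
    print(n_i, material_gamma(n_i, 0.4 * c))  # ~1.091, 1.25, 1.667
```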
We cannot rigorously relate the ordinary macroscopic Maxwell-Minkowski field equations, Eqs. (2.1), to the results of our derivation, Eqs. (5.34). In the current formalism, based on Lagrangian field theory adapted to a region of space in which the speed of light is c/n, the usual macroscopic Maxwell fields E and D are not definable in terms of Π because the refractive index is contained in the independent coordinate and is not a free material parameter. As disclosed by Rosen [55], there is a different theory of relativity associated with each isotropic homogeneous medium in which a limiting speed is associated with the phenomena that take place in the medium. Likewise, there is a different theory of electrodynamics for each linear medium, labelled i, with refractive index n_i, and the different theories correspond to disjoint isotropic, homogeneous, flat, non-Minkowski material spacetimes [55]. The second-order wave equation

∇ × (∇ × A) + ∂²A/∂(x̄⁰)² = 0

is derived by substituting the definitions of the macroscopic fields, Eqs. (5.27) and (5.28), into the Maxwell-Ampère-like law, Eq. (5.34a). Therefore, we can be assured that the extensive theoretical and experimental work that is "correctly" described by the macroscopic Maxwell theory of continuum electrodynamics has an equivalent, or nearly equivalent, expression in the new theory. Nevertheless, we must be very careful about integrating established concepts and formulas of Maxwellian electrodynamics into the new version of continuum electrodynamics.
More interesting is the work that we can do with the new formalism of continuum electrodynamics on problems that were improperly posed in the standard Maxwell theory of continuum electrodynamics. These cases will typically involve the invariance or tensor properties of the set of coupled equations of motion. This interpretation is borne out in our common experience: the macroscopic Maxwell-Minkowski equations produce exceedingly accurate, experimentally verified predictions of simple phenomena like reflection, refraction, Fresnel relations, wave propagation, etc., but fail to render a unique, uncontroversial, experimentally verifiable prediction of energy-momentum conservation. In the next section, we will demonstrate the utility of the new formalism of continuum electrodynamics by addressing energy-momentum conservation in a dielectric.
VI. CONSERVATION LAWS AND {Π, B} ELECTRODYNAMICS
In Sec. 3, the continuity equation of a property flux density was derived by applying the divergence theorem to a Taylor-series expansion of the property density field ρ and the property flux density field g = ρu for a continuous flow in an otherwise empty volume [1]. Here, we are treating the continuous (continuum-limit) flow of photons (light field) in an arbitrarily large, isotropic, homogeneous, simple linear dielectric that is modeled as a region of space in which the speed of light is c/n. Therefore, the conditions on the flow differ from the vacuum conditions that are assumed for the usual spacetime conservation laws that were discussed in Sec. 3. Microscopically, dielectrics are mostly empty space. But, in the continuum limit, dielectrics are continuous at all length scales, and the light field cannot be treated as if it is flowing in an otherwise empty volume.
It is necessary to modify the spacetime conservation laws that were presented in Sec. 3 for the flow of a photon fluid (light field) in a non-empty volume. As shown in Eq. (5.6), the generalized temporal coordinate in a dielectric-filled volume is τ. Then a continuity equation has the form

∂ρ/∂τ + ∇ · (ρu) = 0 (6.1)

in an arbitrarily large region of space that is filled with an isotropic, homogeneous, simple linear dielectric material. We can compare the conservation laws in a dielectric-filled spacetime with the vacuum conservation laws, Eqs. (3.7)-(3.10), in an empty volume. 1) Continuity equations in a dielectric have the form of Eq. (6.1), instead of Eq. (3.1). Writing a scalar continuity equation for energy and a scalar continuity equation for each of the three components of the momentum, row-wise, we obtain the differential equation

∂̄_β T^{αβ} = 0 (6.2)

instead of Eq. (3.3) as a condition for conservation of energy and momentum for a continuous unimpeded flow in a dielectric-filled spacetime. The material four-divergence operator

∂̄_β = (∂/∂x̄⁰, ∂/∂x, ∂/∂y, ∂/∂z) (6.3)

replaces Eq. (2.29) because τ, not t, is the independent temporal coordinate.
2) The total energy and the total linear momentum are constant in material time τ for each α (global conservation),

(d/dτ) ∫_Σ T^{α0} dv = 0. (6.4)
3) The trace of the total energy-momentum tensor is proportional to the mass density ρ_m,

T^α_α ∝ ρ_m, (6.5)

and is zero for light. 4) If the total energy-momentum tensor of the incident field is diagonally symmetric, then the total energy-momentum tensor inside the dielectric medium is most likely diagonally symmetric as a matter of conservation of total angular momentum, absent pathological boundary conditions, subsystem separation, or other inappropriate system definitions. 5) The extra continuity equation for the total energy and the total momentum in a thermodynamically closed system is obtained by substituting the symmetry condition, Eq. (6.6), into the continuity condition, Eq. (6.2). For each different medium, there is a different material four-divergence operator, Eq. (6.3), and a different material four-continuity equation, Eq. (6.2), due to the dependence of the time-like coordinate on the refractive index n.
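The modified continuity condition can be illustrated numerically. The following minimal sketch, with illustrative discretization choices not taken from the source, advects a one-dimensional density pulse at u = c/n and confirms that the integrated density is conserved under Eq. (6.1).

```python
# Minimal numeric sketch of Eq. (6.1): a 1-D density pulse advected at
# u = c/n conserves its integrated density under
# d(rho)/d(tau) + d(rho*u)/dx = 0. The upwind scheme and grid are
# illustrative choices.
import numpy as np

n = 1.5
c = 1.0          # natural units
u = c / n        # flow speed in the dielectric
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]
dtau = 0.4 * dx / u  # CFL-stable step in the material time tau

rho = np.exp(-((x - 2.0) / 0.5) ** 2)  # initial pulse
total0 = rho.sum() * dx

for _ in range(2000):
    flux = rho * u
    rho[1:] -= dtau * (flux[1:] - flux[:-1]) / dx  # first-order upwind
print(abs(rho.sum() * dx - total0))  # ~0, up to tiny boundary leakage
```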
We can demonstrate consistency of the new formulation of continuum electrodynamics with the conservation laws inside a dielectric-filled spacetime. The equations of motion for the macroscopic fields, Eqs. (5.29) and (5.31), can be combined in the usual manner, using algebra and calculus, to write an energy continuity equation, Eq. (6.8), in terms of an electromagnetic energy density

u_e = (Π² + B²)/2 (6.9)

and an electromagnetic momentum density

g = (B × Π)/c, (6.10)

and a momentum continuity equation, Eq. (6.12), in terms of the stress tensor W_T and a (gradient) force density. Then the continuity equations, Eqs. (6.8) and (6.12), can be written, row-wise, as a differential equation, Eq. (6.15), with the total energy-momentum tensor, Eq. (6.16), on the left-hand side and the four-force density on the right-hand side. We integrate Eq. (6.10) over all-space Σ to obtain the electromagnetic momentum, Eq. (6.18); the electromagnetic energy, Eq. (6.19), is obtained by integrating Eq. (6.9). We can apply a gradient-index antireflection coating to an isotropic homogeneous simple linear dielectric in order to greatly suppress reflections. Analysis of the wave equation for a quasimonochromatic pulse entering an antireflection-coated simple linear dielectric from the vacuum shows that the amplitude of the vector potential is reduced by √n and the width is reduced by n, Sec. 4. Then the definitions of the macroscopic canonical field Π and the macroscopic magnetic field B, Eqs. (5.27) and (5.28), show that the macroscopic fields in the dielectric are each greater than the incident vacuum fields by a factor of √n, compensating for the reduced width of the field in the dielectric. Neglecting the small gradients, the electromagnetic momentum, Eq. (6.18), and the electromagnetic energy, Eq. (6.19), are conserved. Then the electromagnetic momentum, Eq. (6.18), is the total momentum and the electromagnetic energy, Eq. (6.19), is the total energy. Consequently, there is no significant energy or momentum contained in any hypothetical unobservable material subsystem, and there is no need for a mechanism in the theory to couple to any subsystem by a source or sink of either energy or momentum. In this limit, the right-hand side of Eq. (6.15) is negligible. Then

∂̄_β T^{αβ} = 0 (6.20)

conforms to the corrected spacetime conservation condition, Eq. (6.2). The other conservation conditions, Eqs. (6.4)-(6.6), are also satisfied. Therefore, the macroscopic electromagnetic system, Eqs. (5.34), is thermodynamically closed; Eq. (6.16) is the traceless, diagonally symmetric, total energy-momentum tensor; and the differential equation, Eq. (6.20), is a tensor conservation law.
All of the quantities that constitute the total energy density, total momentum density, and total energy-momentum tensor are electromagnetic quantities, with the caveat that the gradient of the refractive index is small. Although rigorous results are restricted to a limiting case, the real-world necessity of a non-zero gradient adds only a small perturbative effect. The opposite limit of a piecewise homogeneous medium without an antireflection coating must be handled using Fresnel boundary conditions [56].
A. The Balazs thought experiment
In 1953, Balazs [19] proposed a thought experiment to resolve the Abraham-Minkowski controversy. The thought experiment was based on the law of conservation of momentum and a theorem that the center of mass, including the rest mass that is associated with the energy, moves at a uniform velocity [57]. The total energy

E = (p · p c² + m²c⁴)^(1/2) (7.1)

becomes the Einstein formula E = mc² for massive particles in the limit v/c → 0. For massless particles, like photons, Eq. (7.1) becomes

p = (E/c) ê_k, (7.2)

where ê_k is a unit vector in the direction of motion. Equation (7.2) defines the instantaneous momentum of a photon between scattering events in a microscopic model of a dielectric. The description of the macroscopic momentum of a field in terms of the momentums of constituent photons is difficult because the effective momentum of a photon in the direction of propagation of the macroscopic field is different from its instantaneous momentum due to scattering. Some sort of averaging process is required, at which point the single-photon description becomes a problem. An additional issue with the photon description of light propagation in a continuous dielectric is illustrated by the commingling of macroscopic fields and the macroscopic refractive index with microscopic photon momentum and momentum states in a description of photon recoil momentum in a medium [58]. There are other complications, including an indefinite photon number, that cause us to choose a macroscopic classical description for light propagation in a dielectric.
As an electromagnetic field propagates from the vacuum into a simple linear dielectric, the effective velocities of photons in the field are reduced due to scattering. There is a corresponding increase in photon density in the dielectric. Likewise, the classical energy density (Π² + B²)/2 and the classical momentum density (B × Π)/c are enhanced by a factor of n in the dielectric, compared to the vacuum. For finite pulses in a dielectric, the enhanced energy density is offset by a narrowing of the pulse so that the electromagnetic energy is time-independent for quasimonochromatic fields in the plane-wave limit. The electromagnetic energy is the total energy by virtue of being constant in time. Likewise, the electromagnetic momentum, Eq. (6.18), is time-independent and is the total momentum. The center-of-energy velocity of the field slows to (c/n)ê_k. Invoking the Einstein mass-energy equivalence, it is argued in the scientific literature [17] that some microscopic constituents of the dielectric must be accelerated and then decelerated by the field; otherwise, the theorem that the center of mass-energy moves at a constant velocity is violated. For a distribution of particles of mass m_i and velocity u_i, the total momentum P = Σ_i m_i u_i is the sum of the momentums of all the particles i in the distribution. If the mass of each particle m_i is constant, the statement that the velocity of the center of mass is constant is a statement of conservation of total momentum.
Because of the enhanced momentum density of the field in a dielectric, the differential of electromagnetic momentum that is contained in an element of volume δv (a "particle") is a factor of n greater than in the vacuum. For a finite pulse, the narrower pulse width and the enhanced momentum density offset, allowing the electromagnetic momentum to be constant in time as the field enters, and exits, the dielectric through the gradient-index antireflection coating. Consequently, there is no need to hypothesize any motion of the material constituents of the dielectric to preserve the conservation of linear momentum, even though the velocity of light slows to c/n.
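The momentum bookkeeping in the preceding paragraph reduces to simple arithmetic. In the sketch below, units and constants are dropped for illustration; the √n amplitude and 1/n width scalings are the ones quoted in the text for an antireflection-coated dielectric.

```python
# Back-of-envelope sketch of the Balazs momentum bookkeeping (units and
# constants dropped; scalings taken from the text).
n = 1.5
E0, L0 = 1.0, 1.0          # vacuum field amplitude and pulse length

g_vac = E0**2              # momentum density ~ (field amplitude)**2
p_vac = g_vac * L0         # integrated over the vacuum pulse

g_med = (E0 * n**0.5)**2   # density enhanced by a factor of n
p_med = g_med * (L0 / n)   # pulse width narrowed by a factor of n
print(p_vac, p_med)        # equal: no material momentum is required
```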
B. The Jones-Richards experiment
One of the enduring questions of the Abraham-Minkowski controversy is why the Minkowski momentum is so often measured experimentally while the Abraham form of momentum is so favored in theoretical work. We now have the tools to answer that question. The Minkowski momentum is not measured directly, but inferred from a measured index dependence of the optical force on a mirror placed in a dielectric fluid [9,10,35]. The force on the mirror is

F = 2 dG/dτ, (7.8)

which depends on the total momentum density, Eq. (6.10). If we were to assume F = 2dG/dt, which is the relation between momentum and force in an otherwise empty spacetime, then we would write

F = 2 dG_M/dt. (7.9)

Then one might infer from Eq. (7.9) that the momentum density of the field in the dielectric fluid is the Minkowski momentum density. The measured force on the mirror in the Jones-Richards experiment [35] is consistent with both Eq. (7.8) and Eq. (7.9), depending on which theory is used to interpret the results. Clearly, an experiment that measures force, instead of directly measuring the change in momentum in the dielectric, will not conclusively distinguish the momentum density. Specifically, the Jones-Richards experiment does not prove that the Minkowski momentum density is the momentum density in the dielectric, as has been argued, nor does it prove that the total momentum density, Eq. (6.10), is the momentum density in the dielectric. However, based on the changes to continuum electrodynamics that are necessitated by conservation of energy and momentum in the propagation of light in a continuous medium, we can justify Eq. (7.8), instead of Eq. (7.9), as the appropriate relation between the force on the mirror and the momentum of the field in a dielectric.
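The factor-of-n ambiguity can be made explicit with a two-line calculation. Taking τ = t/n for the material time is an assumption consistent with the text's x̄⁰, not a quoted definition; under it, the material-time relation Eq. (7.8) applied to the conserved total momentum G predicts the same mirror force as the vacuum relation Eq. (7.9) applied to a Minkowski-style momentum G_M = nG.

```python
# Illustrative arithmetic (assumptions labelled, numbers made up): with
# tau = t/n assumed, F = 2 dG/dtau on the conserved total momentum G gives
# the same force as F = 2 dG_M/dt on a Minkowski-style momentum G_M = n*G,
# so a force measurement cannot discriminate between the two densities.
n = 1.33       # dielectric fluid, e.g. water in the Jones-Richards setup
G_rate = 1.0   # dG/dt of the conserved total momentum, arbitrary units

F_material = 2.0 * n * G_rate     # F = 2 dG/dtau, using d/dtau = n d/dt
F_minkowski = 2.0 * (n * G_rate)  # F = 2 d(n*G)/dt, Minkowski inference
print(F_material, F_minkowski)    # identical
```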
VIII. CONCLUSION
It has been said that physics is an experimental science and that physical theory must be constructed on the solid basis of observations and measurements. That is certainly true for serendipitous discoveries like X-rays and radioactivity. But Maxwell [59] used inductive reasoning to modify the Ampère law and construct the laws of electromagnetics two decades before Hertz [60] demonstrated the existence of electromagnetic waves. Later, Einstein's theory of special relativity violated the well-established and experimentally verified law of conservation of mass, and this law was modified to become the law of conservation of mass-energy. Mathematics is the language of physics, and there are many other examples (quantum mechanics, nonlinear optics, high-energy particle physics, etc.) where theory led experiment and not the other way around.
In this article, we treated Maxwellian continuum electrodynamics as an axiomatic formal theory and showed that valid theorems of the formal theory are contradicted by conservation laws. Axiomatic formal theory is a cornerstone of abstract mathematics and the contradiction of valid theorems of Maxwellian continuum electrodynamics by other fundamental laws of physics proves, unambiguously, that electrodynamics and energy-momentum conservation laws, as currently applied to dielectrics, are mutually inconsistent.
We then established a rigorous basis for a reformulation of theoretical continuum electrodynamics by deriving equations of motion for the macroscopic fields from Lagrangian field theory adapted for a dielectric-filled spacetime. We reformulated, for the flow of a light field in a dielectric-filled volume, the conservation laws that were originally derived for an unimpeded inviscid flow of non-interacting particles (dust, fluid, etc.) in the continuum limit in an otherwise empty volume. In a separate publication [54], we used coordinate transformations between inertial reference frames in a dielectric-filled volume to derive a theory of dielectric special relativity. The reformulated versions of continuum electrodynamics, special relativity, spacetime, field theory, and energy-momentum conservation laws are mutually consistent in a dielectric-filled volume. The Abraham-Minkowski controversy is trivially resolved because the tensor total energy-momentum continuity theorem, the total energy-momentum tensor, the total momentum, and the total energy are fully electromagnetic and unique for a closed and complete system consisting of a simple linear dielectric block draped with a gradient-index antireflection coating that is illuminated by quasimonochromatic light. The newly derived theory makes a unique prediction that was shown to be consistent with the Balazs [19] thought experiment and the Jones-Richards experiment [35] and is consequently compliant with the Scientific Method.
|
2018-11-28T17:53:45.000Z
|
2015-02-20T00:00:00.000
|
{
"year": 2015,
"sha1": "6996ba549dbdaa6a55ab28de9c5ccfd78c0ae2e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "80f0484984ec5a4ddc37bca66b67a3519bf082a0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Philosophy"
]
}
|
233608337
|
pes2o/s2orc
|
v3-fos-license
|
Disease Burden, Risk Factors, and Recent Trends of Liver Cancer: A Global Country-Level Analysis
Background: This study aimed to evaluate the updated disease burden, risk factors, and temporal trends of liver cancer based on age, sex, and country. Methods: We estimated the incidence of liver cancer and its attribution to hepatitis B virus (HBV) and hepatitis C virus (HCV) in 2018 based on the Global Cancer Observatory and World Health Organization (WHO) Cancer Causes database. We extracted the prevalence of risk factors from the WHO Global Health Observatory to examine the associations by weighted linear regression. The trend analysis used data from the Cancer Incidence in Five Continents and the WHO mortality database from 48 countries. Temporal patterns of incidence and mortality were calculated using average annual percent change (AAPC) by joinpoint regression analysis. Results: The global incidence of liver cancer was 9.3 per 100,000 population (age-standardized rate [ASR]) in 2018, and there was an evident disparity in the incidence related to HBV (ASR 0.2–41.2) and HCV (ASR 0.4–43.5). A higher HCV/HBV-related incidence ratio was associated with a higher level of alcohol consumption (β 0.49), overweight (β 0.51), obesity (β 0.64), elevated cholesterol (β 0.70), gross domestic product (β 0.20), and Human Development Index (HDI; β 0.45). An increasing trend in incidence was identified in many countries, especially for male individuals, populations aged ≥50 years, and countries with a higher HCV/HBV-related liver cancer incidence ratio. The most drastic increases in male incidence were reported in India (AAPC 7.70), Ireland (AAPC 5.60), Sweden (AAPC 5.72), the UK (AAPC 5.59), and Norway (AAPC 4.87). Conclusion: We observed an overall increasing trend of liver cancer, especially among male subjects, older individuals, and countries with a higher prevalence of HCV-related liver cancer. More efforts are needed in enhancing lifestyle modifications and accessibility of antiviral treatment for these populations. Future studies should investigate the reasons behind these epidemiological changes.
Introduction
Liver cancer is one of the most common malignancies, with more than 800,000 new cases diagnosed each year globally [1]. It is also a leading cause of cancer mortality, accounting for over 700,000 cancer deaths annually [1]. Liver cancer is more common in sub-Saharan Africa, East Asia, and Southeast Asia than in the Western countries [1]. However, in the USA, the incidence of liver cancer has more than tripled, while its mortality has more than doubled since 1980 [2]. The most common liver cancer is hepatocellular carcinoma, while other less common types include intrahepatic cholangiocarcinoma, angiosarcoma, and hemangiosarcoma [3]. The risk factors for liver cancer include gender, race, chronic viral hepatitis, cirrhosis, inherited metabolic diseases, alcohol drinking, smoking, obesity, type 2 diabetes, and exposure to carcinogenic substances such as aflatoxins [4]. Liver cancer could be prevented by reducing the prevalence of these modifiable risk factors, including hepatitis vaccination and lifestyle changes [5].
It is important to monitor the epidemiological trend of liver cancer using cancer registry data of high quality. Studying the recent trend of incidence and mortality for liver cancer is crucial as it can inform policy formulation for effective public health interventions and clinical practice. Owing to its high disparity in epidemiology across different populations, a comprehensive evaluation of its worldwide temporal patterns of disease burden in different population groups could benefit resource planning and allocation. Evidence has also shown that there is geographical variation in the epidemiology of liver cancer caused by hepatitis B virus (HBV) and hepatitis C virus (HCV) [6]. Evaluating the updated disease burden and associated risk factors of liver cancer by different causes in the population infected with HBV and HCV is important as the preventive measures and clinical management would be different for HBV and HCV.
Nevertheless, there is a lack of studies on the most updated epidemiology and risk factors of liver cancer induced by different causes, as well as its trend. Previous literature only investigated certain populations [7][8][9], reported relatively old data [10,11], and did not present the cancer burden by different causes [12]. Although the Global Burden of Disease (GBD) studies [13,14] evaluated the disease burden of liver disease by specific etiologies, there is a lack of trend analysis for different groups by sex, age, and country using real-world cancer registry data. Also, none of these studies have investigated the difference in risk factors associated with liver cancer related to HBV and HCV at a country level. Therefore, the objectives of this study were to evaluate the (1) updated global epidemiology of liver cancer in 2018, (2) associated lifestyle and metabolic risk factors related to HBV and HCV, and (3) its recent epidemiologic trend by sex, age, and country.
Data Source
This study adopted methods similar to our previously published studies [11,15,16]. In brief, we retrieved the GLOBOCAN database, which contains data of 185 countries, to estimate the global and regional incidences and mortality of liver cancer in 2018 [17]. To improve the quality and coverage of estimation of incidence and mortality, several methods were used in GLOBOCAN, including modeling by mortality-to-incidence ratios, predictions, and approximation from neighboring regions. We also estimated the incidence of liver cancer attributable to HBV and HCV in 2018 based on the World Health Organization (WHO) Cancer Causes database [18] and previous studies [19,20]. For the analysis of its lifestyle and metabolic factors, we used the age-standardized prevalence of risk factors for each country from the WHO Global Health Observatory database [21], including smoking, alcohol consumption, physical inactivity, overweight, obesity, diabetes, hypertension, and elevated cholesterol (see online suppl. Table 1; see www.karger.com/doi/10.1159/000515304 for all online suppl. material). We also extracted the gross domestic product (GDP) per capita and Human Development Index (HDI) in 2018 for each country from the World Bank [22] and the United Nations Development Programme [23], respectively. For trend analysis, we extracted incidence and mortality figures of 48 countries from national and global registries for all available calendar years (1980-2017) (online suppl. Table 2). To retrieve the data on incidence, we searched nation-/region-specific cancer registries in Cancer Incidence in Five Continents, volumes I-XI [24]. The Cancer Incidence in Five Continents database contains population-based incidence figures from cancer registries that confirm the diagnosis of each cancer case reported in a predetermined time interval. To obtain the most updated figures on incidence and mortality for the USA, we searched the Surveillance, Epidemiology, and End Results (SEER) program, which is publicly available and covers most cancer registry data in the USA [25]. We also collected the most updated figures on the incidence and mortality for northern European countries, including Denmark, Finland, Sweden, Iceland, Greenland, Norway, and the Faroe Islands, from the Nordic Cancer Registries [26]. We used the WHO mortality database for mortality data for other countries/regions outside the USA and northern Europe [27]. Only data with a quality level of medium or above were used to compute mortality figures in the database [28]. All these cancer registries have been regarded as a well-recognized standard reference for trend analysis of cancer burden. We used the International Classification of Diseases and Related Health Problems-10th Revision code C22 to identify "malignant neoplasm of the liver and intrahepatic bile ducts" in the analysis [29]. Age-standardized rates (ASRs) were calculated for all figures based on the Segi-Doll world standard population [30].
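For readers unfamiliar with direct standardization, the ASR computation reduces to a weighted sum of age-specific rates. The sketch below uses the conventional Segi-Doll world standard weights for 5-year age bands (0-4 through 85+); the case and person-year counts are invented placeholders, not registry data.

```python
# Sketch of direct age standardization to the Segi-Doll world standard
# population (weights per 100,000 for 5-year bands 0-4 ... 85+).
SEGI_WEIGHTS = [12000, 10000, 9000, 9000, 8000, 8000, 6000, 6000, 6000,
                6000, 5000, 4000, 4000, 3000, 2000, 1000, 500, 500]

def age_standardized_rate(cases, person_years):
    """ASR per 100,000 from per-age-band case and person-year counts."""
    assert len(cases) == len(person_years) == len(SEGI_WEIGHTS)
    # Weights already sum to 100,000, so no further normalization is needed.
    return sum(w * c / p for w, c, p in zip(SEGI_WEIGHTS, cases, person_years))

# Hypothetical single-country example: cases concentrated at older ages.
cases = [0] * 8 + [5, 12, 30, 55, 80, 95, 90, 70, 40, 20]
person_years = [1_000_000] * 18
print(round(age_standardized_rate(cases, person_years), 1))  # ~1.4
```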
Statistical Analysis
All incidence and mortality figures were presented as ASRs. The HCV-/HBV-related liver cancer incidence ratio was calculated as the ASR of liver cancer incidence attributable to HCV divided by that attributable to HBV. The ratio shows the relative burden of liver cancer attributable to HBV and HCV for individual countries without reference to its incidence. We calculated this ratio to examine the incidence of liver cancer attributable to HCV and HBV in association with certain risk factors. We chose these 2 because only these 2 etiologies for liver cancer were described in the database. Countries with a ratio more than 1 had a higher incidence of HCV-related liver cancer than that related to HBV. Countries with a ratio less than 1 had a lower incidence of HCV-related liver cancer than that related to HBV. Countries with a ratio equal to 1 had the same incidence of HCV-related liver cancer as that related to HBV. We also estimated the total attributable fraction (AF) of liver cancer caused by HBV and HCV for each country from the database. The correlations of the lifestyle and metabolic risk factors, GDP per capita, and HDI with the ratio were examined using Pearson's correlation coefficient (r). We also performed a sensitivity analysis by excluding the countries with a total AF of liver cancer caused by HBV and HCV of no more than 50 and 60%, respectively. Weighted linear regression by inverse variance was also performed to generate beta coefficients (β) for the associations. The epidemiological trend of incidence and mortality of liver cancer over the past 10 years was evaluated for different countries by using joinpoint regression analysis [31]. The results were presented as average annual percent change (AAPC) with its 95% confidence interval (CI) [31]. A logarithmic transformation of the incidence and mortality data was performed, and standard errors were calculated by binomial approximation. Weights equivalent to each segment's length were apportioned for the specified time frame [32]. Countries with "zero" or "missing" values in their figures of the most recent decade were excluded from the regression analysis. A maximum of 3 joinpoints was used as the parameter of analysis. The AAPC was evaluated as an average of annual percent changes (APCs) using geometric weighting in populations of different age strata, genders, and countries. A p value of <0.05 was considered statistically significant in the analysis.
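The AAPC reported throughout the Results is a segment-length-weighted geometric average of the joinpoint segments' annual percent changes. A minimal sketch of that computation follows; the segment values are illustrative, not estimates from this study.

```python
# Minimal sketch of the AAPC computation used in joinpoint trend analysis.
import math

def aapc(apcs, segment_lengths):
    """Average annual percent change over the segments of a joinpoint fit."""
    slopes = [math.log(1.0 + apc / 100.0) for apc in apcs]  # per-year log slopes
    w_total = sum(segment_lengths)
    mean_slope = sum(b * w for b, w in zip(slopes, segment_lengths)) / w_total
    return 100.0 * (math.exp(mean_slope) - 1.0)

# e.g. a 10-year window split into a 4-year and a 6-year segment
print(round(aapc([2.5, -1.0], [4, 6]), 2))  # ~0.39% per year
```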
Incidence and Mortality in 2018
A total of 841,080 (CI 817,635-865,198) new cases were reported in 2018 (Table 1) [17]. The ASR of incidence was 9.3 (CI 9.0-9.6) per 100,000 population and showed a 7-fold variation globally (Fig. 1). The highest rates were reported in Eastern Asia. Table 2 shows the cause-specific estimated number of new cases, ASRs, and the HCV/HBV-related liver cancer incidence ratios in 2018 for each country. The highest ASRs of HBV-related liver cancer were observed in Mongolia (ASR 41.2).
Risk Factors Associated with HCV/HBV-Related Liver Cancer Incidence Ratio
Among the lifestyle risk factors investigated, a higher incidence ratio of HCV/HBV-related liver cancer was associated with a higher prevalence of alcohol consumption (r 0.27, p < 0.001) and physical inactivity (r 0.19, p = 0.02), but not with smoking (p = 0.2) (Fig. 2). For the metabolic risk factors, a higher ratio was associated with a higher prevalence of overweight (r 0.42, p < 0.001), obesity (r 0.40, p < 0.001), and elevated cholesterol (r 0.41, p < 0.001), and a lower prevalence of hypertension (r −0.21, p = 0.008), but not with diabetes (p = 0.7). The higher ratio was also associated with a higher GDP per capita (r 0.40, p < 0.001) and HDI (r 0.42, p < 0.001) for different countries. The correlations remained unchanged when excluding the countries with a total AF of liver cancer caused by HBV and HCV of no more than 50% or 60% in the sensitivity analysis (online suppl. Table 3). After conducting the weighted linear regression, the associations remained significant for alcohol consumption (β 0.49), overweight (β 0.51), obesity (β 0.64), elevated cholesterol (β 0.70), GDP per capita (β 0.20), and HDI (β 0.45).
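The β coefficients above come from inverse-variance weighted least squares. A minimal sketch of that model fit using statsmodels follows; the arrays are toy placeholders, not the country-level study data.

```python
# Sketch of the inverse-variance weighted linear regression behind the
# beta coefficients; arrays are placeholders.
import numpy as np
import statsmodels.api as sm

ratio = np.array([0.3, 0.8, 1.2, 2.5, 3.1])   # HCV/HBV incidence ratio
risk = np.array([4.0, 6.5, 8.0, 11.0, 12.5])  # e.g. alcohol consumption
se = np.array([0.5, 0.4, 0.6, 0.3, 0.5])      # standard errors of the ratio

X = sm.add_constant(risk)
fit = sm.WLS(ratio, X, weights=1.0 / se**2).fit()  # inverse-variance weights
print(fit.params)   # intercept and beta coefficient
print(fit.pvalues)
```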
Temporal Trends of Liver Cancer
The incidence and mortality trends of each country between 1980 and 2017 are shown in online suppl. Figure 1, and the results from the joinpoint regression analysis are plotted in online suppl. Figure 2.

Incidence Trend

Considering male individuals, 18 countries had an increase in incidence and 23 countries reported stable trends (Fig. 3; online suppl. material).
Mortality Trend
Considering male patients, 13 countries had an increase in mortality and 23 countries reported stable trends (Fig. 4; online suppl. material).
Summary of Major Findings
This study presents the most updated data on the global disease burden of liver cancer by causes and associated risk factors, as well as its epidemiological trends by age, gender, and country. There are several major findings. First, the highest burden tended to predominate in Eastern Asia, and there was an evident epidemiologic disparity in its incidence caused by HBV and HCV in 2018. Second, a higher incidence ratio of HCV/HBV-related liver cancer was associated with a higher level of alcohol consumption, overweight, obesity, elevated cholesterol, GDP, and HDI. Third, many countries reported an increasing trend in liver cancer for the past 10 years, especially among male individuals, those aged ≥50 years, and countries with a higher HCV/HBV-related liver cancer incidence ratio.
Disparities in Epidemiology by Causes
There was a substantial variation in the incidence and mortality of liver cancer in different countries in 2018. We found that the highest incidence and mortality tended to predominate in Eastern Asia, Micronesia, Northern Africa, and Southeastern Asia, while the lowest was found in south-central Asia, Western Asia, central Europe, and eastern Europe. These findings are consistent with those reported from previous studies [10,11]. In addition to ethnic and racial differences, this variation might be attributed to the different distribution of risk factors for liver cancer across different populations. Chronic HBV and HCV remain important risk factors for liver cancer [33]. Globally, more than 250 million individuals were infected with HBV in 2015, with a significant geographic variation in prevalence [34]. Countries with a prevalence of HBV infection over 8% were mostly in Asia and Africa, and they contributed to approximately 70% of all infected patients. On the contrary, the prevalence of HBV in Western countries was below 2% [35]. Globally, more than 70 million individuals were infected with HCV in 2015, with a major geographical difference [36]. The prevalence of HCV was high in central Asia and the Mediterranean (>3.5%), while its prevalence was less than 1.5% in North America [37]. Even though the prevalence of HBV and HCV was low in the developed countries, liver cancer caused by HCV was more prevalent in high-income countries like North America and western Europe [38].
Association with Risk Factors, GDP, and HDI
There is clinical and public health significance in examining the association between the HCV-/HBV-related liver cancer incidence ratio and the prevalence of preventable lifestyle and metabolic risk factors. HBV and HCV are largely predominant risk factors compared with other risk factors in most countries, and there was an evident geographical variation in the epidemiology of liver cancer caused by HBV and HCV. This indicator was devised for the purpose of looking at their relative association with different risk factors, which may help set tailored strategies on liver cancer prevention for individual countries. The current correlation analysis found some lifestyle and metabolic risk factors associated with a higher prevalence of liver cancer incidence attributable to HCV and a lower prevalence of that attributable to HBV in a country-level analysis. These factors included alcohol consumption, overweight, obesity, and elevated cholesterol. The results are generally consistent with the studies at an individual level. There is evidence for a much stronger synergistic effect between alcohol consumption and HCV infection than with HBV infection in the development of liver cancer, according to a case-control study of 464 patients with liver cancer [39]. A cohort study of 23,820 participants with a follow-up period of 14 years found that obesity was independently associated with a 4-fold risk of liver cancer among patients infected with HCV, but not among patients infected with HBV [40]. Similar associations were found among individuals with diabetes [40] and metabolic syndrome [41]. However, we did not find diabetes as a risk factor for higher HCV-related liver cancer, and this is probably due to the presence of unknown potential confounders or the difference between ecological and individual correlations. A notable finding of the study results is that the ratio was also associated with GDP and HDI, which are 2 important indexes measuring the level of socioeconomic development of different countries.
Increasing Burden in the Past Decade
We observed an overall increasing trend of its incidence and mortality for the past 10 years, especially among male subjects, older individuals, and countries with a higher prevalence of HCV-related liver cancer. The reasons behind the increasing trend of the incidence and mortality of liver cancer remain unclear. As the increase was mostly observed in countries with a higher prevalence of HCV-related liver cancer, the increasing prevalence of alcohol consumption and obesity may have contributed to this epidemiologic transition. Over the past decade, the global alcohol consumption per capita has increased from 5.5 to 6.4 L (16.4%) among adults [42]. Based on a recent WHO report on the global burden of obesity in 2016, its prevalence has nearly tripled in the past 4 decades [43]. A meta-analysis of more than 14 million participants found that the worldwide prevalence of central obesity has doubled from 1985 (16%) to 2014 (34%) [44]. In addition to HCV-related liver cancer, the increase in burden of liver cancer may also be attributable to the recent increasing trend of non-alcoholic fatty liver disease (NAFLD) [45,46]. NAFLD can lead to liver cirrhosis and cancer, contributing to liver-related mortality [47]. Evidence has shown that there was also a strong association between the risk of NAFLD and obesity, as well as other metabolic diseases [48]. All these factors may be associated with an increasing trend of liver cancer among countries with high GDP and HDI. Considering the increasing prevalence of obesity and metabolic syndrome caused by overnutrition, sedentary lifestyles, and urbanization, the global burden of HCV- and NAFLD-related liver cancer is estimated to increase further in the future.
Strengths and Limitations
This study is an updated analysis of the global burden of liver cancer by causes and its associated risk factors, as well as its recent epidemiological trend by age, gender, and country. The figures were obtained from real-world cancer registries of high quality with a total of more than one million cancer cases. Nevertheless, the study has several limitations. First, there could be underreporting of the cancer figures in lower-income countries when compared with higher-income countries. In contrast, the figures could also have been overestimated as the figures for incidence and mortality were mainly from the cancer registries of major cities for some countries. Second, direct comparison between some countries could be difficult since cancer registries and causes of death registries might differ by country and over time. Third, risk factor associations with other etiologies could not be assessed from this database, and only the analysis of HCV-/HBV-related liver cancer could be performed. In addition, the increase in incidence in some countries may be attributable to the improvement in diagnosis.
Implications
The reinforcement of country-specific preventive strategies, including a robust implementation of hepatitis vaccination programs and promotion of healthy lifestyle interventions, is important to reduce the burden of liver cancer. Screening programs for high-risk populations can also be organized in primary care settings to detect liver cancer and its related diseases, including HBV, HCV, cirrhosis, and NAFLD [49]. In 2015, the United Nations announced that one important goal of Sustainable Development is to eliminate viral hepatitis by 2030 [50]. For HBV-related liver cancer, the decrease in cancer rates will certainly depend on highly effective vaccination campaigns and antiviral treatment. For HCV-related liver cancer, the decline in cancer rates will derive more from an antiviral approach, as an HCV vaccine is not currently available [51]. However, even though effective antiviral treatment options are available for HBV and HCV, they have not been sufficiently implemented globally, especially in developing countries. The success of the elimination plan will largely depend on the accessibility of antiviral treatment, the availability of treatment and monitoring guidelines, and the capacity to offer screening and treat the high-risk populations. Other strategies to combat liver cancer, including community-based health promotion and education programs (such as those on the prevention of needle sharing among intravenous drug users) and environmental modifications (such as initiatives on the promotion of storage techniques to avoid aflatoxin contamination), can also be useful for high-risk populations. For patients already diagnosed with liver cancer, it is important to channel efforts and resources into improving available medical and surgical interventions (e.g., surgery, ablation, embolization therapy, radiation therapy, targeted drug therapy, immunotherapy, and chemotherapy) and provide multidisciplinary care to reduce its related morbidity and mortality. Future studies should investigate the plausible reasons behind these epidemiological changes, which may offer further insights into developing an evidence-based, globally sustainable, targeted, and individualized public health model in fighting liver cancer at its core.
Statement of Ethics
This study was approved by the Survey and Behavioural Research Ethics Committee, Chinese University of Hong Kong (No. SBRE-20-332). We declare the research complies with the guidelines for human studies and was conducted ethically in accordance with the World Medical Association Declaration of Helsinki. Patient consent was not applicable as the study only used information freely available in the public domain and does not contain any personal or medical information about an identifiable living individual.
|
2021-05-04T22:06:36.450Z
|
2021-03-30T00:00:00.000
|
{
"year": 2021,
"sha1": "b5e90dcf28acecc225737856b20091bfceb2cba0",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/515304",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "eb5d004efcad38fc5d6e68ecdfcc865624055ac5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|