Evidence of diversity and recombination in Arsenophonus symbionts of the Bemisia tabaci species complex

Background
Maternally inherited bacterial symbionts infecting arthropods have major implications for host ecology and evolution. Among them, the genus Arsenophonus is characterized by a large host spectrum and a wide range of symbiotic relationships (from mutualism to parasitism), making it a good model for studying the evolution of host-symbiont associations. However, few data are available on the diversity and distribution of Arsenophonus within host lineages. Here, we propose a survey of Arsenophonus diversity in whitefly species (Hemiptera), in particular the Bemisia tabaci species complex. This polyphagous insect pest is composed of genetic groups that differ in many ecological aspects. They harbor specific bacterial communities, among them several lineages of Arsenophonus, enabling a study of the evolutionary history of these bacteria at a fine host taxonomic level, in association with host geographical range and ecology.

Results
Among 152 individuals, our analysis identified 19 allelic profiles and 6 phylogenetic groups, demonstrating this bacterium's high diversity. These groups, based on the Arsenophonus phylogeny, correlated with B. tabaci genetic groups, with two exceptions reflecting horizontal transfers. None of the three genes analyzed provided evidence of intragenic recombination, but intergenic recombination events were detected. A mutation inducing a STOP codon in one gene of a strain infecting one B. tabaci genetic group was also found. Phylogenetic analyses of the three concatenated loci revealed the existence of two clades of Arsenophonus. One, composed of strains found in other Hemiptera, could be the ancestral clade in whiteflies. The other, which regroups strains found in Hymenoptera and Diptera, may have been acquired more recently by whiteflies through lateral transfers.

Conclusions
This analysis of the genus Arsenophonus revealed a diversity within the B. tabaci species complex which resembles that reported on the larger scale of insect taxonomy. We also provide evidence for recombination events within the Arsenophonus genome and horizontal transmission of strains among insect taxa. This work provides further insight into the evolution of the Arsenophonus genome, the infection dynamics of this bacterium and its influence on its insect host's ecology.

Background
Many arthropods live in symbiosis with one or more endosymbiotic bacteria, establishing a wide diversity of symbiotic associations ranging from mutualism to parasitism [1,2]. When arthropod hosts feed on imbalanced diets, such as plant sap or vertebrate blood, mutualistic bacterial symbionts play a central role in their biology by providing essential nutrients that are lacking or limited [3], leading to obligatory cooperative insect-microbial relationships. Arthropods also harbor facultative symbionts acquired more recently, leading to complex associations with shorter epidemiological and evolutionary dynamics [4,5]. These are mainly vertically transmitted, but depending on the host-symbiont association, horizontal transfers may occur within and between species on different evolutionary time scales [6-9]. An extremely diverse group of bacterial taxa is involved in facultative symbiosis, with a wide range of both hosts and phenotypes.
Some facultative endosymbiotic bacteria confer direct fitness benefits such as protection against natural enemies [10,11], host-plant specialization [12] or thermal tolerance [13]. Others, like the alphaproteobacterium Wolbachia and the Bacteroidetes Cardinium, manipulate host reproduction to enable their spread and maintenance in host populations despite deleterious effects (for a review see Stouthamer et al. [14]).

Among the symbiotic bacteria, the gammaproteobacterial genus Arsenophonus has particularly characteristic features with regard to lineage diversity, host spectrum and the symbiotic relationships established with its hosts. It thus constitutes a good model for studying the evolutionary processes shaping symbiotic associations. The diversity of Arsenophonus host species is particularly large, including insects, other arthropods (such as ticks) and plants [15]. This can be explained by the symbiont's transmission routes, since this vertically transmitted bacterium can also be acquired by horizontal transfer within and among species [16,17]. Moreover, some strains can be cultivated in cell-free cultures [18]. Arsenophonus-host relationships range from parasitism to mutualism, with the induction of various phenotypes such as reproductive manipulation (male-killing) [19], phytopathogenicity [20] or obligatory mutualism [21,22]. However, in most reported symbiotic associations, the impact of this symbiont on the host phenotype remains unknown. Based on rRNA gene analysis, phylogenetic studies have revealed an extremely high diversity of bacterial lineages forming a monophyletic group [15]. In addition, the Arsenophonus phylogeny encompasses several host-specific sub-clusters with lower divergence, associated with ticks, plants, triatomine bugs, whiteflies, several genera of hippoboscids, and ants, but no co-speciation pattern within clades. Besides these bacterial lineages that cluster according to host taxonomy, a number of closely related Arsenophonus strains infect unrelated host species. Moreover, the same host species sometimes harbors several Arsenophonus lineages, a pattern that is probably due to Arsenophonus's ability to be horizontally transferred, as recently demonstrated in the hymenopteran parasitoids of the family Pteromalidae [17]. Previous studies have shown that whitefly species can host different strains of several bacteria [15,23,24], and they thus appear to be particularly relevant for investigating Arsenophonus diversity and evolution. However, we cannot disregard the fact that rRNA-based phylogenies suffer from inconsistencies as a result of intragenomic heterogeneity among the 8 to 10 estimated rRNA copies in the Arsenophonus genome [25]. Moreover, biased phylogenies can also result from homologous recombination, which appears more frequently in symbiotic bacteria than expected given their intracellular lifestyle and vertical transmission [26,27]. The availability of the complete sequence of the Arsenophonus genome now provides the opportunity to perform a more accurate exploration of the evolutionary history and ecological spread of this pervasive symbiotic bacterium on different host-taxonomical scales.

Among the whiteflies, the Bemisia tabaci (Homoptera, Aleyrodidae) species complex has emerged as a focus of attention for several reasons, chief among them being the ongoing species radiation and the high prevalence of a wide diversity of endosymbiotic bacteria, including several lineages of Arsenophonus [28].
The whitefly B. tabaci is a worldwide polyphagous pest of vegetables and ornamental crops, previously thought to be a single species composed of several well-differentiated genetic groups or biotypes. Recently, however, some of these groups have been recognized as true species, so that B. tabaci is now considered a complex of 24 cryptic species which barely interbreed and form different phylogenetic clades [29]. The biological data needed to draw clear boundaries among species and to identify the cause of such genetic differentiation are still lacking. This phloem-feeding insect harbors a primary symbiont, Portiera aleyrodidarum, required to supplement its specialized diet. B. tabaci also hosts up to six vertically transmitted secondary symbionts, some of which are phylogenetically highly distant [23]. For each of these symbionts, the phenotypic consequences of infection in B. tabaci remain poorly characterized, if at all [30]. Nevertheless, in other insect species, some of these bacteria are known to manipulate host reproduction, while others increase resistance to natural enemies [4,10,14,31]. Moreover, the symbionts are thought to play a major role in the viral transmission capacities of the pest [32,33]. Interestingly, multiple bacterial infections are common in B. tabaci, and the endosymbiotic community is correlated with the B. tabaci genetic groups on different scales of differentiation [28,34,35]. This raises the question of these endosymbionts' role in B. tabaci biology and species radiation.

Within the 24 well-differentiated mtDNA groups recognized as true species by De Barro et al. [29], which regroup all previously described biotypes, Arsenophonus has been found in AsiaII3 (ZHJ1 biotype), AsiaII7 (Cv biotype), Indian Ocean (Ms biotype), Mediterranean [Q and Africa Silver Leafing (ASL) biotypes, which probably form true species] and the Sub-Saharan Africa species [Africa non-Silver Leafing (AnSL) biotype] [28,34-38]. For all other species or groups, there are either no data or they have proven to be free of infection. For example, within the Africa/Middle East/Asia Minor clade, which contains the most invasive species, Arsenophonus appears well established in the Ms, Q and ASL groups, whereas the invasive B group has been shown to be uninfected, despite extensive symbiont screening [28,34,39]. The prevalence varies considerably within and among populations and genetic groups infected by Arsenophonus. For example, Q is composed of three COI-differentiated groups, Q1, Q2 and Q3 [28]. To date, these three cytotypes do not show the same geographical distribution and differ in the composition of their endosymbiotic bacterial communities [28,40]. The subgroup Q1, found in Europe, is not infected by Arsenophonus but harbors three other bacteria [28]. In contrast, Q2, observed in the Middle East, and Q3, reported only in Africa, show a high prevalence of Arsenophonus in co-infection with Rickettsia [28,34,41]. Ms individuals are highly infected by Arsenophonus, with a high level of co-infection by Cardinium [37]. All of these groups (B, Q, ASL, Ms and AnSL) show quite different geographical ranges. Ms has been detected on islands in the southwestern part of the Indian Ocean, and in Tanzania and Uganda, living in sympatry with B [42]. ASL and AnSL have been reported only in Africa [28,35,43-46]. In contrast, the invasive B and Q groups are spread all over the world. Q has been found in Africa, America, Europe, Asia and the Middle East [28,34,47,48].
However, this situation is constantly in flux, because commercial trade is responsible for recurrent introduction/invasion processes of B. tabaci, giving rise to new sympatric situations. Moreover, potential horizontal transfers of symbionts and interbreeding can generate new nucleo-cytoplasmic combinations and thus rapid evolution of symbiont diversity. The patterns of Arsenophonus infection in B. tabaci within the high-level Africa/Middle East/Asia Minor groups make this clade a good candidate for studying, on fine taxonomic and time scales, the spread of this bacterium, its ability to be horizontally transferred and, finally, its evolutionary history, including the genetic diversity generated by recombination events. In the present paper, we explore the prevalence and diversity of Arsenophonus strains in this clade using an MLST approach to avoid the disadvantages of the rRNA approach. In parallel we also studied, as an outgroup, the Sub-Saharan AnSL species (S biotype), considered the basal group of this species complex, and two other whitefly species found at the sampling sites, Trialeurodes vaporariorum and Bemisia afer.

Insect sampling
Individuals from different species of Bemisia tabaci and two other Aleyrodidae species were collected from 2001 to 2010 from various locations and host plants in Africa and Europe and stored in 96% ethanol (Table 1, Figure 1).

DNA extraction and PCR amplification
Arsenophonus detection and identification of B. tabaci genetic groups
Insects were sexed and DNA was extracted as previously described by Delatte et al. [49]. All samples were screened for Arsenophonus infection using the specific primers Ars-23S1/Ars-23S2 targeting the 23S rRNA gene [50] (Table 2). To check the quality of the extracted DNA, all samples were also tested for the presence of the primary symbiont P. aleyrodidarum using specific primers for the 16S rRNA gene described by Zchori-Fein and Brown [23]. When positive signals were recorded in both PCRs, insects were used in the analysis. B. tabaci genetic groups were identified by a PCR-RFLP (restriction fragment length polymorphism) test based on the mitochondrial marker COI (cytochrome oxidase I) gene, as described by Gnankine et al. [35], for Q, ASL and AnSL individuals. A set of 10 microsatellite markers was used to identify Ms according to Delatte et al. [42]. Moreover, a portion of the COI gene was sequenced for five individuals from each of the different B. tabaci genetic groups, using the protocol described by Thierry et al. [37] and Gnankine et al. [35] (Figure S1 in Additional file 1).

Study of Arsenophonus diversity
PCRs targeting three different genes of Arsenophonus were carried out on positive samples with two sets of primers designed specifically for this study (ftsK: ftskFor1/Rev1, ftskFor2/Rev2; yaeT: YaeTF496/YaeTR496, see Table 2) and one set from the literature (fbaA: FbaAf/FbaAr) [17]. For the Q group, amplifications failed for some individuals and the primer FbaArLM (Table 2) was then used instead of FbaAr. These two primers are adjacent, and the use of FbaArLM permits amplification of essentially the same fbaA fragment in these individuals.

Phylogenetic analyses
Multiple sequences were aligned using the MUSCLE algorithm [51] implemented in CLC DNA Workbench 6.0 (CLC Bio). Phylogenetic analyses were performed using maximum-likelihood (ML) and Bayesian inferences for each locus separately and for the concatenated data set. jModelTest v.0.1.1 was used to carry out statistical selection of best-fit models of nucleotide substitution [52] using the Akaike Information Criterion (AIC).
A corrected version of the AIC (AICc) was used for each data set because the sample size (n) was small relative to the number of parameters (n/K < 40). This approach suggested the following models: HKY for fbaA, GTR for ftsK, HKY+I for yaeT and GTR+I for the concatenated data set. Under the selected models, the parameters were optimized and ML analyses were performed with PhyML v.3.0 [53]. The robustness of nodes was assessed with 100 bootstrap replicates for each data set. Bayesian analyses were performed as implemented in MrBayes v.3.1.2 [54]. According to the BIC (Bayesian Information Criterion) estimated with jModelTest, the selected models were the same as for the ML inferences. For the concatenated data set, the same models were used for each gene partition. Analyses were initiated from random starting trees. Two separate Markov chain Monte Carlo (MCMC) runs, each composed of four chains, were run for 5 million generations with a "stoprule" option to end the run before the fixed number of generations when the convergence diagnostic falls below 0.01. Thus, the number of generations was 3,000,000 for fbaA, 600,000 for ftsK, 2,100,000 for yaeT and 1,000,000 for the concatenated data set. A burn-in of 25% of the sampled generations was discarded and posterior probabilities were computed from the remaining trees. The runs of each analysis converged, with PSRF values at 1. In addition, the Arsenophonus strains identified in the present study were used to infer a phylogeny on a larger scale with Arsenophonus sequences from various insect species obtained from Duron et al. [17]. The GTR+G model was used for both methods (ML and Bayesian inferences) and the number of generations was 360,000 for the Bayesian analysis.

Recombination analysis
The multiple sequence alignments used in the phylogenetic analysis were also used to identify putative recombinant regions with methods available in the RDP3 computer analysis package [55]. The multiple sequence alignments were analyzed by seven methods: RDP [56], GENECONV [57], Bootscan [58], Maximum Chi Square [59], Chimaera [60], SiScan [61], and 3Seq [62]. The default search parameters for scanning the aligned sequences for recombination were used and the highest acceptable probability (p value) was set to 0.001.

Diversity and genetic analysis
Identical DNA sequences at a given locus for different strains were assigned the same arbitrary allele number (i.e. each allele has a unique identifier). Each unique allelic combination corresponded to a haplotype. Genetic diversity was assessed using several functions from the DnaSP package [63] by calculating the average number of pairwise nucleotide differences per site among the sequences (π), the total number of mutations (η), the number of polymorphic sites (S) and the haplotype diversity (Hd). The software Arlequin v.3.01 [64] was used to test the putative occurrence of geographical or species structure for the different population groups by an AMOVA (analysis of molecular variance). The analyses partitioning the observed nucleotide diversity were performed between and within sampling sites (countries, localities) or species (B. tabaci species, T. vaporariorum and B. afer). For each analysis, genetic variation was partitioned into the three following levels: between groups (FCT), between populations within groups (FSC) and within populations (FST). Significance was tested by 10,000 permutations as described by Excoffier et al. [64].
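As a concrete illustration of these four statistics, the sketch below re-implements their textbook definitions on a toy alignment. It is ours and illustrative only: the analyses reported here were done with DnaSP, and the example sequences are invented.

```python
# Illustrative re-implementation of the diversity statistics S, eta, pi and Hd
# on a toy alignment (equal-length sequences, one string per individual).
from itertools import combinations

def diversity_stats(seqs):
    n, L = len(seqs), len(seqs[0])
    S = eta = 0
    for site in zip(*seqs):               # iterate over alignment columns
        alleles = set(site)
        if len(alleles) > 1:
            S += 1                        # polymorphic (segregating) site
            eta += len(alleles) - 1       # minimum number of mutations here
    # pi: average number of pairwise nucleotide differences per site
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    pi = sum(diffs) / len(diffs) / L
    # Hd: haplotype diversity, n/(n-1) * (1 - sum of haplotype frequencies^2)
    counts = {}
    for s in seqs:
        counts[s] = counts.get(s, 0) + 1
    Hd = n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts.values()))
    return S, eta, pi, Hd

print(diversity_stats(["ACGTACGT", "ACGTACGA", "ACGAACGA", "ACGTACGA"]))
# -> (2, 2, 0.125, 0.8333...)
```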
Results
Three bacterial genes of Arsenophonus, fbaA, yaeT and ftsK, were sequenced for 152 Aleyrodidae individuals sampled from different geographical locations and host plants (Figure 1, Table 1). The obtained sequences exhibited a high degree of identity to sequences from the bacterial genus Arsenophonus available in the NCBI database (http://www.ncbi.nlm.nih.gov), ranging from 91 to 100% for fbaA, 94 to 98% for yaeT, and 91 to 100% for ftsK. The GC content varied from 39 to 46% (Table 3), the expected range for these bacteria [65].

Prevalence and co-occurrence of Arsenophonus
Arsenophonus showed highly variable prevalence among and within genetic groups and locations (Table 1). Within the Q3 and ASL groups, found only in Africa, more than 80% of the individuals were infected with Arsenophonus, whereas the prevalence was lower in the AnSL group (50% on average). The infection level was much more variable in Q2 (from 33 to 100%) and Ms (from 4 to 100%). Furthermore, all individuals tested from T. vaporariorum (30) and B. afer (2) were infected with Arsenophonus. Since the sampling was not performed on the same host plants, or in the same locations or countries for a given group, we could not test for the influence of host plant or locality. Based on the three sequenced genes, we did not detect co-infection of a single whitefly by two lineages of Arsenophonus.

Allelic variation
Nine alleles were found for both ftsK and fbaA, and 11 for yaeT (Table 4). In these three genes, only 12.1% of the sites showed variation (110/906; Table 3). The observed allelic diversity was not randomly distributed. In fact, strong and significant differentiation (FCT = 0.69*, explaining 69% of the total variation in the sample; Table S1 in Additional file 1) was observed between groups of alleles, with each group being mostly associated with a genetic group within the B. tabaci complex or with the other Aleyrodidae species tested (T. vaporariorum or B. afer). For the ftsK locus, we observed indels of two types: a 2-bp insertion found exclusively in the Arsenophonus hosted by the Q2 genetic group, and a 1-bp deletion found in some ASL and Q2 individuals. These two indels resulted in hypothetical truncated FtsK proteins potentially encoding 866 or 884 amino acids, respectively (the predicted FtsK has 1,030 amino acids in Arsenophonus nasoniae [GenBank: CBA73190.1]; Table S2 in Additional file 1). Among the 152 individuals used in this study, a total of 19 haplotypes of Arsenophonus were identified, which is low compared to the 891 theoretically possible allelic combinations (9 × 9 × 11: 9 alleles each for ftsK and fbaA, and 11 for yaeT; Table 4).
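The allele and haplotype bookkeeping described in the Methods can be made concrete with a short sketch (ours; the gene names match the study, but the sequences are placeholders):

```python
# MLST bookkeeping: identical sequences at a locus get the same arbitrary
# allele number; each unique combination of allele numbers across the loci
# defines a haplotype (allelic profile).
def assign_alleles(seqs_at_locus):
    """Map each distinct sequence to a small integer allele ID."""
    ids = {}
    return [ids.setdefault(s, len(ids) + 1) for s in seqs_at_locus]

def haplotypes(per_locus_seqs):
    """per_locus_seqs: dict locus -> list of sequences, one per individual."""
    allele_tables = {locus: assign_alleles(seqs)
                     for locus, seqs in per_locus_seqs.items()}
    loci = sorted(allele_tables)
    n = len(next(iter(per_locus_seqs.values())))
    return [tuple(allele_tables[locus][i] for locus in loci) for i in range(n)]

data = {  # three individuals, toy sequences
    "fbaA": ["AAT", "AAT", "AAC"],
    "ftsK": ["GGG", "GGA", "GGG"],
    "yaeT": ["TTT", "TTT", "TTT"],
}
profiles = haplotypes(data)
print(profiles)                          # [(1, 1, 1), (1, 2, 1), (2, 1, 1)]
print(len(set(profiles)), "haplotypes observed")
```

With 9, 9 and 11 alleles at the three loci, any of 9 × 9 × 11 = 891 profiles could in principle occur, which is the comparison made above.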
Recombination analysis
Using the RDP3 package, recombination events were tested for each gene separately and for the concatenated data set using all the sequences studied (see Figure 2). No recombination events were detected for any of the gene portions analyzed separately, suggesting that there is no intragenic recombination. For the concatenated data set, four of the seven algorithms tested (GENECONV, Bootscan, Maximum Chi Square, and Chimaera) showed two significant recombination events (Table S3 in Additional file 1). Recombination events were detected in individuals B1-47 and B1-42 (ASL genetic group) over the whole region of the ftsK gene (positions 366 to 617 in the concatenated alignment). The parental-like sequences determined for the recombinant B1-42 were VILCU10 (Q2 genetic group, major parent) and B1-45 (ASL genetic group, minor parent), and the parental-like sequences for the recombinant B1-47 were O2-22 (Q3 genetic group, major parent) and B1-34 (ASL genetic group, minor parent). These two recombinant sequences suggest a recombination event between Arsenophonus sequences like those of the Q2 and ASL genetic groups for B1-42, and between those of the Q3 and ASL genetic groups for B1-47.
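For intuition about how such breakpoints are located, here is a minimal sketch of the maximum chi-square idea (Maynard Smith's MaxChi, one of the seven tests listed in the Methods). It is a didactic re-implementation of ours, not RDP3's code, and it omits the permutation-based significance testing that real tools perform; the sequences are toys.

```python
# MaxChi sketch: slide a breakpoint along the informative sites and find the
# position that maximizes the 2x2 chi-square of "matches parent A vs. B"
# against "left vs. right of breakpoint".
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def max_chi_breakpoint(child, parent_a, parent_b):
    # informative sites: positions where the two putative parents differ
    sites = [i for i in range(len(child)) if parent_a[i] != parent_b[i]]
    matches_a = [child[i] == parent_a[i] for i in sites]
    best_chi, best_site = 0.0, None
    for k in range(1, len(sites)):
        la = sum(matches_a[:k]); lb = k - la                # left of breakpoint
        ra = sum(matches_a[k:]); rb = len(sites) - k - ra   # right of breakpoint
        chi = chi2_2x2(la, lb, ra, rb)
        if chi > best_chi:
            best_chi, best_site = chi, sites[k]
    return best_site, best_chi

# child looks like parent A on the left and parent B on the right:
pa = "AAAAAAAACCCCCCCC"
pb = "GGGGGGGGTTTTTTTT"
ch = "AAAAAAAATTTTTTTT"
print(max_chi_breakpoint(ch, pa, pb))    # breakpoint at position 8, chi2 = 16
```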
Phylogenetic inference of relationships
All tree topologies (each gene separately and the combined analysis) were the same with both ML and Bayesian analyses, and we therefore present trees with both bootstrap statistics and Bayesian posterior probabilities (Figures 2, 3; Figure S2 in Additional file 1).

Phylogenetic analysis of Arsenophonus from Aleyrodidae
The phylogenetic trees obtained for each of the three loci were congruent except for the two recombinants (B1-42 and B1-47). Thus, we conducted analyses using the 907-bp concatenated fbaA, ftsK and yaeT sequences. The concatenated tree (Figure 3) revealed the existence of two highly supported clades composed of six groups and one singleton (the Arsenophonus found in B. afer, genetically distant from B. tabaci; Figure S1 in Additional file 1). The first clade was composed of Q2, Ms, Trialeurodes and some ASL individuals. The second clade was composed of Q3, ASL and AnSL individuals. Interestingly, ASL individuals sampled from the same location and host plants (Burkina Faso, Bobo/Kuinima, tomato, marrow; Table 1) were found in both Arsenophonus clades, and included the recombinants as well. The six phylogenetic groups of Arsenophonus correlated strongly with the B. tabaci genetic groups defined on the basis of the mitochondrial COI, and with the two other Aleyrodidae species. Indeed, four groups were composed exclusively of individuals belonging to the same genetic group, respectively Ms, ASL, Q3 and Q2. The two other groups included either two distinct COI groups of B. tabaci (ASL and AnSL) or individuals from two different host species: B. tabaci (Ms genetic group individuals from Madagascar, Tanzania and Reunion) and T. vaporariorum (Tables 3, 4). Comparative analysis of the genetic divergence of these groups at the three loci (Tables 3, 4) revealed that the group composed of ASL and AnSL individuals is the most polymorphic (π = 0.0068), while the Q2 group is highly homogeneous despite several sampling origins (Table 1). Overall, DNA polymorphism was rather low, with an average of the group π means of 0.002.

Phylogenetic relatedness of Arsenophonus strains from other insect species
The Arsenophonus isolates observed in our B. tabaci samples proved to be phylogenetically very close to the Arsenophonus strains found in other insect species (Figure 3). One clade, composed of T. vaporariorum, B. afer and the B. tabaci groups Ms, Q2 and some individuals belonging to ASL, fell into the Aphis sp. and Triatoma sp. Arsenophonus clade described by Duron et al. [17]. The other clade comprised mainly Arsenophonus infecting Hymenoptera (Nasonia vitripennis, Pachycrepoideus vindemmiae, Muscidifurax uniraptor) and the dipteran Protocalliphora azurea.

Discussion
In this paper we report on a survey of the Arsenophonus bacterial symbiont in whitefly species, and in particular in B. tabaci. The data revealed considerable within-genus diversity at this fine host taxonomic level. Previous studies conducted in several arthropod species have found Arsenophonus to be one of the richest and most widespread symbiotic bacteria in arthropods [9,15]. However, those studies were performed with 16S rRNA, which is present in multiple copies in the genome of the bacterium [25] and has proven to be a marker that is highly sensitive to methodological artifacts, leading to an overestimation of the diversity [15]. The phylogenetic analyses performed on concatenated sequences of three Arsenophonus genes from whiteflies identified two well-resolved clades corresponding to the two clades obtained in the MLST study performed by Duron et al. on a larger insect species scale [17]. One clade was composed of Arsenophonus lineages from three B. tabaci genetic groups (Ms, ASL, Q2), T. vaporariorum and B. afer, and strains found in other Hemiptera. The other clade, initially clustering Arsenophonus strains found in Hymenoptera and Diptera, also contained whitefly symbionts of the AnSL, ASL and Q3 genetic groups of the B. tabaci species complex. This clade thus combines insect hosts from phylogenetically distant taxa. The lineages of Arsenophonus from this clade were most likely acquired by whiteflies more recently through lateral transfers from other insect species. The genetic groups of B. tabaci represented in this clade all originated from Africa (AnSL, ASL and Q3), which could be explained by horizontal transmission events among groups of B. tabaci after a first interspecific transfer of Arsenophonus from another insect genus. There have been many reports of interspecific horizontal transfers of facultative symbiotic bacteria, suggesting that this phenomenon is frequent in arthropods and probably represents the most common process in the establishment of new symbioses [8]. For example, extensive horizontal transmissions of the reproductive manipulator Wolbachia have occurred between insect species [66]. However, horizontal transfers of Arsenophonus had until recently been poorly documented. Nevertheless, a bacterium called Candidatus Phlomobacter fragariae, which is a pathogen of strawberry plants, is phylogenetically close to Arsenophonus strains associated with some Hemiptera (cixiids) and more distantly related to psyllid and delphacid secondary endosymbionts [20,67], showing probable evidence of horizontal transfer between plants and insects. Recently, Duron et al. [17] demonstrated, by phylogenetic analysis and experimental studies, the existence of such horizontal transmission of Arsenophonus strains among different wasp species through multi-parasitism. Here we provide indirect phylogenetic evidence of horizontal transmission of Arsenophonus among distantly related species that do not have clear intimate ecological contact (via predation or parasitism, for instance) and thus have fewer opportunities for horizontal transfers. This could be explained by the particular features of Arsenophonus, most notably its broad spectrum of host species (many insect taxa but also plants) and its ability to grow outside the host [68].
On a lower taxonomic scale, within the whitefly species, 19 haplotypes were identified among the 152 concatenated sequences of Arsenophonus obtained in this study. They formed six phylogenetic groups and one singleton corresponding to the Arsenophonus strain found in the host species B. afer. These groups did not cluster individuals according to host plant or sampling site, and four of them were congruent with the B. tabaci genetic groups. Among the two other phylogenetic groups, one clustered B. tabaci individuals that belonged to two strongly divergent genetic groups, ASL and AnSL, which are considered two different species [29] and which were not collected on the same host plant or in the same country (Burkina Faso and Benin/Togo, respectively). Only some of the ASL individuals belonged to this group, while the others clustered separately. These two ASL-containing groups split between the two clades found in whiteflies, which may reflect two separate acquisition events. The other group of Arsenophonus comprised individuals of two whitefly species, T. vaporariorum and B. tabaci (Ms individuals originating from different countries: Madagascar, Tanzania or Reunion). The Arsenophonus strains found in Ms individuals clustered into two groups, but they fell into the same clade (close to Hemiptera). The haplotype diversity of this group was very low, suggesting a recent transfer between T. vaporariorum and Ms. One hypothesis is that the exchange of Arsenophonus lineages between these two species occurred through their parasitoids, as previously described for Wolbachia in planthoppers [69], since T. vaporariorum and B. tabaci share some parasitoid species (such as Encarsia or Eretmocerus) and are usually found in sympatry. A second pathway of infection could be feeding on a shared host plant, as both species are found in sympatry in the field and share the same host-plant range. Such a method of symbiont acquisition has been hypothesized for Rickettsia in B. tabaci [70].

Within the B. tabaci species complex, we found, for the first time for Arsenophonus, intergenic recombination events in two individuals belonging to the ASL genetic group. The parental-like sequences came from Q2, Q3 and ASL individuals. Although unexpected for intracellular bacteria, homologous recombination has been described in some endosymbiotic bacteria [26,27]. For example, Wolbachia shows extensive recombination within and across lineages, resulting in chimeric genomes [27]; Darby et al. [25] also found evidence of genetic transfer from Wolbachia symbionts, and of phage exchange with other gammaproteobacterial symbionts, suggesting that Arsenophonus is not a strictly clonal bacterium, in agreement with the present study. These recombination events may have important implications for the bacteria, notably in terms of phenotypic effects and the capacity to adapt to new hosts, and thus for the bacteria-host association [8], and might prevent the debilitating effects of obligate intracellularity (e.g., Muller's ratchet [71]). In the Wolbachia genome, both intergenic and intragenic recombination occur; in Arsenophonus we detected only intergenic recombination events between ftsK and the two other genes. Surprisingly, we detected indels inducing STOP codons in this gene. These indels, found in all individuals of the Q2 genetic group sampled in Israel, France, Spain and Reunion, disable the end of the ftsK portion sequenced in this study.
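How a single-base deletion introduces a premature STOP codon is easy to see in a toy example (ours; the sequence below is invented and unrelated to the real ftsK data):

```python
# A 1-bp deletion shifts the reading frame downstream of the deletion,
# which can bring a STOP triplet (TAA/TAG/TGA) into frame and truncate
# the encoded protein.
STOPS = {"TAA", "TAG", "TGA"}

def orf_codons(dna):
    """Return codons read in frame until (and including) the first STOP."""
    codons = []
    for i in range(0, len(dna) - 2, 3):
        c = dna[i:i+3]
        codons.append(c)
        if c in STOPS:
            break
    return codons

wild   = "ATGGCAGAAGGTCTGAAA"          # ATG GCA GAA GGT CTG AAA: no STOP
mutant = wild[:4] + wild[5:]           # delete one base after the start codon
print(orf_codons(wild))                # full-length reading frame
print(orf_codons(mutant))              # frameshift brings a TGA into frame
```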
In bacteria, ftsK is part of an operon of 10 genes necessary for cell division [72]. However, a recent study has demonstrated that, in Escherichia coli, overexpression of one of the 10 genes of this operon (ftsN) is able to rescue cells in which ftsK has been deleted [73]. This gene, ftsN, is also present in the Arsenophonus genome [GenBank: CBA75818.1]. These data suggest that ftsK may not be suitable for an MLST approach and that other conserved genes should be targeted instead. Future studies should focus on obtaining extensive data on the specificity of Arsenophonus-Q2 interactions. It would be interesting to sample more Q2 individuals infected with Arsenophonus to determine the prevalence of this STOP codon in natural populations and its consequences for the bacteria.

Conclusions
In this study, we found that the diversity of Arsenophonus strains in B. tabaci corresponds to the diversity observed on a larger scale in insect species. It would be interesting, in further studies, to extend the sampling to more host species in order to get an accurate idea of the diversity of Arsenophonus lineages. However, a complete understanding of the Arsenophonus phylogeny would require more molecular markers. This could be achieved through the use of other housekeeping genes for the MLST approach, or of insertion sequences and mobile elements, which is now possible since the genome of Arsenophonus has been completely sequenced. We found intergenic recombination using only three genes, suggesting that such events could be frequent in the Arsenophonus genome. Understanding the genomic features of Arsenophonus is crucial for further research on the evolution and infection dynamics of these bacteria, and on their role in the host phenotype and adaptation. Given such effects on host physiology and phenotype, these bacteria could potentially be exploited in efforts to manipulate pest species such as B. tabaci.

Additional material
Additional file 1:
Figure S1. Partial mitochondrial COI gene phylogeny of the Aleyrodidae individuals used in this study. The tree was constructed using a Bayesian analysis; node supports are Bayesian posterior probabilities.
Figure S2. Arsenophonus phylogeny using maximum-likelihood (ML) and Bayesian analyses based on sequences of the three genes fbaA (A), ftsK (B) and yaeT (C). Different evolution models were used to reconstruct the phylogeny for each gene [fbaA (HKY), ftsK (GTR), yaeT (HKY+I)]. Bootstrap values are shown at the nodes for the ML analysis and the second number represents the Bayesian posterior probabilities.
Table S1. Analysis of molecular variance computed by the method of Excoffier et al. [64] on samples of Arsenophonus from several Aleyrodidae species. Groups were denominated according to their hosts, i.e. Bemisia tabaci (ASL, AnSL, Q2, Q3, Ms), Bemisia afer and Trialeurodes vaporariorum. Each species (group) was separated into populations corresponding to the sampling locations. *p < 0.05.
Table S2. Haplotypes of the three sequenced genes fbaA (A), ftsK (B) and yaeT (C) recovered across all 152 samples of Aleyrodidae collected in this study. Only polymorphic positions are shown, and these are numbered with reference to the consensus sequence. Dots represent identity with respect to the reference. The frequency indicates the number of times the haplotype was found in the total sample. *non-synonymous mutations.
Table S3. Recombination in Arsenophonus. Details of the Arsenophonus recombination events detected in this study, including parental-like sequences and p-values for the various recombination-detection tests, using RDP3 [55].
Supernovae as Probes of Extra Dimensions

Since the dawn of the new millennium, there has been a revived interest in the concept of extra dimensions. In this scenario, all the Standard Model matter and gauge fields are confined to four dimensions and only gravity can escape to the higher dimensions of the universe. This idea can be tested using table-top experiments, collider experiments, and astrophysical or cosmological observations. The main astrophysical constraints come from the cooling rates of supernovae, neutron stars, red giants and the sun. In this article, we consider the energy-loss mechanism of SN1987A and study the constraints it places on the number and size of extra dimensions and on the higher-dimensional Planck scale.

INTRODUCTION
It has been recently noted that the scale of quantum gravity, $M_{\rm Pl} \approx 10^{19}$ GeV, can be brought down to a few TeV in a certain class of extra-dimensional models [1]. The electroweak scale $M_{EW}$, the only experimentally verified scale of Standard Model (SM) interactions in four dimensions, lies within the TeV range, which allows tolerable quantum corrections. Therefore, the assumption that (4+n)-dimensional gravity becomes strong at the TeV scale, while the standard gauge interactions remain confined to the four-dimensional spacetime, does not conflict with today's data from low-energy gravitational experiments [2]. Such a notion of TeV-scale gravity solves the hierarchy problem between $M_{EW}$ and $M_{\rm Pl}$ without relying on supersymmetry or technicolour. According to this model, the observed weakness of gravity at long distances is due to the presence of $n$ new spatial dimensions that are large compared to the electroweak scale. This can be inferred from the relation between the Planck scale $M_D$ of the $D = 4+n$ dimensional theory and that of the four-dimensional theory, $M_{\rm Pl}$, which is given by
$$M_{\rm Pl}^2 \sim M_D^{\,n+2}\, R^{\,n},$$
where $R$ is the size of the extra dimensions. Putting $M_D \sim 1$ TeV one finds
$$R \sim M_D^{-1}\left(\frac{M_{\rm Pl}}{M_D}\right)^{2/n}.$$
For $n = 1$, $R \sim 10^{13}$ cm, which is obviously excluded since it would modify Newtonian gravity at solar-system distances. For $n = 2$, we get $R \sim 1$ mm, which is precisely the distance where our present experimental measurement of the gravitational strength stops. Clearly, while the gravitational force has not been directly measured below a millimeter, the success of the SM up to $\sim 100$ GeV implies that the SM fields cannot feel these extra dimensions; that is, they are confined to a '3+1'-brane in the higher-dimensional spacetime called the 'bulk'. In this framework the universe is $4+n$ dimensional with the fundamental Planck scale $M_D$ near the weak scale, with $n \geq 2$ new sub-mm sized dimensions where gravity can freely propagate everywhere in the bulk, while the SM particles are localised on the 3-brane embedded in this bulk. This theory predicts a variety of novel signals which can be tested using table-top experiments, collider experiments, and astrophysical or cosmological observations. It has been pointed out that one of the strongest constraints on this physics comes from SN1987A [3]. Various authors have carried out calculations to place such constraints on $M_D$ and $n$ [4-10]. In this article, we summarise all the results which have appeared in the literature so far.
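As a quick numerical illustration of these estimates (a sketch of ours; the reduced versus ordinary Planck mass and factors of 2π differ between conventions, so only the orders of magnitude are meaningful):

```python
# Order-of-magnitude check of R = (M_Pl^2 / M_D^(n+2))^(1/n), converted from
# natural units (GeV^-1) to cm. Convention-dependent factors shift these
# numbers by about an order of magnitude; only the scaling with n matters.
M_PL = 2.4e18         # reduced Planck mass in GeV (assumed convention)
M_D  = 1.0e3          # fundamental scale, 1 TeV in GeV
HBARC_CM = 1.973e-14  # conversion: 1 GeV^-1 = 1.973e-14 cm

for n in range(1, 4):
    R_gev = (M_PL**2 / M_D**(n + 2)) ** (1.0 / n)   # in GeV^-1
    print(f"n = {n}:  R ~ {R_gev * HBARC_CM:.1e} cm")
# n = 1 gives a solar-system-scale R (excluded); n = 2 gives ~0.1-1 mm,
# consistent with the estimates quoted above up to convention factors.
```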
SUPERNOVA EXPLOSION AND COOLING
Supernovae come in two main observational varieties [11]. Those whose optical spectra exhibit hydrogen lines are classified as Type II, while hydrogen-deficient SNe are designated Type I. Physically, there are two fundamental types of supernovae, based on the mechanism that powers them: the thermonuclear SNe and the core-collapse ones. Only SNe Ia are of the thermonuclear type; the rest are formed by the core collapse of a massive star. The core-collapse supernovae are the class of explosions which mark the evolutionary end of massive stars ($M \geq 8\,M_\odot$). The collapse cannot ignite nuclear fusion because iron is the most tightly bound nucleus. Therefore, the collapse continues until the equation of state stiffens by nucleon degeneracy pressure at about nuclear density ($3\times10^{14}$ g cm$^{-3}$). At this "bounce" a shock wave forms, moving outward and expelling the stellar mantle and envelope. The kinetic energy of the explosion carries about 1% of the liberated gravitational binding energy of about $3\times10^{53}$ erg, with the remaining 99% going into neutrinos. This powerful and detectable neutrino burst is the main astro-particle interest of core-collapse SNe. In the case of SN 1987A, about $10^{53}$ erg of gravitational binding energy was released in a few seconds and the neutrino fluxes were measured by the Kamiokande [12] and IMB [13] collaborations. Numerical neutrino light curves can be compared with the SN 1987A data, where the measured energies are found to be "too low". For example, the numerical simulation in [14] yields the time-integrated values $\langle E_{\nu_e}\rangle \approx 13$ MeV, $\langle E_{\bar\nu_e}\rangle \approx 16$ MeV, and $\langle E_{\nu_x}\rangle \approx 23$ MeV. On the other hand, the data imply $\langle E_{\bar\nu_e}\rangle = 7.5$ MeV at Kamiokande and 11.1 MeV at IMB [15]. Even the 95% confidence range for Kamiokande implies $\langle E_{\bar\nu_e}\rangle < 12$ MeV. Flavor oscillations would increase the expected energies and thus enhance the discrepancy [15]. It has remained unclear whether these and other anomalies of the SN 1987A neutrino signal should be blamed on small-number statistics, point to a serious problem with the SN models or the detectors, or indicate new physics happening in SNe. Since we have these measurements already at our disposal, if we now propose some novel channel through which the core of the supernova can lose energy, the luminosity in this channel should be low enough to preserve the agreement of the neutrino observations with theory. That is, $L_{\rm new\ channel} \leq 10^{53}$ erg s$^{-1}$. This idea was earlier used to put the strongest experimental upper bounds on the axion mass [16]. Here, we will consider the emission of higher-dimensional gravitons from the core. Once these particles are produced, they escape into the extra dimensions, carrying energy away with them. The constraint on the luminosity of this process can be converted into a bound on $M_D$. The argument is very similar to that used to bound the axion-nucleon coupling strength [16,17,18,19]. The "standard model" of supernovae does an exceptionally good job of predicting the duration and shape of the neutrino pulse from SN1987A. Any mechanism which leads to significant energy loss from the core of the supernova immediately after bounce will produce a very different neutrino-pulse shape, and so will destroy this agreement [18]. Raffelt has proposed a simple analytic criterion based on detailed supernova simulations [19]: if any energy-loss mechanism has an emissivity greater than $10^{19}$ erg g$^{-1}$ s$^{-1}$, then it will remove sufficient energy from the explosion to invalidate the current understanding of the neutrino signal of Type-II supernovae.
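A back-of-the-envelope check (ours) shows that Raffelt's emissivity criterion is consistent with the luminosity bound quoted above once multiplied by a typical proto-neutron-star core mass; the ~1 solar mass used below is an assumed round number, not a value taken from this article.

```python
# Raffelt's criterion: a new energy-loss channel with emissivity above
# ~1e19 erg/g/s would spoil the SN 1987A neutrino signal. Multiplying by a
# typical core mass gives the corresponding luminosity bound.
EMISSIVITY_LIMIT = 1e19        # erg g^-1 s^-1
M_SUN = 1.989e33               # g
core_mass = 1.0 * M_SUN        # assumption: ~1 solar-mass core

L_limit = EMISSIVITY_LIMIT * core_mass
print(f"L_limit ~ {L_limit:.1e} erg/s")   # ~2e52 erg/s, of order 1e53
```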
CONSTRAINTS ON EXTRA DIMENSIONS
The most restrictive limits on $M_D$ come from the SN 1987A energy-loss argument. If large extra dimensions exist, the usual four-dimensional graviton is complemented by a tower of Kaluza-Klein (KK) states, corresponding to the new phase space in the bulk. The KK gravitons interact with the strength of ordinary gravitons and thus are not trapped in the SN core. Each KK graviton state couples to the SM fields with the usual gravitational strength, according to [20]
$$\mathcal{L}_{\rm int} = -\frac{\kappa}{2}\sum_{n} h^{\mu\nu,\,n}\, T_{\mu\nu},$$
where $\kappa^2 = 16\pi G_{\rm Pl}$ and the summation is over all KK states labeled by the level $n$. Here $T_{\mu\nu}$ is the energy-momentum tensor of the SM fields residing on the brane and $h_{\mu\nu,\,n}$ the KK state. Since for large $R$ the KK gravitons are very light, they may be copiously produced in high-energy processes. For real emission of KK gravitons from a SM field, the total cross-section can be written as
$$\sigma_{\rm tot} = \kappa^2 \sum_{n} \hat\sigma_n,$$
where the dependence on the gravitational coupling is factored out (the reduced cross-sections $\hat\sigma_n$ are independent of $\kappa$). Because the mass separation of adjacent KK states, $O(1/R)$, is usually much smaller than the typical energies in a physical process, we can approximate the summation by an integration. Since we are concerned with the energy loss to gravitons escaping into the extra dimensions, it is convenient and standard to define the quantities $\dot\varepsilon_{a+b\to c}$, the rates at which energy is lost to gravitons via the process $a+b\to c$, per unit time and per unit mass of the stellar object. In terms of the cross-section $\sigma_{a+b\to c}$, the number densities $n_{a,b}$ of $a$ and $b$, and the mass density $\rho$, $\dot\varepsilon$ is given by
$$\dot\varepsilon_{a+b\to c} = \frac{n_a\, n_b\, \langle \sigma_{a+b\to c}\, v_{\rm rel}\, E_c \rangle}{\rho}, \qquad (5)$$
where the brackets indicate thermal averaging. During the first few seconds after collapse, the core contains neutrons, protons, electrons, neutrinos and thermal photons. There are a number of processes in which KK gravitons can be produced. For the conditions that pertain in the core at this time ($T \sim 30-70$ MeV, $\rho \sim (3-10)\times10^{14}$ g cm$^{-3}$), the relevant processes are
• nucleon-nucleon bremsstrahlung: $NN \to NN\,G_{KK}$,
• graviton production in photon fusion: $\gamma\gamma \to G_{KK}$, and
• the electron-positron annihilation process: $e^- e^+ \to G_{KK}$.
In SNe, the nucleon and photon abundances are comparable (actually nucleons are somewhat more abundant). Nucleon-nucleon bremsstrahlung is the dominant process relevant for SN1987A, where the temperature is comparable to $m_\pi$ and so the strong interaction between the nucleons is unsuppressed. In the following we present the bounds derived by various authors based on this process.

CONCLUSIONS
In summary, it has been found that KK graviton emission from SN1987A puts very strong constraints on models with large extra dimensions in the case $n = 2$. In this case, for a conservative choice of the core parameters, we arrive at a bound $M_D \geq 30$ TeV. We have done similar calculations in the case of plasmons, which will be reported elsewhere. Even though taking into account the various uncertainties encountered in the calculation can weaken this bound, it is unlikely that it can be pushed down to the phenomenologically interesting range of a few TeV. For larger $n$ the bounds weaken considerably, so the scenario remains viable for solving the hierarchy problem and accessible to tests at the LHC.
VOLUME-CONSTRAINT LOCAL ENERGY-MINIMIZING SETS IN A BALL

In this paper, we prove a Poincaré inequality for volume-constraint local energy-minimizing sets, provided their singular sets are of Hausdorff dimension at most $n-3$. With this inequality, we prove that the only volume-constraint local energy-minimizing sets in the Euclidean unit ball whose singular set is closed and of Hausdorff dimension at most $n-3$ are totally geodesic balls or spherical caps intersecting the unit sphere at a constant contact angle; stable sets in a wedge-shaped domain or in a half-space, under the same condition on the singular set, must be spherical. In particular, they are smooth. MSC 2010: 49Q20, 28A75, 53A10, 53C24.

A prototypical capillarity model is the free energy functional
$$\mathcal{F}(E) = \sigma\, P(E;\Omega) - \sigma\beta\, P(E;\partial\Omega) + \int_E g\,\mathrm{d}x,$$
where $P$ is the perimeter functional as in (2.1). Mathematically, we assume $\Omega\subset\mathbb{R}^n$ is a fixed bounded open set with smooth boundary and $E$ is a set of finite perimeter in $\Omega$. Here $\sigma\in\mathbb{R}_+$ denotes the surface tension at the interface between the liquid and the other medium filling $\Omega$, and $\beta\in\mathbb{R}$ is called the relative adhesion coefficient between the fluid and the container, which satisfies $|\beta| < 1$ by Young's law; $g$ is typically assumed to be the gravitational energy density, whose integral is called the potential energy. The free energy functional is usually minimized under a prescribed volume constraint. The existence of global minimizers of the free energy functional under a volume constraint is easily shown by the direct method in the calculus of variations; see for example [Mag12, Theorem 19.5]. For simplicity, we assume throughout this paper that $\sigma = 1$ and $g = 0$; that is, we consider the energy functional
$$F_\beta(E;\Omega) = P(E;\Omega) - \beta\, P(E;\partial\Omega), \qquad |\beta| < 1. \tag{1.1}$$
Provided $\partial E\cap\Omega$ is sufficiently smooth, the boundaries of stationary points $E$ of the corresponding variational problems are capillary hypersurfaces $\partial E\cap\Omega$, namely constant mean curvature hypersurfaces intersecting $\partial\Omega$ at a constant contact angle $\theta = \pi - \arccos\beta$. For the reader who is interested in the physical background of capillary surfaces, we refer to Finn's celebrated monograph [Fin86] for a detailed account.

When $\beta = 0$, $F_\beta(E;\Omega)$ reduces to the perimeter functional $P(E;\Omega)$ of $E$ in $\Omega$. The structure and regularity of local minimizers of $P(E;\Omega)$ under a volume constraint have been studied by Gonzalez-Massari-Tamanini [GMT83] and Grüter [GJ86; Grü87]. It was shown that for any local minimizer $E$, $\partial E\cap\Omega$ is smooth in $\Omega$ except for a singular set of Hausdorff dimension at most $n-8$. Moreover, Sternberg-Zumbrun [SZ98] derived a Poincaré-type inequality for any local minimizer $E$, provided the singular set in $\partial E\cap\Omega$ is of Hausdorff dimension at most $n-3$. This Poincaré-type inequality can be used to prove the connectivity of stable solutions in convex domains, as well as the smoothness of local minimizers in $\mathbb{B}^n$ (the $n$-dimensional Euclidean unit ball) under the condition $|E| < \frac{1}{n-1}\,\mathcal{H}^{n-1}(\overline{E}\cap\mathbb{S}^{n-1})$, where $\mathbb{S}^{n-1}$ denotes the $(n-1)$-dimensional unit sphere. Such a condition has recently been verified by Barbosa [Bar18]. Sternberg-Zumbrun [SZ98] conjectured that all local minimizers in a convex domain are smooth. On the other hand, they constructed a local minimizer with a singularity in a non-convex domain [SZ18]. Recently, Wang-Xia [WX19] classified all local minimizers in $\mathbb{B}^n$ as either totally geodesic balls or spherical caps intersecting $\mathbb{S}^{n-1}$ orthogonally. In particular, they proved the smoothness of local minimizers in $\mathbb{B}^n$. The classification for $n = 3$ was proved by Nunes [Nun17].
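The two ways of writing the contact angle that appear in this paper agree; for the reader's convenience, a one-line check (ours):
$$\theta = \pi - \arccos\beta \iff \cos\theta = \cos(\pi - \arccos\beta) = -\beta \iff \theta = \arccos(-\beta).$$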
Note that for $n = 3$, the local minimizers are a priori known to be smooth by virtue of [GMT83; Grü87]. We remark that, in the smooth setting, that is, provided $\partial E\cap\Omega$ is $C^2$, the Poincaré-type inequality is just the second variation formula for $P(E;\Omega) = \mathcal{H}^{n-1}(\partial E\cap\Omega)$ under the volume constraint, and the stability problem was first investigated by Ros-Vergasta [RV95]. As we have already mentioned, the boundaries of stationary points $E$ are capillary hypersurfaces $\partial E\cap\Omega$. The second variation formula for $F_\beta(E;\Omega)$ under the volume constraint was derived by Ros-Souam [RS97]. Wang-Xia [WX19] classified all smooth stable solutions in $\mathbb{B}^n$ as either totally geodesic balls or spherical caps intersecting $\mathbb{S}^{n-1}$ at the constant contact angle $\theta = \arccos(-\beta)$. In general, as in the case $\beta = 0$, the local minimizers of $F_\beta(E;\Omega)$ under the volume constraint are not known to be smooth. Therefore, one needs to study the regularity of local minimizers, as well as Sternberg-Zumbrun's Poincaré-type inequality, in the case $|\beta| < 1$. The main purpose of this paper is to study the Poincaré-type inequality for local minimizers of $F_\beta(E;\Omega)$ under the volume constraint in the case $|\beta| < 1$.

1.1. Notation. In all that follows, we denote by $\mathcal{H}^k$ the $k$-dimensional Hausdorff measure in $\mathbb{R}^n$, by $\omega_n$ the volume of the $n$-dimensional Euclidean unit ball, by $\overline{E}$ the topological closure of a set $E$, by $\mathrm{Int}(E)$ the topological interior of $E$, by $E^c$ the topological complement of $E$ and by $\partial E$ the topological boundary of $E$. For the constrained isoperimetric problems, the container $\Omega\subset\mathbb{R}^n$ is assumed to be an open domain (possibly unbounded) with smooth boundary $\partial\Omega$. Let $E\subset\Omega$ be a set with finite volume and perimeter, let $M$ denote the closed set $\overline{\partial E\cap\Omega}$, let $\mathrm{reg}M$ denote the $C^2$ part of $M$ in $\Omega$, and let $\mathrm{sing}M = M\setminus\mathrm{reg}M$ denote the singular part of $M$; let $B^+$ denote the set $\mathrm{Int}(\partial E\cap\partial\Omega)$, which is open and smooth; let $\Gamma$ denote the closed set $M\cap\partial B^+$, namely $\Gamma = \partial M\cap\partial\Omega = \partial B^+$. Throughout this paper, $\nu_M$ and $\nu_{B^+}$ denote the outer unit normals of $M$ and $B^+$, respectively, when they exist; $\nu^M_\Gamma$ and $\nu^{B^+}_\Gamma$ denote the exterior unit conormals of $\Gamma$ in $M$ and $B^+$, respectively (see also Figure 1). Let $B_M$ denote the second fundamental form of $\mathrm{reg}M$ in $\mathbb{R}^n$ with respect to $-\nu_M$, and let $B_{\partial\Omega}$ denote the second fundamental form of $\partial\Omega$ with respect to the inward-pointing unit normal $-\nu_{B^+}$; $\|B_M\|^2 = \sum_{i=1}^{n-1}\kappa_i^2$, where $\{\kappa_i\}$ are the principal curvatures of $M$. Taking an orthonormal basis $\{\tau_i\}_{i=1}^{n-1}$ of $TM$, the mean curvature $H$ of $M$ with respect to $B_M$ is given by $H = \sum_{i=1}^{n-1} B_M(\tau_i,\tau_i)$.

We introduce the following admissible family of sets of finite perimeter for the study of fixed-volume variations.

Definition 1.1. For some $T > 0$, a family $\{E_t\}_{t\in(-T,T)}$ of sets of finite perimeter in $\Omega$, with each $E_t$ of finite perimeter and $E_0 = E$, is called admissible if each $E_t$ has the same volume as $E$, namely $|E_t| = |E|$ for every $t\in(-T,T)$.

The stationary and stable sets in our setting are defined in the following sense (Definition 1.2). Here $\nabla^M$ denotes the tangential gradient with respect to $M$ and $\Delta^M$ denotes the tangential Laplacian with respect to $M$.

Remark 1.2. We remark that the interior regularity result of Gonzalez-Massari-Tamanini [GMT83] tells us that $\mathcal{H}^{n-8+\gamma}(\mathrm{sing}(\partial E\cap\Omega)) = 0$ for any $\gamma > 0$. When $\beta = 0$, the boundary regularity result of Grüter and Jost [GJ86; Grü87] tells us that $\mathrm{sing}(\partial E\cap\Omega)$ is closed and $\mathcal{H}^{n-8+\gamma}(\mathrm{sing}(\partial E\cap\Omega)) = 0$ for any $\gamma > 0$. However, it seems unclear whether one has boundary regularity as in Grüter and Jost for general $|\beta| < 1$; see for example [KT17].
We will follow closely the proof of Sternberg-Zumbrun [SZ98], where the case $\beta = 0$ was handled. The key step is the construction of smooth cut-off functions vanishing near $\mathrm{sing}M$. Such cut-off functions were first introduced in [SZ99]. With the help of these smooth cut-off functions, we can define smooth test functions which coincide with $\zeta\nu_M$ outside some neighborhood of $\mathrm{sing}M$. To construct such smooth cut-off functions, we use the stationarity condition to associate $M$ with a general varifold inside $\Omega$ intersecting $\partial\Omega$ at a fixed contact angle, a notion introduced by Kagaya-Tonegawa [KT17]. By virtue of this, we can use monotonicity formulas, both inside $\Omega$ and near the boundary $\partial\Omega$, to obtain a uniform control on the ratio $\frac{\mathcal{H}^{n-1}(B_r(x)\cap M)}{r^{n-1}}$ for every small ball $B_r(x)$ in a collection covering $\mathrm{sing}M$. This uniform bound enables us to construct the smooth cut-off functions that will be needed.

By using the Poincaré-type inequality (1.5), we can obtain the regularity and classification of stable sets in the unit ball $\mathbb{B}^n$ whose singular set is of Hausdorff dimension at most $n-3$.

Theorem 1.2. Let $E\subset\mathbb{B}^n$ be stable as in Definition 1.2. If the singular set $\mathrm{sing}M$ is closed and satisfies $\mathcal{H}^{n-3}(\mathrm{sing}M) = 0$, then $M$ is either a totally geodesic ball or a spherical cap; in particular, $M$ is smooth.

As we mentioned before, when $M$ is $C^2$ this result was proved by Wang-Xia [WX19, Theorem 3.1]. The key ingredient of their proof is a special choice of test function $\varphi\in C^2(M)$ with $\int_M\varphi = 0$, which depends on a conformal Killing vector field in $\mathbb{R}^n$ tangential to $\mathbb{S}^{n-1}$. With the help of the Poincaré-type inequality (1.5), we can use their test function on $\mathrm{reg}M$, and their proof still works in the case when $\mathcal{H}^{n-3}(\mathrm{sing}M) = 0$. Moreover, we construct a volume-preserving perturbation which strictly decreases the perimeter. This contradicts the fact that $E$ is capillary stable, and hence rules out the singularities. On the other hand, we can also prove that stable measure-theoretic capillary hypersurfaces in a wedge-shaped domain must be spherical and smooth, which generalizes the results of the smooth case [LX17; Sou21] to the measure-theoretic setting.

Theorem 1.3. Let $\Omega\subset\mathbb{R}^n$ be a wedge-shaped domain with planar boundaries $P_1,\ldots,P_L$. Let $E\subset\Omega$ be a set with finite volume and perimeter which is stable as in Definition 6.1. If the singular set $\mathrm{sing}M$ is closed with $\mathcal{H}^{n-3}(\mathrm{sing}M) = 0$ and $|k|\le\frac{1}{3}$, then $M$ must be a spherical cap; in particular, $M$ is smooth.

1.3. Organization of the paper. The paper is organized as follows. In Section 2 we recall some background material from geometric measure theory. In Section 3 we prove that the regular part of the boundary of any stationary set is capillary (Proposition 3.1). In Section 4, we first construct smooth cut-off functions (Lemma 4.1), which are crucial for deducing the Poincaré-type inequality, and then we finish the proof of Theorem 1.1. In Section 5, as an application of the Poincaré-type inequality, we give the classification of stable sets in $\mathbb{B}^n$ (Theorem 1.2). In Section 6, as another application, we prove that stable measure-theoretic capillary hypersurfaces in a wedge-shaped domain must be smooth and spherical (Theorem 1.3); in particular, the case when the wedge-shaped domain is a half-space is included (Corollary 6.1).

Preliminaries
Throughout this paper, we denote by $|E|$ the $n$-dimensional Lebesgue measure of a set $E\subset\mathbb{R}^n$, and by $\chi_E$ the indicator function of $E$.
Also, $B_r(x)$ denotes the $n$-dimensional open ball in $\mathbb{R}^n$ with radius $r$ centered at $x$. $\mathbb{B}^n$ and $\mathbb{S}^{n-1}$ denote the $n$-dimensional unit ball and the $(n-1)$-dimensional unit sphere in $\mathbb{R}^n$, respectively. $\operatorname{div}$ and $\nabla$ will always denote the divergence operator and the gradient operator in $\mathbb{R}^n$, respectively.

2.1. Sets of finite perimeter. Let $E\subset\mathbb{R}^n$ be a set of finite perimeter, namely a measurable set such that
$$\sup\left\{\int_E \operatorname{div}X\,\mathrm{d}x \;:\; X\in C_c^1(\mathbb{R}^n;\mathbb{R}^n),\ \sup_{\mathbb{R}^n}|X|\le 1\right\} < \infty.$$
Without loss of generality, we can assume that $\mathrm{spt}\,\mu_E = \partial E$. Indeed, any set of finite perimeter $E$ is equivalent (modulo a set of volume zero) to a Borel set $F$ such that $\mathrm{spt}\,\mu_F = \partial F$ (cf. [Mag12, Proposition 12.19]). With this modification, combined with the fact that $\mathrm{spt}\,\mu_E = \overline{\partial^*E}$, we may work with the topological boundary throughout. We need the following first variation formula for the perimeter.

Theorem 2.2 ([Mag12, Theorem 17.5]). If $A$ is an open set in $\mathbb{R}^n$, $E$ is a set of finite perimeter, and $\{f_t\}_{|t|<\varepsilon}$ is a local variation in $A$, then
$$\frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t=0} P(f_t(E);A) = \int_{\partial^*E} \operatorname{div}^E X\,\mathrm{d}\mathcal{H}^{n-1},$$
where $X$ is the initial velocity of $\{f_t\}_{|t|<\varepsilon}$ and $\operatorname{div}^E X := \operatorname{div}X - \nu_E\cdot\nabla X\,\nu_E$ is a Borel function, called the boundary divergence of $X$ on $E$. Here, $\{f_t\}_{|t|<\varepsilon}$ is called a local variation in $A$ if it is a one-parameter family of diffeomorphisms of $\mathbb{R}^n$ with $f_0 = \mathrm{id}$ and $\{x : f_t(x)\neq x\}\subset\subset A$ for every $|t|<\varepsilon$. (For varifolds below we adopt the convention that the multiplicity function $\theta \equiv 0$ on $\mathbb{R}^n\setminus M$.) For a rectifiable varifold whose generalized mean curvature is bounded, we have the following monotonicity formula (Theorem 2.3).

We will also need Schätzle's strong maximum principle for multiplicity one rectifiable varifolds ([Sch04], [DM19, Theorem 4]), a very deep result which turns out to be powerful for singularity analysis, as when M. G. Delgadino and F. Maggi proved that the only stationary points for the Euclidean isoperimetric problem are finite unions of Euclidean balls; cf. [DM19, Section 3, conclusion of the proof].

Theorem 2.4 (Schätzle's strong maximum principle for rectifiable varifolds). Let $M$ be a normalized locally $\mathcal{H}^{n-1}$-rectifiable set with distributional mean curvature vector $H\in L^p(\mathcal{H}^{n-1}\llcorner M;\mathbb{R}^n)$ for some $p > \max\{2,n\}$. Pick a direction $\nu\in\mathbb{S}^{n-1}$ and a number $h_0\in\mathbb{R}$, and consider a connected open set $U\subset\nu^\perp$ over which $M$ is locally the graph of a function $\varphi$ with values in $(h_0,\infty)$. If another function $\eta\in W^{2,p}(U;(h_0,\infty))$ has its graph lying below the graph of $\varphi$ on $U$ and touching it at a point, namely $\eta\le\varphi$ on $U$ and $\eta(z_0) = \varphi(z_0)$ for some $z_0\in U$, then the mean curvature comparison inequality between $\eta$ and $M$ cannot hold for $\mathcal{H}^{n-1}$-a.e. $z\in U$, unless $\eta = \varphi$ on $U$.

In what follows, we consider only $(n-1)$-dimensional varifolds in $\mathbb{R}^n$. Let $U\subset\mathbb{R}^n$ be open and set $G_{n-1}(U) := U\times G(n,n-1)$. A general $(n-1)$-varifold in $U$ is a Radon measure on $G_{n-1}(U)$. Let $\mathcal{V}_{n-1}(U)$ denote the set of all general $(n-1)$-varifolds in $U$. For $V\in\mathcal{V}_{n-1}(U)$, let $\mu_V$ denote the weight of $V$, which is the Radon measure on $U$ defined by $\mu_V(A) := V(A\times G(n,n-1))$ for Borel sets $A\subset U$. Let $\delta V$ be the first variation of $V$, the linear functional on $C_c^1(U;\mathbb{R}^n)$ given by
$$\delta V(g) := \int_{G_{n-1}(U)} \operatorname{div}_S g(x)\,\mathrm{d}V(x,S).$$
(Although Schätzle's strong maximum principle for multiplicity one rectifiable varifolds is sufficient for our approach, it is still worth mentioning that it indeed works for integer multiplicity rectifiable varifolds with sufficiently summable distributional mean curvature vector. In that case, the distributional mean curvature vector does not play the role of the mean curvature of a single graph, but rather that of multiple sheets which may overlap in a complicated way.) Let $\|\delta V\|$ be the total variation of $\delta V$, when it exists.
Indeed, if V is of locally bounded first variation in U , then by the Riesz representation theorem there exist a Radon measure ||δV || on U and a ||δV ||-measurable vector-valued function ν with |ν| = 1 ||δV ||-a.e. in U such that δV (g) = − ∫ U ν · g d||δV ||. (2.9) Here ||δV || is the total variation measure of the functional δV . When the Radon-Nikodym derivative D µ V ||δV || exists µ V -a.e., (2.9) can be written as δV (g) = − ∫ U H · g dµ V . Here the vector field H(x) := (D µ V ||δV ||)(x) ν(x) is called the generalized mean curvature vector of V . In particular, V is of locally bounded first variation when for each K ⊂⊂ G n−1 (U ) there is a constant C(K) with |δV (g)| ≤ C(K) sup U |g| whenever spt g ⊂ K. In this case, for the multiplicity one varifold |M | the weight measure is µ |M | = H n−1 M , since for any Borel set E ⊂ U ⊂ R n we have µ |M | (E) = |M |(E × G(n, n − 1)) = H n−1 (M ∩ E).

2.4. General varifold with constant contact angle. We briefly introduce the following definitions and propositions for general varifolds with constant contact angle, which were first introduced and studied in a recent work [KT17].

Definition 2.2 ([KT17]). Given θ ∈ (0, π/2], V ∈ V n−1 (Ω) is said to have a fixed contact angle θ with ∂Ω at the boundary of B + if the following two conditions hold. (1) The generalized mean curvature vector H exists; i.e., for any g ∈ C 1 c (Ω; R n ), we have δV (g) = − ∫ Ω H · g dµ V . (2) For any g ∈ C 1 (Ω; R n ) with g · ν ∂Ω = 0 on ∂Ω, the analogous first variation identity holds with an additional boundary term over B + weighted by cos θ (see [KT17] for the precise formula), where div ∂Ω is the tangential divergence on ∂Ω and ν ∂Ω is the outward unit normal vector on the boundary ∂Ω.

Before we state the monotonicity formula, we need some further notation. Let us set N s := {x ∈ R n : dist(x, ∂Ω) < s}. Since Ω is bounded and ∂Ω is C 2 , there exists s 0 ∈ (0, κ −1 ], depending only on ∂Ω, such that for any x ∈ N s 0 there exists a unique closest point ξ(x) ∈ ∂Ω, namely, dist(x, ∂Ω) = |x − ξ(x)|. By means of the projection ξ(x), we define the reflection point x̃ of x with respect to ∂Ω as x̃ := 2ξ(x) − x. For any a ∈ R n , the reflection ball B̃ r (a) of B r (a) with respect to ∂Ω is defined as B̃ r (a) := {x ∈ N s 0 : x̃ ∈ B r (a)}.

Suppose θ ∈ (0, π/2] and V has a fixed contact angle θ with ∂Ω at the boundary of B + in the sense of Definition 2.2. Assume that for some p > n and Γ > 0 we have ||H|| L p (µ V ) ≤ Γ. Then there exists a constant C 1 ≥ 0, depending only on n, such that for any x ∈ N s 0 /6 ∩ Ω the weighted density ratio built from µ V (B r (x)) and µ V (B̃ r (x)) (see [KT17] for the exact expression) is a non-decreasing function of r in (0, s 0 /6); here β = − cos θ. For those θ ∈ (π/2, π], we consider ∂Ω − B + and −β = − cos θ instead of B + and β = − cos θ, respectively, and then the same claim holds.

Characterization of stationary points

In this section, we give a characterization of the stationary points of the free-energy functional (1.1). First we prove a topological lemma.

Lemma 3.1. Let Ω ⊂ R n be a bounded open set with C 2 boundary ∂Ω, and let E ⊂ Ω be a set of finite perimeter. Using the notation of Section 1.1, if the singular set satisfies H n−3 (singM ) = 0, then singB + ⊂ ∂B + ⊂ M and H n−3 (singB + ) = 0.

Proof. By definition, int (B + ) = int (∂E ∩ ∂Ω) is C 2 , and hence singB + ⊂ ∂B + . Notice that ∂B + ⊂ M ; this is a simple topological fact, but we present the proof for the sake of clarity. It rests on an elementary boundary inclusion valid for any two sets C, D, from which the claimed inclusion ∂B + ⊂ M follows. Moreover, by the above argument we see that singB + ⊂ ∂B + ⊂ M ; by the definition of singM we deduce that singB + ⊂ singM , which also implies H n−3 (singB + ) = 0.

Now we show the first part of Theorem 1.1, about the geometry of the measure-theoretic capillary hypersurface M .

Proposition 3.1. Let E be stationary as in Definition 1.2. Then M meets ∂Ω with the constant contact angle determined by β; moreover, regM ∩ Ω is locally an analytic hypersurface with constant mean curvature, relatively open in ∂E ∩ Ω.

Remark 3.1. Since singM is H n−1 -negligible, the integrals ∫ regM ∩Ω · dH n−1 and ∫ M ∩Ω · dH n−1 , as well as ∫ M ∩∂Ω · dH n−2 (x) and ∫ regM ∩∂Ω · dH n−2 , are exactly the same things. Also, by Lemma 3.1 and noticing that Γ := ∂B + ∩ M , we have ∫ B + · dH n−1 = ∫ regB + · dH n−1 and ∫ Γ · dH n−2 = ∫ regB + ∩Γ · dH n−2 . Here we use regB + to denote the C 2 part of B + .

Proof.
We argue as in [SZ98]: first we construct a family of admissible sets as in Definition 1.1, starting from any variation which preserves volume at first order and maps the boundary to the boundary, and then perform some modifications inside Ω such that the volume of E t (defined below (3.5)) is preserved for a short time (−T, T ). The method is fairly well known and, for example, is applied in the proof of Young's law [Mag12, Theorem 19.8]. First, let X ∈ C ∞ c (R n ; R n ) be any vector field satisfying (3.2) and (3.3). By solving the Cauchy problems we obtain a local variation {Ψ t } |t|<T for some small T > 0, having X as its initial velocity. Now we do some modifications inside Ω to obtain a new family of admissible sets {Ẽ t } |t|<T . Fix any x ∈ regM ∩ Ω; by regularity, ∂E can be locally written as the graph of some smooth function u 0 : D → R 1 , where D ⊂ R n−1 is a fixed open set near x. Since X ∈ C ∞ c (R n ; R n ) satisfies (3.4), we can find a much smaller number, still denoted by T , such that not only ∂E but also ∂E t for all t ∈ (−T, T ) can be written as the graph of a smooth function u : D × (−T, T ) → R 1 . Then we modify {∂E t } over D as follows: we choose a smooth function g : D × (−T, T ) → R 1 as in (3.6), with g | ∂D = 0 for any t ∈ (−T, T ); such a smooth function indeed exists. Now, we define a new family of sets {Ẽ t } by replacing the boundary portion of {∂E t } given by {(x ′ , u(x ′ , t)) : x ′ ∈ D } with a new boundary for x ′ ∈ D , denoted by {∂Ẽ t } and given by (3.7), which reveals the fact that the volume of Ẽ t is preserved in t. Thus, {Ẽ t } is admissible as in Definition 1.1.

In the following, using the fact that E is stationary in the sense of Definition 1.2, we will deduce that regM ∩ Ω is of constant mean curvature and that on regM ∩ ∂Ω the constant contact angle condition cos θ = −β holds. Since E t and Ẽ t coincide outside D , and since E is stationary as in Definition 1.2, using the first variation formula of perimeter of E (Theorem 2.2) we obtain (3.9). More precisely, with the help of the first variation formula inside Ω (cf. [SZ98, (2.23)]) and Remark 3.1, we deduce that stationarity, applied to (3.9), yields (3.10) for any X satisfying (3.2), (3.3). For such X, on regM we decompose it into its normal and tangential parts; thus, returning to (3.10), we arrive at (3.13) for any X satisfying (3.2), (3.3). We claim that (3.13) holds for any X 0 ∈ C 2 c (R n ; R n ) satisfying (3.3) (i.e., we remove the restriction (3.2)). Indeed, for any X 0 ∈ C ∞ c (R n ; R n ) satisfying (3.3), there exist s > 0 and S 0 ∈ C ∞ c (Ω; R n ) such that X := S 0 + sX 0 ∈ C ∞ c (R n ; R n ) satisfies (3.2), (3.3). By (3.13), we conclude that (3.14) holds for any X 0 ∈ C ∞ c (R n ; R n ) with X 0 (x) ∈ T x (∂Ω) for any x ∈ ∂Ω. For any X ∈ C ∞ c (R n ; R n ), since X can be written as a tangential part and a normal part with respect to ∂Ω, denoted by X = X T + X ⊥ with X T (x) ∈ T x (∂Ω) for all x ∈ ∂Ω, this shows that (3.14) holds for any X ∈ C ∞ c (R n ; R n ). The conclusion then follows by virtue of the fundamental lemma of the calculus of variations.

Poincaré-type inequality

When M is pointwise C 2 , this result is well known and is obtained by testing the second variation formula via ζν M , where ζ is smooth everywhere on M with ∫ M ζ dH n−1 = 0, and ν M is the outward-pointing unit normal on M .
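For orientation, the quadratic form being tested here has the standard capillary shape, written schematically in [RS97]-type conventions (a sketch, with q the boundary potential from (1.4) and B M the second fundamental form of M ):

\begin{aligned}
&E \text{ stable} \;\Longrightarrow\; Q(f,f)\ge 0 \text{ for all admissible } f \text{ with } \int_{M} f\, d\mathcal{H}^{n-1}=0,\\
&Q(f,f)=\int_{\mathrm{reg}M}\Big(|\nabla^{M} f|^{2}-|B_{M}|^{2}f^{2}\Big)\,d\mathcal{H}^{n-1}
-\int_{\partial M} q\, f^{2}\,d\mathcal{H}^{n-2},
\end{aligned}

where, in the smooth case, one takes f = ζ coming from the normal variation X = ζν M described above.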
For our case, the main difficulty is that near the singularities we are not able to define the normal; however, we still want a smooth vector field defined on the whole of M such that the corresponding variation preserves the volume at first order, so that, with the help of (3.8), we can invoke the stability condition. In order to prove this Poincaré-type inequality, we shall need the following lemma, which was used by Sternberg and Zumbrun in [SZ99] to study local minimizers. This lemma enables us to construct smooth cut-off functions, which play an essential role when we are dealing with singularities.

Lemma 4.1. Suppose singM is closed with H n−3 (singM ) = 0. Then for each ε > 0 there exists a smooth cut-off function ϕ ε vanishing near singM and equal to 1 away from it, satisfying the estimates (4.5)-(4.7) below.

Proof of Lemma 4.1. First, since singM is closed and is a subset of the bounded set Ω, singM is compact. Fix some ε > 0. Since H n−3 (singM ) = 0, we can cover singM with finitely many balls G := {B r i (z i )} N 1 i=1 with Σ N 1 i=1 r i n−3 < ε and r i < s 0 /6, where s 0 /6 is given in Section 2 (r i < s 0 /6 is valid for ε small enough). Set I := {B r i (z i ) ∈ G : z i ∈ Ω \ N s 0 /6 } and B := G \ I. In terms of I, we consider a natural (n − 1)-rectifiable varifold related to M . Set V 1 := v(M, θ), where θ ≡ 1 on M and θ ≡ 0 on R n \ M . For such E, by Proposition 3.1, regM ∩ Ω is of constant mean curvature H, and for any X ∈ C 1 c (Ω; R n ), arguing as in (3.11), we obtain the first variation identity with H(x) = Hν M (x). This implies that V 1 has generalized mean curvature vector H and |H| ≤ |H|. Set r 1 := max{r i : B r i (z i ) ∈ I}. Using the monotonicity formula (2.6) with Λ, R replaced by |H|, R 1 , respectively, for any B r i (z i ) ∈ I and any r ≤ r 1 , we obtain the density bound (4.3). Notice that by the definition of I we have R 1 > s 0 /6. Setting C 3 := e 2|H|diam(Ω) H n−1 (M )/(s 0 /6) n−1 , we have the uniform bound (4.4) for any B r i (z i ) ∈ I. By (4.3) and Σ N 1 i=1 r i n−3 < ε, we obtain the estimates (4.5)-(4.7) for a preliminary Lipschitz cut-off φ̃ ε . Finally, we mollify φ̃ ε to obtain a smooth function ϕ ε which still satisfies estimates of the form (4.5), (4.6) and (4.7) with the constant C replaced by some constant C 2 (which is independent of ε). Since φ̃ ε satisfies (4.4), let S ε and S ε ′ denote the sets on which ϕ ε vanishes and outside of which ϕ ε ≡ 1, respectively. We see that ϕ ε is the desired smooth cut-off function.

In order to derive the second variation formula, we need the following lemma. Ros and Souam ([RS97]) derived this lemma when M is a C 2 hypersurface. Since we consider regM (which is C 2 ), the proof of this lemma is essentially the same as Ros and Souam's result, and hence omitted.

Lemma 4.2 ([RS97], Lemma 4.1). Let E ⊂ Ω be stable as in Definition 1.2 and let Ψ t be any C 2 variation whose initial velocity X := d/dt | t=0 Ψ t satisfies (3.2) and (3.3). Let X t (x) := d/ds | s=t Ψ s (x) denote the velocity of the variation at t; for ease of notation, we use X to represent the initial velocity X t | t=0 . Let ∇ M , ∇ denote the gradients on M and ∂M , respectively, and X 0 (resp. X 1 ) the tangent part of X to M (resp. to ∂M ). Let also S 0 , S 1 , S 2 denote respectively the shape operators of M in R n with respect to −ν M , of ∂M in M with respect to ν M Γ , and of ∂M in ∂B + with respect to ν B + Γ . Let · denote the Euclidean inner product, and let f = X · (−ν M ) be a C 1 function defined on regM . Here we denote by a "prime" the first derivative d/dt | t=0 in the Euclidean space R n . We use the notation of Section 1.1 (see also Figure 1).

Lemma 4.3. For a set E ⊂ Ω which is stable as in Definition 1.2 and for any C 2 variation Ψ t whose initial velocity X := d/dt | t=0 Ψ t satisfies (3.2) and (3.3), we have the second variation formula (4.8), where f = X · ν M is a function defined on regM , q is given by (1.4) and (4.9), and H is the constant mean curvature as in Proposition 3.1.

Proof.
Let ν M t Γ t and ν B + t Γ t denote the outer unit conormals of M t and B + t at Γ t , respectively. For simplicity, we still use Γ as in Section 1.1 to represent Γ t | t=0 . In the following integration argument, we use Remark 3.1. Then at time t, by Theorem 2.2, we obtain the first variation expression (4.10), whose boundary contribution is an integral over Γ t of the conormal terms evaluated at y, with respect to dH n−2 (y). By the area formula, all terms can be pulled back to t = 0. For the first term in (4.10), we use the well-known fact (cf. [Ros93]) on the derivative of the mean curvature along a normal variation. For the second term in (4.10), notice that the corresponding identity follows by using the divergence theorem in the first equality and the area formula in the last equality. Thus we obtain (4.11). For the fifth term, notice that on Γ the contact angle condition of Proposition 3.1 applies. Notice that the last two terms in (4.11) are computed in Lemma 4.2 (4). Combining the computations above, we obtain the second variation formula (4.8), where q is given by (1.4).

Proof of Theorem 1.1. With the help of the smooth cut-off functions, we are able to finish the proof. Fix a small ε > 0 and consider any smooth function ζ : regM → R 1 satisfying ∫ M ζ dH n−1 (x) = 0. First, we define a smooth function ζ̃ : M → R 1 by ζ̃ := ϕ ε · ζ. By virtue of Lemma 4.1 we can control the mean of ζ̃, where {B r i (z i )} and N 1 are given in the proof of Lemma 4.1, and we use the fact that S ε ⊂ ∪ N 1 i=1 B r i (z i ); the last inequality holds because the radii r i are small. Moreover, we extend ζ smoothly to the whole of R n with only the requirement that ∇ζ · ν M = 0 on M \ S ε . Then, take any smooth vector field ν̃ M ∈ C ∞ c (R n ; R n ) extending the unit normal on M \ S ε . Also, take any smooth vector field N ∈ C ∞ c (R n ; R n ) satisfying (1) |N | = 1 in a neighborhood of (M \ S ε ) ∩ ∂Ω, together with the tangency conditions required along ∂Ω. Then let X ∈ C 2 (R n ; R n ) be a vector field built from ζ̃, ν̃ M and N . Notice that such an X exists, since these conditions can both be satisfied by virtue of the fact that M intersects ∂Ω with the constant contact angle θ, where cos θ = −β and β/√(1 − β 2 ) = cot (π − θ); hence the required identity holds on regM ∩ ∂Ω, and X satisfies (3.2), (3.3). Following the same argument as in the proof of Proposition 3.1, we obtain an admissible family of sets {Ẽ t } by a smooth modification through the graph function (denoted by g ε ) inside Ω. By stability of E, a direct computation of (3.8) yields the desired inequality, where we use the definition of g ε in the last step. Here ∇ x ′ , u ε , g ε are defined in the proof of Proposition 3.1, precisely in (3.6) and the paragraph before (3.6). Combining with (4.8), we obtain the stability inequality for f = X · ν M , with q given by (1.4). In terms of f , we have the following observations: ζ̃ N · ν M = 0 on regM , and ζ̃ ν̃ M · ν M = ζ̃ on regM ; hence f = ζ̃ on regM .

Stable measure-theoretic capillary hypersurface in a Euclidean ball

Following the same approach as in [WX19], we prove the rigidity result on the regular part of a stable E ⊂ B n with the help of the Poincaré-type inequality (1.3). Moreover, we will rule out the singularities by stability of E. Throughout Section 5 and Section 6, we use ⟨·, ·⟩ to denote the Euclidean inner product in R n . First, fix a ∈ R n and consider the vector field X a in R n defined by X a := ⟨x, a⟩x − (1/2)(1 + |x| 2 )a; X a is a conformal Killing vector field in R n which is tangent to S n−1 . The following are some properties which we will use in the proof of the rigidity of M ; since we study these properties only on regM , which admits the unit normal ν M pointwise, the proofs of the following properties are exactly the same as in [WX19], and hence omitted.

Proof of Theorem 1.2. By Proposition 3.1, we know that regM intersects ∂B n with a constant angle θ ∈ (0, π) and is of constant mean curvature H. In the following we suppose that H ≥ 0 on regM ∩ B n . Otherwise, if H < 0, we consider B n \ E ⊂ B n with |B n \ E| = (1 − σ) |B n |, which is stable as in Definition 1.2.
The reason that B n \ E is stable is as follows: we consider the functional F −β instead of F β ; then, since E is stable with respect to the functional F β , we see that B n \ E is stable with respect to the functional F −β . Then by Proposition 3.1, M , as the boundary of B n \ E, is of constant contact angle π − θ and constant mean curvature −H ≥ 0. For the C 2 part regM we argue as in [WX19] with some modifications: first we construct a special test function with which to use the stability condition. Precisely, we define a test function pointwise along regM in the following manner: for each a ∈ R n , on regM , we define the function given in (5.1). By (5.1) and the fact that ∫ regM · dH n−1 = ∫ M · dH n−1 , this test function has vanishing integral over M . Using the divergence theorem and following the same argument as in the proof of [WX19, Theorem 3.1], we arrive at an inequality in which x T is the tangential part of x with respect to regM , and in the last step we use the fact that (n − 1)||B M || 2 − H 2 is non-negative by virtue of Cauchy's inequality. By virtue of Cauchy's inequality, equality holds if and only if regM is umbilical in B n . It follows that, away from singM , locally M is flat or spherical.

Next we prove that singM is empty and hence M is either a totally geodesic ball or a spherical cap. Indeed, since regM is of constant mean curvature H (assume H > 0; for the case H = 0 the proof is essentially the same), if singM ≠ ∅, then M is the union of finitely many spherical caps with the same radius r = 1/H. Here the reason that the spherical caps are finitely many is that they have the same radius and P (M ; B n ) < ∞. In this case, fix a singular point x ∈ M ; locally near x, we can find a smooth volume-preserving perturbation which strictly decreases the perimeter of E. Precisely, we can remove a portion with small volume v near the regular part of a spherical cap, which is the intersection of two balls with the same radius, and add a portion with a flat boundary and volume v near the singular point. This perturbation preserves the volume and strictly decreases the perimeter. Indeed, the removal near the regular part does not change the perimeter, while the perimeter near the singular point strictly decreases after the perturbation; see Figure 2.

In light of the arguments for smooth stable capillary hypersurfaces [LX17; Sou21], we extend the rigidity results to stable measure-theoretic capillary hypersurfaces. Let us first set things up. Let Ω be a smooth, unbounded domain in R n (n ≥ 3) such that ∂Ω consists of a finite family of hyperplanes P 1 , . . . , P L , for some integer L ≥ 1. Let n 1 , . . . , n L be the exterior unit normals to P 1 , . . . , P L in Ω. We call such an Ω a wedge-shaped domain when {n 1 , . . . , n L } are linearly independent. Up to translation, we can assume that the origin O ∈ R n is in the intersection ∩ L i=1 P i . Let E ⊆ Ω be a set with finite volume and perimeter, and consider the free energy functional F L (E; Ω) given by F L (E; Ω) := P (E; Ω) + Σ L i=1 β i P (E; P i ), where for each i, β i ∈ (−1, 1) is a prescribed constant. In this situation, the definition of stationary sets of F L under volume constraint in Ω is formulated in the obvious way.

Proposition 6.1. Let E ⊂ Ω be stationary for F L under volume constraint. Then the first-order conditions of Proposition 3.1 hold with β i on each P i ; moreover, ∂ * E ∩ Ω is locally an analytic hypersurface with constant mean curvature, relatively open in ∂E ∩ Ω.

Proof. When M is C 2 , this result has been derived in [LX17, Section 2]. For the non-smooth case, the proof is essentially the same as that of Proposition 3.1 and hence omitted.

Remark 6.1. It is possible that E intersects ∂Ω in a trivial way, namely, H n−1 (∂ * E ∩ P i ) = 0 for each i ∈ {1, . . . , L}.
In this situation, the free energy functional reduces to the perimeter functional P (E; Ω) and E is a stationary point of the isoperimetric problem, which is well studied by M.G. Delgadino and F. Maggi in [DM19]. Thus in the following we assume that E intersects ∂Ω in a non-trivial way, i.e., there exists some integer 0 < K ≤ L such that P (E; P i ) ≠ 0 for each i ≤ K. In this situation, by a minor modification of the proof of Theorem 1.1, we conclude the following.

Proposition 6.2. Let Ω ⊂ R n be a wedge-shaped domain and let E ⊂ Ω be a set with finite volume and perimeter which is stable for F L under volume constraint, as in Definition 6.1. Using the notation of Proposition 6.1, suppose the singular set singM is closed and satisfies H n−3 (singM ) = 0. Then, for any smooth function ζ : regM → R 1 with ∫ M ζ(x) dH n−1 (x) = 0, (6.3) the Poincaré-type inequality (6.6) holds. Here ∇ M denotes the tangential gradient with respect to M and ∆ M denotes the tangential Laplacian with respect to M . (Footnote 11: Notice that the boundaries of Ω are planar, thus B P i ≡ 0.)

In order to use the Poincaré-type inequality (6.6), we need a test function which is smooth on regM and has vanishing integral. To this end, we first set up a lemma which gives a good characterization of the capillary geometry. Thanks to this lemma, we can derive a Minkowski-type formula, which gives us the desired test function with which to test the stability. Although these formulas are well known and widely used in the smooth setting (see for example [AS16; LX17; Sou21]), we still need the smooth cut-off functions to carry out the proof.

Lemma 6.1. Let E ⊂ Ω be a set with finite volume and perimeter with singM closed and H n−2 (singM ) = 0. Let ν M Γ denote the exterior unit conormal to ∂M in M , which is well-defined for H n−2 -a.e. x ∈ ∂M . Then the structural identity (6.7) holds. (Footnote 12: The structural lemma holds for any container Ω (possibly unbounded) with C 2 boundary.)

Proof. Let a be any constant vector field in R n and consider the vector field Y on M given in (6.8), which is a well-defined C 2 vector field on regM ; here x T = x − ⟨x, ν M ⟩ν M is the orthogonal projection of x onto the approximate tangent space T x M , and a T is understood similarly. Notice also that |Y | is bounded on reg(M ) by some constant C 7 , since a is a constant vector field and M is bounded. For each ε > 0, we have ϕ ε , S ε , S ε ′ from Lemma 4.1. Let Ỹ : R n → R n be a C 2 vector field satisfying |Ỹ | ≤ C 7 in a neighborhood of M \ S ε and Ỹ = Y on M \ S ε . Then let Y ε ∈ C 2 (R n ; R n ) be the vector field Y ε = ϕ ε Ỹ . (6.9) Thus we find the corresponding integral identity, where in the second equality we use (6.10), (6.11), and the last equality follows from the definition of Y ε . By Lemma 4.1, (4.6), (4.7) and Remark 3.1, sending ε → 0, we find the limiting identity. Since a is taken to be an arbitrary constant vector field in R n , we have (6.7).

Exploiting Lemma 6.1, we can obtain the Minkowski-type formula of Proposition 6.3, namely (6.14).

Proof. For each ε > 0, we have ϕ ε , S ε , S ε ′ from Lemma 4.1. Let Ỹ : R n → R n be a smooth vector field as above, and then let Y ε ∈ C 2 (R n ; R n ) be the vector field Y ε = ϕ ε Ỹ . Integrating div M (Y ε ) over M \ S ε and using the divergence theorem, we obtain the corresponding identity. By Lemma 4.1, (4.6) and (4.7), sending ε → 0, we find the limiting identity. On the other hand, by Proposition 6.1 we have the contact angle relation on each regM ∩ ∂P i , where the constants c i are such that ⟨k, n i ⟩ = − cos θ i ; this implies the boundary identity (6.21). Taking the inner product of (6.7) with k and exploiting (6.19) and (6.21), we obtain (6.22), where in the last equality we use the normalization O ∈ ∩ L i=1 P i .
Combining (6.17) and (6.22), we obtain (6.23). On the other hand, consider the smooth vector field X(x) = x; integrating div Γ i X on each Γ i , the divergence theorem gives (6.24), where H Γ i denotes the mean curvature of Γ i in M ∩ P i with respect to −ν B + i Γ i ; in the last equality, we decompose X into its tangent part and normal part with respect to Γ i , and we use the fact that Γ i = ∂M ∩ P i = ∂B + i is closed. Rearranging (6.24), we obtain (6.14). This completes the proof.

Remark 6.2. In the proof of Proposition 6.3, we use the fact that H M ≠ 0, which can be proved by Schätzle's strong maximum principle (Theorem 2.4). Indeed, we know from Proposition 6.1 that H M is a constant on M . If H M = 0, then v(M, 1) is a stationary multiplicity one (n − 1)-rectifiable varifold in R n . Without loss of generality, we may assume that P (E; P 1 ) ≠ 0; up to a rotation, we may assume that P 1 = {x ∈ R n : x n = 0}. Pick ν = e n , h 0 = 0 and U = {x ∈ R n : x n = 0}. Consider the constant function η(x 1 , . . . , x n−1 ) = 0; it is easy to see that the graph of η is exactly the supporting hyperplane P 1 . From the definition of E, we see that M lies above U and touches U at the points of ∂ * E ∩ ∂Ω. Since H M = 0 and the mean curvature of a hyperplane is also 0, we deduce from Theorem 2.4 that M must coincide with ∂E ∩ P 1 , which leads to a contradiction.

Thanks to the Minkowski-type formula, we find a test function ζ = (n − 1) − H⟨x, ν M ⟩ + (n − 1)⟨k, ν M ⟩, which is smooth on regM by virtue of Proposition 6.1 and has vanishing integral, i.e., ∫ regM ζ = 0, by virtue of Proposition 6.3. Here k is given in (6.20). Using ζ to test the stability (1.5), and noticing that each P i is planar (Footnote 14: hence the second fundamental form B P i = 0), we obtain (6.25).

We need the following formulas and computations in the proof of Theorem 1.3, which are well known to experts (see for example [LX17; WX19; Sou21]). It is worth mentioning that the proofs do not depend on the divergence theorem; thus we can readily see that these formulas hold on regM . We are ready to give the proof of Theorem 1.3. The proof follows the ones in [LX17; Sou21]; since we will use the divergence theorem in the non-smooth setting, for the sake of clarity we present the proof. By virtue of Lemma 6.2 (1), (6.25) reduces to (6.30). To proceed, we explore the function Φ = ⟨(H/(n − 1))x − ν M , (H/(n − 1))x − ν M ⟩, which is smooth on regM by virtue of Proposition 6.1. On the other hand, integrating the smooth vector field ∆ M x over M , using the divergence theorem and Proposition 5.2 (4), we obtain (6.44). By (6.43), (6.44) and the linear independence of the n i , we see that for each i the boundary integral over Γ i in question equals (n − 1) sin θ i H n−2 (Γ i ); this is (6.45). In particular, this shows that (6.40) = 0, and hence (6.31). Thus (6.30) becomes (6.46), and we are now in a position to conclude the proof. Notice that since |k| ≤ 1, we have 1 + ⟨k, ν M ⟩ ≥ 0; it follows that ∫ M ((n − 1)||B M || 2 − H 2 ) dH n−1 ≤ 0. (6.47) On the other hand, the Schwarz inequality implies that on regM , (n − 1)||B M || 2 − H 2 ≥ 0, so the integrand in (6.47) vanishes identically. By virtue of Proposition 6.1, regM is then a locally analytic umbilical hypersurface in R n , which means it must be locally spherical. Moreover, if singM ≠ ∅, we can find a volume-preserving perturbation as in the proof of Theorem 1.2 (see also Figure 2) which strictly decreases the perimeter (and hence the free energy functional). This contradicts the fact that E is stable as in Definition 6.1.

Remark 6.3. As considered by R.
Souam in [Sou21], when each θ i is close enough to π/2, it must be that |k| ≤ 1, due to the continuity of k. In particular, this generalizes the results for free boundary capillary hypersurfaces in a wedge of R. López [Lóp14, Theorem 1] to the measure-theoretic setting. As argued in Remark 4.1, for the free boundary measure-theoretic capillary hypersurface, the condition that singM is closed can be removed, since it is an immediate consequence of [GJ86, Theorem 4.13].

Remark 6.4. We remark that Schätzle's strong maximum principle (Theorem 2.4) provides an alternative approach to the singularity analysis; see [DM19, Theorem 1, conclusion of the proof] for a detailed argument. Using this method, one can prove that M must be a union of finitely many spherical caps and complete spheres (possibly mutually tangent); then, using stability and the perimeter-decreasing perturbation, one can conclude that these spherical caps and complete spheres cannot be mutually tangent. It is worth mentioning that the maximum principle approach does not depend on the stability of E; it works successfully for stationary points.

When L = K = 1, the wedge-shaped domain Ω is indeed a half space of R n ; in this regard, Theorem 1.3 becomes:

Corollary 6.1. Let Ω ⊂ R n be a half space. Let E ⊂ Ω be a set with finite volume and perimeter, which is stable as in Definition 1.2. If the singular set singM is closed with H n−3 (singM ) = 0, then M must be a spherical cap; in particular, M is smooth.

Proof. The proof is essentially the same as that of Theorem 1.3; it suffices to show that |k| ≤ 1. Indeed, up to translation and rotation, we may assume that Ω is the upper half space {x ∈ R n : x n ≥ 0}. In this case, n 1 = −e n , and it follows that k in (6.20) is just k = cos θ e n , so we definitely have |k| ≤ 1. This completes the proof.
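As a quick worked check of the corollary's hypothesis, using only the facts stated in the proof above:

\begin{aligned}
&\Omega=\{x\in\mathbb{R}^{n} : x_{n}\ge 0\}\ \Longrightarrow\ L=1,\quad n_{1}=-e_{n},\\
&\text{so by (6.20)}\quad k=\cos\theta\,e_{n}
\quad\Longrightarrow\quad
|k|=|\cos\theta|\le 1\qquad\text{for every contact angle }\theta\in(0,\pi).
\end{aligned}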
Structural determination of the complement inhibitory domain of Borrelia burgdorferi BBK32 provides insight into classical pathway complement evasion by Lyme disease spirochetes

The carboxy-terminal domain of the BBK32 protein from Borrelia burgdorferi sensu stricto, termed BBK32-C, binds and inhibits the initiating serine protease of the human classical complement pathway, C1r. In this study we investigated the function of BBK32 orthologues of the Lyme-associated Borrelia burgdorferi sensu lato complex, designated BAD16 from B. afzelii strain PGau and BGD19 from B. garinii strain IP90. Our data show that B. afzelii BAD16-C exhibits BBK32-C-like activities in all assays tested, including high-affinity binding to purified C1r protease and C1 complex, and potent inhibition of the classical complement pathway. Recombinant B. garinii BGD19-C also bound C1 and C1r with high affinity yet exhibited significantly reduced in vitro complement inhibitory activities relative to BBK32-C or BAD16-C. Interestingly, natively produced BGD19 weakly recognized C1r relative to BBK32 and BAD16 and, unlike these proteins, BGD19 did not confer significant protection from serum killing. Site-directed mutagenesis was performed to convert BBK32-C to resemble BGD19-C at three residue positions that are identical between BBK32 and BAD16 but different in BGD19. The resulting chimeric protein was designated BXK32-C and this BBK32-C variant mimicked the properties observed for BGD19-C. To query the disparate complement inhibitory activities of BBK32 orthologues, the crystal structure of BBK32-C was solved to 1.7Å limiting resolution. BBK32-C adopts an anti-parallel four-helix bundle fold with a fifth alpha-helix protruding from the helical core. The structure revealed that the three residues targeted in the BXK32-C chimera are surface-exposed, further supporting their potential relevance in C1r binding and inhibition. Additional binding assays showed that BBK32-C only recognized C1r fragments containing the serine protease domain. The structure-function studies reported here improve our understanding of how BBK32 recognizes and inhibits C1r and provide new insight into complement evasion mechanisms of Lyme-associated spirochetes of the B. burgdorferi sensu lato complex.

Introduction

Spirochetes belonging to the Borrelia burgdorferi sensu lato complex are the causative agent of Lyme borreliosis and include B. burgdorferi sensu stricto, B. garinii, and B. afzelii. B. burgdorferi sensu stricto (referred to as B. burgdorferi hereafter) causes 300,000 cases of Lyme disease in the United States each year [1], while B. garinii and B. afzelii are the most common etiological agents of Lyme disease in Europe and Asia [2]. B. burgdorferi sensu lato are the leading arthropod-borne infectious agents in the Northern hemisphere and are capable of hematogenous dissemination whereby a wide range of remote host tissues are colonized. To survive and persist in immunocompetent hosts, Lyme-associated spirochetes must evade host immune defenses including the evolutionarily ancient proteolytic cascade of innate immunity known as the complement system. Complement is a group of nearly three dozen proteins that combine to coordinate a tightly controlled set of proteolytic reactions directed at target cell surfaces [3,4]. Complement activation is initiated by soluble pattern recognition proteins which are capable of discerning foreign molecular surfaces.
The specific mode of recognition defines the three conventional pathways of complement, known as the classical pathway, lectin pathway, and alternative pathway. For instance, the classical pathway is activated upon binding of the complement protein C1q to antigen-bound antibodies (i.e. immune complexes). Likewise, the lectin pathway is activated following the binding of mannose-binding lectin or ficolins to foreign carbohydrate structures, while the alternative pathway is constitutively activated at low levels via a mechanism referred to as 'tick-over' [3,4]. Independent of the molecular initiating event, all three pathways proceed by activation of a series of specialized serine proteases that converge on the central molecule of complement, C3. C3 is cleaved by enzymatic complexes called C3 convertases, which results in complement amplification at the target surface, activation of the terminal pathway of complement, and induction of downstream effector functions. Complement activation ultimately results in the opsonization of targeted surfaces, recruitment of professional phagocytes, and direct lysis of susceptible membranes [3,4]. Host cells express several proteins which function to regulate complement and thus are typically protected from unintended targeting by the cascade. In contrast, pathogens that traffic in fluids where complement is present at high concentrations have necessarily evolved mechanisms to evade complement detection and activation. For example, many human pathogens secrete membrane-associated proteins which bind endogenous host regulators of complement [5]. In this regard, a prominent bacterial target is the dominant negative regulator of the alternative pathway, factor H. Indeed, B. burgdorferi sensu lato species themselves are known to encode up to five distinct proteins (CspA, CspZ, ErpP, ErpC, and ErpA; note that the latter three are also referred to as OspE-related proteins, Erps) that bind to and recruit factor H to the bacterial surface, thereby hijacking its complement protective activities [6][7][8][9][10][11][12][13]. In addition to factor H-binding proteins, distinct borrelial proteins are known that specifically block the formation of the membrane attack complex [14][15][16], recruit host plasminogen and degrade complement components [17][18][19][20], or bind directly to other complement components [21,22]. In total, nearly a dozen B. burgdorferi sensu lato proteins have now been identified that exhibit specific complement inhibitory activities [23]. Within the small arsenal of borrelial complement inhibitors, the surface-expressed lipoprotein B. burgdorferi BBK32 remains the lone identified and characterized classical pathway-specific inhibitor [22]. The classical pathway is controlled by the action of the first component of complement, C1, which is a multi-protein complex composed of C1q bound to a heterotetramer of two serine proteases, C1r and C1s (S1 Fig). C1 thereby functions as both the pattern recognition molecule and initiating zymogen of the classical pathway. The C1 complex circulates in blood in an inactive form until C1q is recruited to the surface via recognition of receptors such as immune complexes. C1q-binding promotes autocatalytic activation of the C1r protease within the C1 complex, which then cleaves C1s to form fully activated C1. At this step the C1s enzyme cleaves complement components C2 and C4, and the classical pathway intersects the lectin pathway at the formation of classical/lectin pathway C3 convertases (C4b2a).
C3 convertases then convert complement C3 into its activated forms, which in turn drive downstream reactions of the cascade. Previously we have shown that the C-terminal globular domain of B. burgdorferi BBK32 (termed BBK32-C) blocks classical pathway activation by binding with high affinity to the initiating serine protease C1r, preventing both its autocatalytic and C1s cleavage activities within the C1 complex [22] (S1 Fig). In this study we investigated the activity of BBK32 orthologues encoded by the prevalent B. burgdorferi sensu lato species B. garinii and B. afzelii to better understand the structural and mechanistic basis for BBK32-mediated C1r inhibition. Herein we report the first crystal structure of the anti-complement domain of BBK32, which reveals a novel helical bundle fold. A chimeric mutagenesis strategy provided additional insight into the reduced in vitro activities of the B. garinii BBK32 orthologue BGD19 relative to the B. afzelii BBK32 orthologue BAD16 and B. burgdorferi BBK32 itself. Biochemical studies were then used to map the BBK32 binding site on human C1r and demonstrated that the serine protease (SP) domain was required for BBK32/C1r complex formation. The results of this study significantly improve our understanding of the unique classical pathway inhibition properties of BBK32 and suggest that BBK32-mediated complement evasion activity is shared across major species of Lyme disease-associated spirochetes.

B. garinii strains generally lack the ability to efficiently recruit factor H via factor H-binding proteins, unlike human serum-resistant B. burgdorferi and B. afzelii [37]. However, prior work showed that B. garinii strains are also human serum sensitive in a C1q-dependent manner, indicating a potential role for the classical complement pathway in the human serum susceptibility phenotype of B. garinii [36]. In this regard, the relative classical pathway-specific complement inhibitory activities of the BBK32 orthologues B. garinii BGD19 and B. afzelii BAD16 are unknown. To address this, we produced recombinant proteins of each orthologue corresponding to the C-terminal complement inhibitory region of BBK32 (i.e. BBK32-C) (Fig 1B), and evaluated their ability to bind to human C1 complex and isolated human C1r using surface plasmon resonance (SPR). These experiments show that BAD16-C and BGD19-C each bind to human C1 and C1r with high affinity (Fig 2, S1 Table).

Fig 2. The C-terminal domains of BGD19 and BAD16 bind with high affinity to human C1 and C1r. The ability of the C-terminal regions of BGD19 (BGD19-C) and BAD16 (BAD16-C) to bind human C1 or C1r was assessed by SPR. BBK32-C was used as a control. For C1, a two-fold dilution series (0.1 to 150 nM) was injected over immobilized BBK32-C (panel A), BGD19-C (panel B), and BAD16-C (panel C). The raw sensorgrams are drawn as black lines and the results of kinetic fitting analysis using Biacore T200 Evaluation Software are drawn as red lines. For C1r, a single-cycle analysis was performed using a five-fold dilution series (1.6 to 1000 nM) of BBK32-C (panel D), BGD19-C (panel E), and BAD16-C (panel F). For clarity, the dissociation phase of each sensorgram is labeled with the C1r injection concentration. Sensorgrams from a representative injection series are shown; all experiments were conducted in triplicate with the dissociation constants (K D ) reported as the mean ± S.D.
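To connect the sensorgrams above to the reported K D values, the following is a minimal sketch of the 1:1 (Langmuir) interaction model that underlies SPR kinetic fits of this kind. The rate constants, R max , and injection times below are hypothetical placeholders, and this is not the authors' analysis pipeline (kinetic fitting was performed in Biacore T200 Evaluation Software); it only illustrates how K D = k off /k on emerges from the association and dissociation phases.

import numpy as np

def langmuir_response(t, conc, kon, koff, rmax, t_diss):
    """Response units (RU) over time for a 1:1 Langmuir binding model.

    Association (t <= t_diss): dR/dt = kon*C*(Rmax - R) - koff*R, which
    integrates to R(t) = Req*(1 - exp(-kobs*t)) with kobs = kon*C + koff
    and Req = Rmax*C/(C + KD), where KD = koff/kon.
    Dissociation (t > t_diss): R(t) = R(t_diss)*exp(-koff*(t - t_diss)).
    """
    kd = koff / kon
    req = rmax * conc / (conc + kd)
    kobs = kon * conc + koff
    r_assoc = req * (1.0 - np.exp(-kobs * np.minimum(t, t_diss)))
    r_end = req * (1.0 - np.exp(-kobs * t_diss))
    return np.where(t <= t_diss, r_assoc, r_end * np.exp(-koff * (t - t_diss)))

# Hypothetical parameters in the low-nanomolar range reported in S1 Table:
kon, koff = 2.0e5, 2.0e-4               # units: 1/(M*s) and 1/s -> KD = 1 nM
t = np.linspace(0.0, 600.0, 601)        # seconds; dissociation begins at 300 s
for conc in [1.6e-9, 8.0e-9, 40e-9, 200e-9, 1000e-9]:  # five-fold series as in Fig 2D-F
    ru = langmuir_response(t, conc, kon, koff, rmax=100.0, t_diss=300.0)
    print(f"C = {conc:.1e} M, peak response = {ru.max():.1f} RU")
print(f"KD = koff/kon = {koff/kon:.1e} M")

In single-cycle kinetics (as in Fig 2D-F) the analyte concentrations are injected sequentially without surface regeneration, but the same rate equations apply piecewise to each injection.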
B. garinii BGD19 has significantly reduced complement inhibitory activity relative to B. burgdorferi BBK32 or B. afzelii BAD16

Previously we have shown that the high-affinity interaction formed between BBK32-C and human C1r correlates with blockade of classical pathway activation in assays where human serum is used as the source of complement [22]. In a classical pathway-specific ELISA assay that monitors the deposition of the complement activation products C3b or membrane attack complex (MAC), we found that BGD19-C (IC 50, C3b deposition = 32 nM; IC 50, MAC deposition = 35 nM) exhibited a two- to three-fold decrease in potency relative to BBK32-C (IC 50, C3b deposition = 14 nM; IC 50, MAC deposition = 13 nM), whereas BAD16-C exhibited no greater than a two-fold increase in potency (IC 50, C3b deposition = 9.6 nM; IC 50, MAC deposition = 5.9 nM) (Fig 3A and 3B), based on non-overlapping 95% confidence intervals (S2 Table). In a classical pathway hemolysis assay we found a larger difference in relative potency, as concentrations of B. garinii BGD19 up to 1 μM failed to exhibit saturable inhibition of classical pathway activation (estimated IC 50 ~7,600 nM), unlike BBK32 (IC 50 = 170 nM) or BAD16 (IC 50 = 55 nM), which both exhibit dose-dependent protection of sheep red blood cells from lysis by human serum (Fig 3C, S2 Table). Sequence alignment of BBK32, BAD16, and BGD19 reveals that there are just three non-conservatively substituted amino acids shared between BBK32-C and BAD16-C that are different in BGD19-C (see highlighted residues and arrows in Fig 1B). In B. burgdorferi BBK32 and B. afzelii BAD16 these residues are Glu-308, Gln-319, and Glu-324, whereas in B. garinii BGD19 these positions are changed to Lys-308, Lys-319, and Gln-324. To investigate the potential role of these residues in mediating C1r inhibition, we produced a chimeric BBK32-C protein, termed BXK32-C, where each residue was changed to the B. garinii BGD19 residue (i.e. BBK32-E308K-Q319K-E324Q). Interestingly, the inhibitory activity of the chimeric BXK32-C shifts from that of BBK32 to that of BGD19 (Fig 3A-3C). These data indicate that residues encoded at one or more of these positions in BGD19 likely contribute to its observed reduction in human classical complement pathway inhibitory activity.

Next, we investigated the activity of BGD19 and BAD16 when expressed as full-length lipoproteins on the spirochetal surface by using the poorly adherent, non-infectious strain B314 of B. burgdorferi. A shuttle vector containing each orthologous bbk32 gene controlled by its native promoter was constructed, transformed into strain B314, and designated as B314/pCD100 (B. burgdorferi bbk32) [22], B314/pBGD19 (B. garinii bgd19), or B314/pBAD16 (B. afzelii bad16) (S2A and S2B Fig). BBK32, BGD19 and BAD16 were expressed heterologously in B. burgdorferi strain B314 and surface localization was assessed using the proteinase K accessibility assay. Here, BAD16 and BGD19 were sensitive to protease digestion under conditions in which the subsurface endoflagellar protein, FlaB, was not affected, indicating that the BBK32 orthologues were surface exposed and that the borrelial cells were structurally intact, respectively (S3 Fig). Additionally, we assessed the transcript levels of the bbk32 orthologues by qRT-PCR analysis and found that all orthologues were expressed at levels that were not significantly different (S2C Fig). Next, Far Western blot experiments were performed using biotinylated human C1 (Fig 4A and 4B) or human C1r (Fig 4C and 4D) as probes, with signals normalized to the levels of B. burgdorferi BBK32 produced.
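As a small illustration of the normalization just described (a sketch only; the band intensities below are hypothetical placeholders, not the measured values underlying Fig 4), each C1 or C1r signal is first divided by the FlaB loading-control signal from the same lane and then expressed relative to the BBK32 lane:

# Hypothetical densitometry values (arbitrary units) for a C1r Far Western blot.
c1r_signal = {"Vector": 0.05, "BGD19": 0.12, "BBK32": 1.30, "BAD16": 1.10}
flab_signal = {"Vector": 0.95, "BGD19": 1.05, "BBK32": 1.00, "BAD16": 0.90}

# Correct each lane for loading, then normalize to the BBK32 lane (BBK32 = 1.00).
corrected = {k: c1r_signal[k] / flab_signal[k] for k in c1r_signal}
relative = {k: v / corrected["BBK32"] for k, v in corrected.items()}
for strain, value in relative.items():
    print(f"{strain}: {value:.2f} relative to BBK32")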
These data suggest that BGD19 and BAD16 bind with similar affinity to human C1 (Fig 4B). However, when C1r is used as the probe, BGD19 binds only weakly, not significantly differently from a vector-only control (denoted as "Vector"), whereas BBK32 and BAD16 each bind to C1r (Fig 4D).

Fig 4. Binding of C1 and C1r to BBK32 orthologues via Far Western blot analysis. A-D) BGD19, BBK32, and BAD16 were expressed as lipoproteins on the surface of B. burgdorferi B314. Whole cell protein lysates were separated on an SDS-PAGE gel and probed for binding to human C1 (panel A) or C1r (panel C) using a Far Western blot overlay. Samples tested include strain B314/pBBE22luc (vector only control; labeled as "Vector"), B314/pBGD19 (labeled as BGD19), B314/pCD100 (labeled as BBK32), B314/pBAD16 (labeled as BAD16), and B314 alone (labeled as null). FlaB was used as a loading control to normalize variation between C1 and C1r binding by BBK32, BAD16, and BGD19 in panels A and C. Densitometry was performed from independent blots to quantify the observed signals as depicted in panels A and C. Panels B and D report the signal detected for C1 and C1r binding to the samples indicated on the x axis, respectively. All values were normalized relative to BBK32 binding to either C1 or C1r. P values between samples are indicated above the bars.

To assess the binding of C1r to native, surface-exposed BAD16 and BGD19 relative to BBK32, we incubated whole, intact borrelial strain B314 cells with immobilized C1r. In agreement with the whole cell lysate assays (Fig 4), BGD19 showed significantly reduced C1r binding relative to BBK32 and BAD16 (Fig 5A). Next, we assessed the ability of heterologously expressed BGD19 and BAD16 to confer serum resistance to strain B314. While expression of BBK32 and BAD16 at the B314 surface protected spirochetes from classical pathway-mediated complement killing, BGD19 did not significantly reduce killing relative to the vector-only control (Fig 5B). Interestingly, BAD16 exhibited greater resistance to serum relative to BBK32 (Fig 5B).

Fig 5. Native BAD16 and BGD19 exhibit differential binding to C1 and C1r and confer serum resistance when expressed on the surface of spirochetes. A) Natively expressed bad16 (labeled as BAD16) and bgd19 (labeled as BGD19) were tested for their ability to bind immobilized C1r (blue circles) or BSA (yellow squares) relative to B314 containing BBK32 (labeled as BBK32) or B314 with vector DNA alone (labeled as Vector). Binding was done in triplicate for independent samples and the average and standard deviation are shown. B) The ability of each protein to confer resistance to normal human serum (NHS) was assessed in the serum-sensitive B. burgdorferi strain B314. Sensitivity was scored as a ratio of the affected cells relative to the total cells viewed. Cells affected were categorized as those that lacked motility, exhibited membrane damage, or manifested overt cell lysis (blue circles). Heat-inactivated NHS was used as a control and is shown on the right (yellow squares). P values between samples are indicated above the bars. ns, not significant.

The crystal structure of B. burgdorferi BBK32-C determined to 1.7Å resolution

Circular dichroism studies indicate that the secondary structure of BBK32-C is predominantly helical in solution [38]. Beyond this, little is known about the structure of the complement inhibitory domain of BBK32. To address this, we initiated crystallographic studies with the goal of determining a high-resolution structure of BBK32-C. Attempts to crystallize the original BBK32-C construct (i.e. residues 206-354) were unsuccessful. Preliminary limited proteolysis experiments suggested that flexible residues were present at the N- and C-termini of BBK32-C. A number of constructs were designed to truncate BBK32, ultimately yielding a C-terminal truncation mutant lacking six residues (i.e. BBK32 (206-348) ) which produced protein crystals (see Methods and Materials). Importantly, the BBK32 (206-348) construct retained full C1-binding, C1r-binding and complement inhibitory activities (S4A-S4C Fig). BBK32 (206-348) crystals grew in space group P6 5 with one molecule per asymmetric unit and diffracted to 1.7 Å resolution (Table 1). BBK32 consists of five α-helices and adopts a helical bundle fold (Fig 6A, S4D Fig). Starting from the N-terminus, helix α1 (residues Ser-211 to Met-245) interacts with helices α3 (Lys-256 to Ala-286), α4 (Ile-293 to Lys-317), and α5 (Leu-323 to Ile-347) to form an anti-parallel four-helix bundle. Helix α2 (Asn-251 to Ala-261) does not participate in the core bundle motif but rather forms hydrophobic interactions with helix α1 and sits at an angle of ~120˚ relative to helix 3. Adaptive Poisson-Boltzmann Solver software was used to calculate the electrostatic potential of the BBK32-C molecular surface [39] (Fig 6B). The protein surface is characterized by several contiguous positively charged regions, with a larger negatively charged surface being formed where the C-terminal and N-terminal helices meet. Overall, the structure of BBK32-C is best characterized as a positively charged anti-parallel four-helix bundle where a fifth helix, helix α2, protrudes away from the helical core.

Fig 6. A) The structure of BBK32 (206-348) solved at 1.7Å resolution (PDB: 6N1L). A ribbon diagram representation using a spectrum-based coloration scheme of BBK32, where the N-terminal region of the protein is colored in blue and the C-terminus in red. The structure is shown turned 180˚ about the y-axis. BBK32 (206-348) is characterized by a helical bundle fold where helices 1, 3, 4, and 5 form a core four-helix bundle motif and helix 2 extends away from the core at ~120˚ relative to helix 3. B) BBK32 is drawn in a surface representation in the same orientations as depicted in panel A. The Adaptive Poisson-Boltzmann Solver as implemented in PyMOL was used to calculate the electrostatic potential of the molecular surface. The color scheme represents a gradient of electrostatic potential where regions of negative (red) and positive (blue) charge are contoured at ± 2 k b T/e, where k b is Boltzmann's constant = 1.3806 × 10 −23 J K −1 , T is temperature in K, and e is the charge of an electron = 1.6022 × 10 −19 C.

The relative activities of the BXK32-C mutant suggest that one or more of the non-conservative amino acid substitutions between BBK32/BAD16 and BGD19 (i.e. E308, Q319, and E324) contribute to C1r inhibitory activity (Fig 3). In the BBK32-C structure, E308 is a surface-exposed residue located midway through α4, while Q319 and E324 are in the short loop connecting α4 and α5. Q319 and E324 are also surface exposed and together present a contiguous surface region (Fig 7A and 7B). Homology models of BAD16-C, BGD19-C, and BXK32-C were constructed using SWISS-MODEL and templated on the BBK32-C crystal structure (S5 Fig, PDB: 6N1L). These models predict that residues at the 308, 319, and 324 positions (BBK32 numbering) [40,41] are also surface exposed in each BBK32 orthologue protein (S5 Fig). As each of these residues is exposed to solvent, they are potentially positioned to interact with C1r, supporting the functional data implicating their involvement in BBK32-mediated complement inhibitory activity (Fig 3).

Fig 7. Residues in the BBK32 to BGD19 chimera are solvent exposed. A) A chimeric BBK32-C protein encoding three charged residues which are identical between BBK32 and BAD16, but different in BGD19-C, exhibits BGD19-like activity (see sequence alignment in Fig 1B and Fig 3). The structure of BBK32-C is oriented to highlight each of these residues (colored orange, stick representation). B) A molecular surface representation of BBK32-C in the same orientation as shown in panel A indicates all three residues altered in the BXK32-C chimera construct are surface exposed in the BBK32-C crystal structure.

The C1r-SP domain is required for high affinity interaction with BBK32-C

The crystal structure of BBK32 (206-348) and the identification of surface residues which affect C1r inhibition presented above provide insight into the structural determinants for BBK32-mediated C1r recognition. However, the C1r domains involved in mediating BBK32/C1r complex formation are unknown. C1r is a 92 kDa chymotrypsin-like serine protease with a modular architecture consisting of two complement C1r/C1s, Uegf, Bmp1 (CUB) domains, an epidermal growth factor (EGF) domain, two complement control protein (CCP) domains, and a serine protease (SP) domain arranged in sequential fashion (CUB1-EGF-CUB2-CCP1-CCP2-SP) (Fig 8). We noted that purified full-length C1r has been reported to undergo a series of autoproteolytic cleavages when incubated for prolonged periods [42,43]. Following overnight incubation of purified C1r at 37˚C, we injected the autolytic C1r digestion reaction onto a size exclusion chromatography column (Fig 8A, black line). The chromatogram displayed three well-resolved peaks (labeled 1, 2, and 5). Next, purified BBK32 (206-348) was injected alone, resulting in a single peak (labeled 4) (Fig 8A, red line). Finally, a 2-fold molar excess of BBK32 (206-348) was mixed with the autoproteolytic C1r digestion reaction and injected onto the column (Fig 8A, blue dashed line). While peaks 1, 2, 4, and 5 remain, a new peak also appears (labeled peak 3). Analysis of each peak in the BBK32 (206-348) /C1r digestion injection was performed using non-reducing SDS-PAGE (Fig 8B). A single band which migrates on the gel at an apparent molecular mass of 55 kDa was found in peak 2. This same 55 kDa band coeluted with BBK32 (206-348) in peak 3. Mass spectrometry analysis identified the 55 kDa bands observed in peaks 2 and 3 as identical to one another and corresponding to residues Leu-300 to Asp-705 of C1r. This region maps to the C-terminal portion of C1r and includes the C-terminal six residues of the CUB2 domain and the entirety of the CCP1-CCP2-SP domains (hereafter referred to as C1r CCP1-CCP2-SP-auto ). The C1r CCP1-CCP2-SP-auto fragment identified here closely matches the previously reported autolytic C1r fragment known as γ-B [42,43]. As expected, the lower band observed on the gel, which migrates at ~17 kDa in peak 3, was confirmed as BBK32 (206-348) by mass spectrometry analysis. Co-migration of BBK32 (206-348) with the C1r CCP1-CCP2-SP-auto proteolytic fragment suggested that BBK32 binds to the C-terminal region of C1r. To confirm this, we purified C1r CCP1-CCP2-SP-auto and used SPR to assay its affinity for BBK32. Indeed, this autolytic C1r fragment displays similar affinity for BBK32 (Fig 8C, K D = 1.5 nM) to that previously measured for full-length C1r (Fig 2D). To further refine the mapping of the BBK32 binding site on C1r, we produced recombinant C1r domain truncations corresponding to the C-terminus of C1r. While C1r-CCP1 and C1r-CCP1-CCP2 failed to interact with BBK32 in SPR binding experiments, a construct containing only the CCP2-SP domains bound with similar affinity to BBK32 (Fig 8D, K D = 3.9 nM) as was found for C1, C1r, and C1r CCP1-CCP2-SP-auto . Efforts to produce a recombinant protein corresponding to the C1r SP domain only were unsuccessful. However, collectively the data presented above strongly suggest that the SP domain of C1r is required for high affinity interaction with BBK32. A model of full-length C1r was constructed from the available C1r domain truncation crystal structures [44][45][46], and the proposed BBK32 binding site is shown (Fig 8E).

Fig 8. A-B) Intrinsic proteolysis of C1r reaches completion upon overnight incubation at 37˚C, resulting in the release of a fragment corresponding to the C-terminal domains CCP1-CCP2-SP. The auto-catalyzed digestion reaction of C1r was injected onto a size exclusion column. The C1r CCP1-CCP2-SP-auto proteolytic fragment elutes in peak 2. When BBK32 (206-348) is added to the C1r digestion reaction at 2-fold molar excess (relative to full-length C1r), a new peak appeared, peak 3, which contains both BBK32 (206-348) and the C1r CCP1-CCP2-SP-auto proteolytic fragment, as judged by mass spectrometry analysis. C) To confirm that BBK32 recognizes the C-terminal C1r CCP1-CCP2-SP domains, SPR binding studies were performed. Purified C1r CCP1-CCP2-SP-auto exhibited high affinity interaction with BBK32 (206-348) (K D = 1.5 nM). D) Recombinant refolded His-C1r-CCP2-SP retains high affinity interaction (K D = 3.9 nM), whereas recombinant His-CCP1 or His-CCP1-CCP2 alone fail to interact with BBK32 (206-348). E) A model of full-length C1r is shown which is built from the available crystal structures of C1r domain truncations (PDBs: 4LOT, 6F39, and 1GPZ). The location of the C1r CCP1-CCP2-SP-auto proteolytic fragment is indicated. Together these data indicate that BBK32 targets the C-terminal region of the C1r protease and requires the SP domain for high-affinity interaction.

Discussion

Human complement is an evolutionarily ancient arm of the innate immune system that was first described nearly 120 years ago. Historically, complement has been regarded as a 'first line of defense' against invading pathogens. Indeed, if a pathogen is unable to evade detection by one of the three complement pathways, initiating serine proteases begin converting zymogen complement proteins into activated fragments, resulting in distinct but synergistic host defense mechanisms that include: i) opsonization (C1q, C3b, C4b); ii) phagocyte recruitment (C3a, C5a); iii) priming of the adaptive immune system (C1q, C3b, C4b, C3a, C5a); and iv) lysis (membrane attack complex). In this context it is no surprise that microorganisms that encounter blood, and other complement-containing fluids, have evolved mechanisms to evade complement recognition and activation.
Lyme disease spirochetes of the Borrelia burgdorferi sensu lato complex are among a group of human pathogens that have evolved several mechanistically distinct extracellular complement inhibitor proteins [5,7,22]. For example, B. burgdorferi sensu lato species employ a well-known pathogenic anti-complement strategy via expression of proteins which recruit the endogenous host regulator of the alternative pathway of complement, factor H, to the bacterial surface [5][6][7][8][9][10][11][12]47,48]. Lyme-associated Borrelia also produce a distinct set of surface proteins capable of binding host plasminogen and specifically degrading complement components [17][18][19][20], proteins that prevent complement activation at the level of C4 cleavage [21,49], and those that interfere with the formation of the membrane attack complex [14][15][16]. While complement has traditionally been viewed as a sentinel against microbial intruders, it is no longer considered an isolated innate immune response. Complement is integral to homeostatic maintenance and has direct roles in the regulation of both T cell and B cell immunity [50][51][52]. Interestingly, it has been hypothesized that the dominant function of some microbial complement inhibitors may be to interfere with complement-dependent shaping of adaptive immune responses, rather than protection from complement-mediated lysis [53]. In our recent infectivity studies, we noted that when mice are genetically deficient in the classical complement pathway pattern recognition molecule C1q, they exhibit altered T cell and B cell responses to B. burgdorferi infection compared to wild type mice [54]. This is of potential relevance to BBK32-mediated classical pathway evasion, as it has been shown that abrogated deposition of C4 on follicular dendritic cells underlies diminished antigen presentation and alters the kinetics of germinal center formation during Lyme borreliosis [55,56]. Given that C4 is one of two native substrates for C1, and that BBK32 directly inhibits C1 activation, it is possible that BBK32 may contribute to the impairment of germinal center formation and therefore the quality of the antibody response to B. burgdorferi infections. Future studies will be important to elucidate the in vivo role of BBK32-mediated classical pathway complement inhibition in the subversion of T-dependent B cell responses by Lyme disease-causing spirochetes. In regard to human serum susceptibility, B. burgdorferi and B. afzelii have been classified as resistant whereas B. garinii strains have often been classified as sensitive [36]. Differences in the susceptibility of Borrelia burgdorferi sensu lato species to complement may reflect an in vivo selection process that contributes to the pathogen's ability to colonize different reservoirs [57][58][59]. For instance, a small rodent, Peromyscus leucopus, is the natural reservoir for B. burgdorferi in the Midwest and northeastern United States [60], whereas in Europe rodents and migratory birds are the principal reservoirs for B. afzelii and B. garinii, respectively [61]. For this study, we selected human serum-resistant strains of B. burgdorferi (strain B31) and B. afzelii (strain PGau), as well as a human serum-sensitive strain of B. garinii (strain IP90), to investigate the relative complement inhibitory activities of BBK32 orthologues. Quantitative affinity measurements using SPR with purified proteins indicated that recombinant BGD19-C binds C1r with similar affinity to that of BBK32-C and BAD16-C (Fig 2).
Surprisingly, BGD19-C showed slightly weaker inhibition in complement assays involving artificial surfaces (Fig 3A and 3B) and conferred significantly reduced protection from complement-mediated lysis to the naïve membranes of sheep red blood cells (Fig 3C). These results suggest that binding of recombinant C-terminal BBK32 orthologues to C1r is necessary but not sufficient for potent C1r inhibitory activity. However, we note that when full-length BGD19 was expressed on the surface of a surrogate B. burgdorferi strain, it bound C1r weakly in qualitative binding assays relative to either full-length BBK32 or BAD16. It is unclear why the C1r binding observed in these assays differs from the similar affinities measured with recombinant proteins using SPR. Nonetheless, consistent with the weaker complement inhibitory properties of recombinant BGD19-C, surface-expressed full-length BGD19 failed to protect B. burgdorferi B314 from complement-mediated killing (Figs 4 and 5). Collectively, our results show that B. garinii BGD19 has a significantly reduced capacity to inhibit in vitro classical pathway complement activation compared to B. burgdorferi BBK32 or B. afzelii BAD16. Our data support the notion that the relatively increased susceptibility of B. garinii to human serum killing is related to the reduced activity of borrelial complement evasion proteins, as has been previously proposed for B. garinii factor H-binding proteins [37]. However, the in vitro serum-sensitivity classification scheme is recognized as being dependent on reagents, experimental conditions, and, importantly, the strains being studied [23]. Like B. burgdorferi and B. afzelii, B. garinii also causes human infections, and thus some B. garinii strains can overcome complement-mediated clearance in vivo. Borrelia burgdorferi sensu lato spirochetes, including B. garinii, likely have multiple layers of functional redundancy that make up their complement evasion repertoire in vivo. Thus, while the reduced activity of a single complement inhibitor like the BBK32 orthologue BGD19 is expected to contribute to the relative ability of B. garinii to survive complement-mediated attack in vivo, it must be considered in the context of a functionally redundant borrelial complement evasion system. Ultimately, it will be the collective activities of these inhibitors, rather than dominance by a single complement evasion molecule, that drive the in vivo susceptibility of Borrelia burgdorferi sensu lato spirochetes to complement. The crystal structure of B. burgdorferi BBK32-C presented here has provided the first insight into the structural determinants required for high-affinity C1r interaction and inhibition by borrelial BBK32-like proteins. We probed the BBK32 molecular surface using a chimeric BBK32-C construct encoding three non-conserved surface residues originating from BGD19 (i.e., BXK32-C) and found that these substitutions alone shift the inhibitory activity of BBK32 towards that of BGD19. BBK32-like sequences are unique to the Borrelia genus and include three families of proteins found in relapsing fever-associated spirochetes, termed FbpA, FbpB, and FbpC [62]. The sequence conservation of Fbp proteins relative to BBK32 is much lower than that of the B. burgdorferi sensu lato orthologues and ranges between 25% and 60% identity at the amino acid level.
Our data indicate that subtle changes in amino acid sequence can result in significant differences in the ability of B. burgdorferi sensu lato BBK32 orthologues to block human complement, and thus it will be important to determine whether BBK32-like classical pathway complement inhibition is restricted to Lyme disease spirochetes or is common to all pathogenic Borrelia. B. burgdorferi BBK32, as well as its orthologues, have unique and apparently disparate functions within the vertebrate host. The disordered N-terminal half acts as an adhesin by binding to glycosaminoglycans (GAGs) and the extracellular matrix protein fibronectin, while the ordered carboxy-terminal half acts as a C1r-binding complement inhibitor (Fig 1A). While discrete BBK32-binding sites have been identified for each of these host ligands, it remains unknown whether BBK32 interacts simultaneously with C1r and fibronectin or GAGs. This may be of importance, as the formation of functionally synergistic ternary complexes involving virulence factors from other pathogens, such as Staphylococcus aureus extracellular fibrinogen-binding protein (Efb) in complex with complement C3 and fibrinogen, has been described [63]. Furthermore, linkage of an intrinsically disordered host-interaction domain with an ordered host-interaction domain, like that observed in BBK32, is seen in Efb and several Gram-positive MSCRAMMs [63-65]. Whether BBK32-like, covalently linked, intrinsically disordered/ordered structural domains with multifunctional host-interaction properties are common in Borrelia, or even in other human pathogens, has yet to be fully evaluated. Among the multi-pronged borrelial complement evasion arsenal [23], BBK32 is unique in its ability to specifically target the classical pathway of complement [22]. In fact, there are relatively few examples of pathogenic strategies that specifically target the classical pathway, and BBK32 is the only known inhibitor that directly blocks the initiator protease C1r [66]. By localizing the BBK32 interaction site to the catalytically active serine protease domain of C1r and solving the high-resolution structure of BBK32-C in an unbound form, this study has greatly improved our knowledge of the molecular basis for BBK32-mediated C1r inhibition. Continued work in this area is needed to further refine the BBK32/C1r molecular interface and to pinpoint key residues that drive complex formation, knowledge of which will greatly improve our ability to harness the therapeutic potential of the potent and highly specific anti-complement activities of BBK32 proteins for use in complement-related diseases.

Bacterial strains and plasmid constructs

B. burgdorferi B31 strains ML23 and B314, as well as B. afzelii strain PGau and B. garinii strain IP90, were grown in BSK-II media supplemented with 6% normal rabbit serum (Pel-Freez Biologicals, Rogers, AR) under microaerobic conditions at 32˚C in a 1% CO2 atmosphere, pH 7.6. Strain B314 is a serum-sensitive, non-infectious strain B31 derivative that lacks most linear plasmids [67,68]. All B. burgdorferi cells were enumerated by dark-field microscopy. Heterologous bbk32 genes from B. afzelii strain PGau and B. garinii strain IP90, designated bad16 and bgd19, respectively, were cloned into the shuttle vector pBBE22luc. To carry this out, oligonucleotide primers were designed based on the sixteenth open reading frame of lp17 from B. afzelii strain PKo (GenBank accession number CP002942.1; region 12854-13912 of lp17 from B. afzelii PKo) and the nineteenth open reading frame of lp17 from B. garinii strain PBr (GenBank accession number CP001309.1; region 12206-11160 of lp17 from PBr).
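Since the cloning above starts from published lp17 coordinates, retrieving those regions programmatically makes the primer-design step easy to reproduce. The sketch below is not part of the original study; it is a minimal example, assuming Biopython and NCBI Entrez access are available, of pulling the two cited GenBank regions (the email address and the fetch_region helper are placeholders):

```python
# Hypothetical helper (not from the original study): fetch the lp17 regions
# cited above from GenBank and report the length of each orthologue locus.
# NCBI asks for a contact email with every Entrez request.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; use your own address

# Accessions and 1-based coordinates taken from the text above; bgd19 lies on
# the minus strand (region given as 12206-11160), so we fetch 11160..12206.
REGIONS = {
    "bad16": ("CP002942.1", 12854, 13912),  # lp17, B. afzelii PKo
    "bgd19": ("CP001309.1", 11160, 12206),  # lp17, B. garinii PBr
}

def fetch_region(accession, start, end):
    """Download a GenBank record and return the subsequence start..end (1-based, inclusive)."""
    handle = Entrez.efetch(db="nucleotide", id=accession, rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    return record.seq[start - 1:end]  # convert to 0-based slicing

for gene, (acc, start, end) in REGIONS.items():
    seq = fetch_region(acc, start, end)
    print(f"{gene}: {acc} {start}-{end} -> {len(seq)} bp")
```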
The letter "D" or "d" used to denote the orthologous protein or gene, respectively, is due to their presence on the lp17 episome, which is referred to as the "D" plasmid in B. burgdorferi strain B31 [69]. Note that the corresponding proteins from both B. afzelii strains are 100% identical whereas the B. garinii proteins share 96% identity. Oligonucleotide primers were synthesized by Eurofins, Inc. (Lousville, KY) and their corresponding sequences are shown in Table 2. Oligonucleotide primers with sequences that overlapped with the borrelial gene and the vector pBBE22luc were used for PCR amplification using genomic DNA from B. afzelii strain PGau and B. garinii strain IP90 as template. The amplified fragments contained 395 and 491 bp of upstream sequences and 185 and 177 bp downstream from the translational start site and stop codon corresponding to the 1059 bp bad16 and 1065 bp bgd19 genes, respectively. The resulting PCR products were 1639 bp and 1733 bp for bad16 and bgd19, respectively. The plasmid pBBE22luc was digested with BamHI HF and SalI HF (New England Biolabs, Ipswich, MA) and assembled separately with each of the aforementioned PCR fragments using the manufacturer's instructions for NEBuilder (New England Biolabs). The resulting constructs were transformed into Escherichia coli DH5α cells (F -ϕ80lacZΔM15 Δ(lacZYA-argF)U169 recA1 endA1 hsdR17(r K -, m K + ) phoA supE44 λthi-1 gyrA96 relA1) and transformants selected on LB agar plates containing kanamycin at 50 μg/ml (Sigma-Aldrich; note that all chemicals and reagents mentioned herein were purchased from Sigma-Aldrich unless indicated otherwise). The resulting constructs, which contained bad16 and bgd19 expressed under the control of their native promoters, were confirmed by sequencing and designated pBAD16 and pBGD19, respectively. Transformation of strain B314 with pBAD16 and pBGD19 was done as previously described [70]. Transformants were selected for resistance to kanamycin and screened by PCR to confirm the presence of pBBE22luc vector. The cloning was confirmed with PCR and sequencing with primers pncAf and lucf ( Table 2). Surface plasmon resonance All SPR experiments were conducted on a Biacore T200 instrument at 25˚C and unless otherwise noted using a flowrate of 30 μl min -1 and a running buffer of HBS-T (20 mM HEPES (pH 7.3), 140 mM NaCl, 0.005% Tween-20). Proteins were immobilized using standard amine coupling chemistry on CMD200M biosensor chips (Xantec) as described previously [22]. The following immobilization densities were used for the corresponding injection series: C1 analyte over BBK32-C (680 RU), BGD16-C (850 RU), BAD16-C (720 RU); C1r analyte over BBK32-C (1800 RU), BGD19-C (4060 RU), BAD16-C (3200 RU); C1r CCP1-CCP2-SP-auto analyte over BBK32 (206-348) (780 RU, 1760 RU, 1600 RU). The C1 and C1r injection series were performed in HBS-T buffer supplemented with 5 mM CaCl 2 . C1 injections consisted of a twelve point, two-fold dilution series ranging from 0 to 150 nM C1 for 2 min association and 3 min dissociation. C1r was injected using a single-cycle kinetic format [73] using a five point, five-fold dilution series ranging from 1.6 to 1000 nM. Regeneration to stable baseline was achieved by injecting HBS-T supplemented with 10 mM EGTA for 1 min followed by three 30 s injections of a solution containing 0.1 M glycine (pH 2.2), 2.5M NaCl. C1r CCP1-CCP2-SP-auto , C1r CCP1 , C1r CCP2, and C1r CCP1-CCP2-SP injections were identical to that of C1r using a concentration range of 0.8 to 500 nM. 
Kinetic analysis was performed for each set of sensorgram injections with the T200 Evaluation Software (GE Healthcare) using a 1:1 (Langmuir) binding model, and a dissociation constant (KD) was calculated from the resulting fits. All injection series were performed in triplicate, and the mean KD ± standard deviation is reported.

Complement inhibition assays

The ability of recombinant BBK32 orthologue proteins to inhibit the activation of human complement was assessed using two assay formats. First, an ELISA-based assay was used that relies on activation of the classical pathway via surface-immobilized IgM (Athens Research and Technology) and subsequent detection of complement deposition products derived from normal human serum (Innovative Research), specifically C3b or MAC, through the use of monoclonal antibodies (both purchased from Santa Cruz Biotechnology) [74]. Each borrelial protein was evaluated using a duplicate 12-point, two-fold dilution series ranging from 2 to 2000 nM. A second assay was used that monitors the hemolytic activity of classical pathway complement activation in the presence of recombinant BBK32-C or the BBK32 orthologues BAD16-C, BGD19-C, or BXK32-C, using sensitized sheep erythrocytes (Complement Tech, Tyler, TX). In each case these assays were performed in a manner identical to those described previously in detail for the evaluation of BBK32-C [22].

Proteinase K accessibility assay

B. burgdorferi strains B314/pBAD16 and B314/pBGD19 were grown in complete BSK-II media, harvested by centrifugation at 5,800 x g, and washed twice with PBS. The cell pellet was re-suspended in 0.5 ml of either PBS alone or PBS with proteinase K (Invitrogen) at a final concentration of 200 μg ml-1. All samples were incubated at 20˚C for 40 min. Reactions were terminated by the addition of phenylmethylsulfonyl fluoride (PMSF) to a final concentration of 1 mM. Cells were again pelleted by centrifugation (9,000 x g for 10 min at 4˚C), washed twice with PBS containing 1 mM PMSF, re-suspended in Laemmli sample buffer, and resolved by SDS-PAGE. The separated proteins were transferred to a PVDF membrane (Thermo Fisher Scientific) and probed as described below with anti-BAD16, anti-P66, and anti-FlaB antibodies, respectively.

B. burgdorferi whole cell adherence assays

The B. burgdorferi adherence assay was done as previously described with slight modifications [22,75]. Briefly, poly-D-lysine pre-coated coverslips (Corning Biocoat) were coated with 1 μg of human C1r (Complement Tech) or BSA, respectively, and incubated at 4˚C overnight. The coverslips were washed thoroughly in PBS to remove excess unbound protein and then blocked with 3% BSA at room temperature for 1 h. B. burgdorferi strains B314/pBBE22luc (vector-only control), B314/pCD100 (expresses bbk32), B314/pBAD16 (expresses bad16), and B314/pBGD19 (expresses bgd19) were grown to mid-logarithmic phase at 32˚C, 1% CO2, pH 7.6. All B. burgdorferi strains were subsequently diluted to 10^7 organisms/ml in BSK-II medium without serum. The resulting B. burgdorferi samples, in 0.1 ml volumes, were applied to the coverslips and incubated for 2 h at 32˚C. Unbound bacteria were removed from the coverslips by gentle washing with PBS; this wash step was repeated 7 times. The coverslips were applied to a glass slide, and the binding of spirochetes was scored by dark-field microscopy.

Serum complement sensitivity assay

Complement sensitivity assays were performed as previously described [22].
Briefly, B. burgdorferi strains were grown to exponential phase at 32˚C, 1% CO2, pH 7.6, and 80 μl of a 10^6 cell suspension in BSK-II medium was added to 20 μl of normal human serum (NHS; Complement Technologies) to give a final volume of 100 μl (i.e., 20% NHS). The samples were placed in microtiter plates, sealed, and incubated at 32˚C for 2 h. Heat-inactivated normal human serum (hiNHS) was used as a control. After incubation, B. burgdorferi cells were scored by dark-field microscopy, and the percentage of viable B. burgdorferi cells was calculated from randomly chosen fields based on immobilization, loss of cell envelope integrity, and/or overt lysis.

Crystallization, structure determination, refinement, and analysis

BBK32 (206-348) was concentrated to 5.1 mg ml-1 in a buffer of 10 mM HEPES (pH 7.3), 50 mM NaCl. Crystals of BBK32 were obtained by vapor diffusion in sitting drops at 20˚C. Drops were set up by mixing 1 μl of protein solution with 1 μl of precipitant solution. Two crystallization conditions were identified. The first contained 0.1 M MES (pH 6.5), 0.2 M ammonium sulfate, and 30% PEG-MME 5,000. Small plate clusters reproducibly appeared in this condition between 2 and 5 d, with rounds of microseeding producing large plates that could be harvested and cryoprotected by supplementing the precipitant solution with 5% glycerol. Crystals in this condition grew in space group P2₁ with four BBK32 (206-348) molecules in the asymmetric unit, diffracting to 2.5 Å resolution. A second condition was identified containing 15% PEG 3,350 and 0.1 M succinic acid (pH 7.0). These crystals appeared only after prolonged incubation (i.e., > 6 months). Cryoprotection was achieved by supplementing the precipitant solution with 20% glycerol. These crystals grew in space group P6₅ with a single copy of BBK32 in the asymmetric unit, diffracting to 1.7 Å. Monochromatic X-ray diffraction data were collected at 0.973 Å wavelength using beamline 22-ID of the Advanced Photon Source (Argonne National Laboratory). Diffraction data were integrated, scaled, and reduced using the HKL2000 software suite [76]. Of all deposited structures in the RCSB database, none share > 25% sequence identity with BBK32-C. Exhaustive attempts at various molecular replacement strategies utilizing the P2₁ dataset failed. However, a single solution was found using the P6₅ dataset with the MRage program [77] implemented in the PHENIX crystallography software suite [78-80]. MRage was configured to use a homology search based on the top three hits for BBK32 obtained from the HHPRED server implemented via the MPI Bioinformatics Toolkit [81]. The top-scoring solution was a homology model of 51 residues (BBK32 residues 267-317) based on a partial structure of PDB ID 5J0K [82]. Despite a relatively low-scoring solution (LLG = 43.8, TFZ = 6.6), initial phases obtained from this search yielded an initial PHENIX.AUTOBUILD model that refined to 24%/27% (Rwork/Rfree). Subsequent manual building was performed using COOT [83], and iterative cycles of refinement using PHENIX.REFINE produced a final refined model of 20.5%/23.6% (Rwork/Rfree).

C1r autolytic digestion and BBK32-binding site analysis

A total of 400 μl of purified C1r (Complement Tech) at 1.0 mg ml-1 was diluted into an equal volume of 50 mM Tris (pH 8.0), 0.5 mM CaCl2 and allowed to incubate overnight at 37˚C.
Next, 150 μl of this reaction was mixed with either 100 μl of buffer or 100 μl of BBK32 (206-348) at 2-fold molar excess relative to full-length C1r. A third sample was prepared with BBK32 (206-348) alone. Each 250 μl sample was then injected onto a Superdex 200 Increase 10/300 GL small-scale size-exclusion column (GE Healthcare) previously equilibrated in 10 mM HEPES (pH 7.3), 140 mM NaCl at a flow rate of 0.5 ml min-1. Peaks were evaluated by SDS-PAGE under non-reducing conditions. The bands in 'Peak 3' (elution volume 14.5 to 15.5) were identified by mass spectrometry. Gel bands representing the autolytically formed 55 kDa form of C1r and BBK32 (206-348) were excised from the SDS-PAGE gel, reduced and alkylated, and digested with trypsin overnight by standard methods. Extracted peptides from each digest were subjected to LC-tandem MS analysis for verification and for determining the coverage of C1r in the 55 kDa gel band. The digests were resolved by reversed-phase nanoLC on a C18 column (50 μm x 12 cm, packed with Phenomenex Jupiter, 10 μm) using a 1% to 35% acetonitrile, 100 min gradient (buffer A: 0.1% formic acid in water), eluting into a Q Exactive Plus MS system. Data were acquired in data-dependent mode, with MS scans at 35k resolution and 16 dependent MS2 scans per cycle at 17.5k resolution. The HRMS data files were searched using Mascot version 2.6 against a custom database consisting of the full-length native human C1r sequence (UniProt accession P00736) and a sequence for BBK32 consistent with the predicted amino acid sequence of the cloned construct. Peptides considered were restricted to semi-trypsin specificity, with tolerances of 10 ppm (MS) and 0.01 Da (MS2 fragment) allowed, with fixed carbamidomethyl (Cys) and variable deamidation (Asn, Gln) and oxidation (Met) modifications included. Peptides identified at the P > 0.05 threshold were manually inspected to verify the quality of the apparent sequence coverage.

Immunoblotting to detect the endoflagellar antigen FlaB was done for all samples using the same PVDF membrane used for C1 or C1r Far Western detection. A monoclonal antibody to B. burgdorferi strain B31 FlaB (Affinity BioReagents) was diluted 1:4,000 and incubated with the blot for 1 h. After washing in PBS, 0.2% Tween-20, the blot was next incubated with a 1:10,000 dilution of goat anti-mouse IgG with IRDye 680RD (Li-Cor Biosciences) as the secondary antibody. The membrane was washed extensively in PBS, 0.2% Tween-20 and scanned using the Li-Cor Odyssey Fc Imaging System. The signals obtained from the Li-Cor unit were analyzed using the Image Studio Lite version 5.2.5 software. Bands were detected with manual adjustment of their shape relative to background. All BBK32 orthologue signals obtained were first normalized to the FlaB signal from the same sample and then to the B314/pCD100 signal on the same blot, to allow quantitative comparison across distinct Far Western blots, as indicated here for BAD16:

normalized BAD16 signal = (BAD16 binding to C1 / FlaB signal) / (BBK32 binding to C1 / FlaB signal on the same membrane)

The resulting values were then used in the statistical analyses described below. Conventional immunoblots of the orthologues produced in the B314 background strain were also performed with rat polyclonal antibodies to BBK32-C, BAD16-C, and BGD19-C (each diluted 1:1000; kindly provided by Richard Marconi). To detect membrane-bound immune complexes, goat anti-rat IgG conjugated to IRDye 800CW (Li-Cor Biosciences) was used as a secondary antibody at a 1:10,000 dilution.
Detection of P66 was accomplished using rabbit anti-P66 serum (generously provided by Sven Bergström) diluted 1:1000, followed by detection of membrane-bound immune complexes using a 1:10,000 dilution of goat anti-rabbit IgG conjugated to IRDye 800CW (Li-Cor Biosciences). The membranes were scanned using the Li-Cor Odyssey Fc Imaging System.

Transcript quantification of bbk32 and orthologues

Three independent cultures each of B. burgdorferi strains B314/pCD100 (expresses bbk32), B314/pBGD19 (expresses bgd19), and B314/pBAD16 (expresses bad16) were grown to exponential phase (i.e., 5 x 10^7 cells per ml), and total RNA was isolated from 5 x 10^8 cells using the Direct-zol RNA MiniPrep kit (Zymo Research, USA). The RNA samples were treated with the in-kit DNase I and the TURBO DNA-free kit (Invitrogen, USA) to eliminate contaminating DNA. RNA integrity was examined by gel electrophoresis. Oligonucleotide primers for amplifying flaB and bbk32 via quantitative RT-PCR (qRT-PCR) were adopted from prior studies [85]; primers for bad16 and bgd19 were designed in this study and are shown in Table 2. Each primer pair was tested to confirm amplification of a single product of the expected size using genomic DNA from the appropriate B. burgdorferi sensu lato strains as template. Reverse transcription reactions for the three biological replicates of each strain were carried out with SuperScript II Reverse Transcriptase (Invitrogen, Carlsbad, CA). A control reaction lacking reverse transcriptase was performed for each primer set to confirm that DNA was not present. Subsequently, the products from the reverse transcription reactions were subjected to quantitative real-time PCR using an Applied Biosystems StepOnePlus Real-Time PCR system. PowerUp SYBR Green Master Mix (Thermo Fisher Scientific) was used to perform quantitative PCR in triplicate (technical replicates). The constitutively expressed flaB gene of B. burgdorferi was used for normalization as previously described [85,86]. The expression levels of the bbk32 orthologues were first normalized to flaB in the same sample, and the normalized values of bad16 and bgd19 were then compared to the level of bbk32 using the 2^-ΔΔCt method. The final fold differences were used in the statistical analyses.

Statistics

Statistical analysis was performed with GraphPad Prism version 7. For calculation of IC50 values in ELISA and hemolytic complement assays using recombinant proteins, non-linear regression was performed using a variable four-parameter fit in which the top and bottom values were constrained to 100 and 0, respectively. Two-way ANOVA was used for the C1r binding and serum sensitivity assays, and one-way ANOVA was used for the Far Western and qRT-PCR analyses.

Supporting information

S1 Fig. BBK32-C mediated inhibition of the classical pathway of complement. A) A schematic depiction of classical pathway complement activation is shown. C1q, the pattern recognition subunit of the C1 complex, binds to the targeted surface. C1q binding autoactivates the initiator serine protease, C1r, which then proteolytically cleaves C1s. Activated C1s cleaves complement proteins C2 and C4, leading to the surface formation of C4b2a, the CP (classical pathway)/LP (lectin pathway) C3 convertase. C3 convertases then cleave C3 into C3a and C3b, leading to CP/LP C5 convertase formation (C4b2a3b). Cleavage of C5 by C5 convertases releases the anaphylatoxin C5a and leads to the formation of the membrane attack complex (C5b-9) on the surface of the target cell.
The membrane attack complex is a lytic pore structure that can directly kill the targeted cell(s). For Borrelia species, BBK32, or active orthologues of BBK32, can block activation of C1r and inhibit the classical complement cascade. B) A model for BBK32-mediated inhibition of the classical pathway. The C1 complex consists of C1q, which is composed of six collagen-like structures connected to six globular head domains; C1q binds a C1r2C1s2 heterotetramer to form the C1 complex. The depiction of the arrangement of subunits within C1 is based on the work of Ugurlar and colleagues [87]. BBK32-C binds the exposed serine protease (SP) domain of C1r and inhibits the autoproteolytic activation of C1r as well as the C1r-mediated cleavage of proC1s. Inhibition at this step halts the classical pathway at the initial proteolytic step and prevents formation of the downstream activation products of the cascade, including the membrane attack complex. (TIF)

S2 Fig. Construction of bbk32 orthologues in pBBE22luc and expression of these genes in B. burgdorferi strain B314. A) Schematic showing how the bbk32 orthologues bad16 and bgd19 from B. afzelii and B. garinii, respectively, were constructed using the pBBE22luc vector backbone. The resulting constructs were transformed into B. burgdorferi strain B314. B) PCR confirmation of bgd19 from B314/pBGD19, bbk32 from B314/pCD100, and bad16 from B314/pBAD16. All constructs contained bad16, bbk32, or bgd19 expressed from their native promoters. The Vector lane refers to the use of pBBE22luc as template for PCR with the oligonucleotide primers used to screen inserts. Values listed to the left indicate the size of markers in kilobases (kb). C) Quantitative RT-PCR shows that the bbk32 orthologues (e.g., bad16 and bgd19) expressed in strain B314 from their native promoters produce transcripts at levels equivalent to or greater than B. burgdorferi sensu stricto bbk32. Expression of the bbk32 orthologues was compared relative to the constitutively expressed flaB gene (internal control). The qRT-PCR was done in triplicate, and the mean value obtained for bbk32 was used as the comparator for the other orthologous genes (i.e., bad16 and bgd19). (TIF)

S3 Fig. Cross-reactivity of BBK32 orthologues and evaluation of their surface exposure in B. burgdorferi strain B314. A) Antisera to the BBK32 orthologues are cross-reactive against all sensu lato isolates tested. Antisera against BGD19 from B. garinii, BBK32 from B. burgdorferi, and BAD16 from B. afzelii were tested in immunoblots of protein lysates from B. burgdorferi strain B314 containing the vector pBBE22luc (B314/luc), as well as B314 strains expressing B. garinii bgd19 (B314/pBGD19), B. burgdorferi bbk32 (B314/pCD100), and B. afzelii bad16 (B314/pBAD16). Individual membranes were then probed with rat polyclonal antisera against BGD19-C, BBK32-C, and BAD16-C as specified on the right. In all instances, the reagent used recognized its homologous target protein best but also showed significant reactivity to the other heterologous proteins. Markers in kDa are indicated on the left. B) The BBK32 orthologues encoded by B. afzelii and B. garinii, designated BAD16 and BGD19, respectively, are surface exposed in the surrogate B. burgdorferi B314 strain. B314/pBAD16 and B314/pBGD19, encoding BAD16 and BGD19, respectively, were grown, washed, and then resuspended either with proteinase K (ProtK; denoted with a "+") or with buffer alone (denoted with a "-").
Following processing, the resulting samples were subjected to SDS-PAGE and immunoblotted with antiserum directed against BAD16-C, the outer membrane P66 protein, or the subsurface FlaB protein. (TIF)

S4 Fig. SWISS-MODEL was used to produce homology models of A) BAD16-C and B) BGD19-C based on the crystal structure of BBK32-C (PDB: 6N1L). Residues that are non-identical between BAD16-C and BBK32-C are shown in red on the protein surface (panel A), while residues that differ between BGD19-C and BBK32-C are shown in yellow (panel B). C) The homology models of BAD16-C and BGD19-C are structurally aligned. The coloring scheme shown in panels A/B is retained, except that overlapping residues are now colored orange. Surfaces that remain yellow represent residues that are uniquely different in BGD19-C relative to BAD16-C. D) Three of these residues, at positions 308, 319, and 324 (BBK32 numbering), were selected for the BXK32-C chimera protein used in this study. A SWISS-MODEL homology model of the BXK32-C chimeric protein, also based on the BBK32-C crystal structure, predicts that these three residues remain solvent exposed. Global Model Quality Estimation (GMQE) is used by SWISS-MODEL to provide an estimate of model accuracy; values range between 0 and 1, with higher numbers indicating higher model reliability, and are as follows: BAD16-C (GMQE = 0.81), BGD19-C (GMQE = 0.93), BXK32-C (GMQE = 0.97). (TIF)

S1 Table. Surface plasmon resonance binding and fitting parameters. The calculated equilibrium dissociation constants, rate constants, and associated fitting statistics are provided for the surface plasmon resonance binding experiments. (DOCX)

S2 Table. Complement assay IC50 data and non-linear regression fitting statistics. The calculated half-maximal inhibitory concentration (IC50) values and associated fitting statistics are provided for each experimental set of complement functional assays. (DOCX)
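As a worked illustration of the 2^-ΔΔCt relative quantification described in the transcript quantification section above, the following sketch computes a fold change for a bbk32 orthologue normalized to flaB and compared to bbk32 in B314/pCD100; all Ct values are invented for demonstration:

```python
# Worked example of the 2^-ΔΔCt method used above to compare bad16/bgd19
# transcript levels to bbk32, each normalized to flaB. All Ct values here
# are invented for illustration only.
def fold_change(ct_target, ct_flab, ct_comparator, ct_comparator_flab):
    """Relative expression via 2^-ΔΔCt: target (normalized to flaB) versus
    the bbk32 comparator (also normalized to flaB)."""
    delta_ct = ct_target - ct_flab                       # ΔCt for the target gene
    delta_ct_ref = ct_comparator - ct_comparator_flab    # ΔCt for bbk32
    delta_delta_ct = delta_ct - delta_ct_ref             # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# bbk32 in B314/pCD100 is the comparator (hypothetical Ct values).
print(fold_change(ct_target=21.2, ct_flab=18.1,
                  ct_comparator=22.0, ct_comparator_flab=18.0))  # ~1.87-fold bbk32
```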
Effects of healthcare financing policy tools on health system efficiency: Evidence from sub-Saharan Africa

Background: Evidence shows high levels of catastrophic and impoverishing healthcare expenditure among households in sub-Saharan Africa (SSA). The way healthcare is financed has an impact on how well a health system performs its functions and achieves its objectives. This study aims to examine the effect of healthcare financing policy tools on health system efficiency.

Method: The study classifies 46 sub-Saharan African (SSA) countries into four groups of health systems sharing similar healthcare financing strategies. Two-stage and one-stage stochastic frontier analysis (SFA) and Tobit regression techniques were employed to assess the impact of healthcare financing policy variables on health system efficiency. Data from the selected 46 SSA countries from 2000 to 2019 were investigated.

Results: The results revealed that prepayment healthcare financing arrangements, social health insurance, and mixed- and external-financing healthcare systems significantly enhance health system efficiency. Reliance on a single source for financing healthcare, particularly private out-of-pocket payment, reduces health system efficiency.

Conclusion: For policy-making purposes, health care systems financed through a mix of financing arrangements comprising social health insurance, private, and public funding improve health system efficiency in delivering better health outcomes, as opposed to systems depending on one major source of financing, particularly private out-of-pocket payments.

Introduction

Many countries around the sub-Saharan African (SSA) region have undertaken or are considering a fundamental restructuring of their healthcare financing systems to achieve better health outcomes [1,2]. The current wave of enthusiasm to secure universal health coverage (UHC), inspired by the United Nations' Sustainable Development Goal indicator 3.8, provides a unique opportunity for policymakers to make use of evidence-based academic research in designing the most efficient health systems. The World Health Organization's (WHO) member countries approved Resolution WHA58.33 in 2005, which called for the development of effective health financing systems to hasten the pace toward the UHC goal [3]. Additionally, in 2006, the 56th WHO Regional Committee for Africa passed Resolution AFR/RC56/10 urging Member States to implement or broaden prepayment schemes. These healthcare financing policy arrangements seek to raise enough money to guarantee access to necessary healthcare services without the risk of financial catastrophe. Many countries in the region have implemented or are in the process of implementing sustainable health financing strategies, such as national or social health insurance, community-based health insurance, tax-based financing, and private voluntary and micro health insurance schemes [1,4-6]. Still other countries are grappling with healthcare financing systems dominated by regressive financing practices, such as direct user-fee charges, resulting in substantial out-of-pocket expenses [7].
Health care spending has increased in all SSA countries over the last two decades, from an average of $47 per capita in 2000 to $125 in 2019. In 2017, health spending absorbed an average of about 6% of GDP in SSA, ranging from a minimum of 2.5% in DR Congo to a maximum of 10.9% in Malawi [8]. The annual incidence of catastrophic health expenditure in SSA is estimated at 16.5% for a threshold of 10% of total household expenditure. After initially declining in the 2000s, the incidence of catastrophic health expenditure in SSA rose steadily between 2010 and 2020 [9]. Prepayment arrangements and other healthcare financing policies are designed to make health care affordable to all [8]. The impact of these policies on the efficiency of health systems in delivering better health outcomes in SSA, however, has not received much attention in the empirical literature. The few existing health system efficiency studies in SSA focused on the impact of socio-economic and demographic factors on health system efficiency [10-12]. There is no empirical evidence on the effects of healthcare financing policy tools, such as social health insurance, on health system efficiency in SSA. This study seeks to fill this gap in the literature and examine the effect of healthcare financing policy tools on health system efficiency.

The remainder of this study is structured as follows: Section 2 describes the stochastic frontier methodology and the data used in this study. In Section 3, we discuss the results obtained from the estimations. Section 4 provides concluding observations and policy implications.

Stochastic frontier analysis

A stochastic frontier analysis (SFA) methodological framework is adopted in this study. Stochastic frontier analysis is an econometric technique commonly used in the literature to estimate the potential maximum output level (the frontier) given the resources (inputs) used. The deviation of actual output from the estimated potential maximum, after accounting for random variation, is computed as the level of inefficiency in production. In health production, efficiency is defined as the ratio of the actual health outcome achieved to the potential maximum that could be achieved given the resources used [13].

Stochastic frontier analysis was developed simultaneously by Aigner et al. [14] and Meeusen and van den Broeck [15] and has since been applied in a wide range of fields to evaluate the efficiency of decision-making units (DMUs) [16]. The SFA framework is used in most studies of health care efficiency; it models health outcomes as the output of a health production function whose inputs include healthcare spending as well as the demographic and economic factors that affect population health.
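To make the SFA machinery concrete, the sketch below estimates a simplified, pooled Cobb-Douglas stochastic frontier with normal noise and half-normal inefficiency by maximum likelihood on simulated data. This is a stylized illustration under those distributional assumptions, not the panel specifications estimated later in the paper:

```python
# Stylized pooled normal/half-normal stochastic frontier estimated by MLE on
# simulated data (the paper's panel specifications are richer than this).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
ln_x = rng.normal(4.0, 1.0, n)                   # e.g. log health spending per capita
v = rng.normal(0.0, 0.10, n)                     # symmetric noise
u = np.abs(rng.normal(0.0, 0.20, n))             # half-normal inefficiency
ln_y = 1.0 + 0.3 * ln_x + v - u                  # log outcome = frontier + v - u
X = np.column_stack([np.ones(n), ln_x])

def neg_loglik(theta):
    """Negative log-likelihood of the normal/half-normal frontier (Aigner et al.)."""
    beta = theta[:-2]
    su, sv = np.exp(theta[-2]), np.exp(theta[-1])  # exp() keeps the sigmas positive
    sigma, lam = np.hypot(su, sv), su / sv
    eps = ln_y - X @ beta
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, np.log(0.2), np.log(0.1)]),
               method="BFGS")
print("beta:", res.x[:-2], "sigma_u:", np.exp(res.x[-2]), "sigma_v:", np.exp(res.x[-1]))
```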
The main alternative to the SFA method is the nonparametric data envelopment analysis (DEA) framework. Because DEA is a deterministic production frontier approach, all observed data points must fall below the frontier, and any departure from the frontier is attributed to inefficiency. Thus, the DEA method fails to capture random noise such as measurement error, unobservable individual characteristics, or macroeconomic shocks that affect each DMU differently; all deviations from the estimated frontier are interpreted as inefficiency. This makes the DEA approach to efficiency estimation less attractive, albeit recent developments in DEA minimize this problem (see [17]). The decision to use the SFA approach in the current study is motivated by its ability to distinguish stochastic random noise from the inefficiency component of deviations from the production frontier.

Greene [18] introduced the true fixed effects (TFE) model to relax the restriction of a common constant for all DMUs by incorporating firm-specific dummy variables to distinguish between unobserved heterogeneity and inefficiency. The major challenge with the TFE model is that the inclusion of many country-specific dummies to capture unobserved heterogeneity has the potential to cause over-specification of the model. To resolve this problem, Greene [18] further proposed the true random effects (TRE) model, which introduces time-invariant, country-specific heterogeneity through the use of simulated maximum likelihood estimation techniques. This study employs four time-varying stochastic frontier models for panel data - Kumbhakar [19], Battese and Coelli [20], and Greene's [18] true fixed effects (TFE) and true random effects (TRE) models - to estimate the health system efficiency of the 46 selected SSA countries and to conduct a comparative analysis of the efficiency estimates.

Stochastic frontier analysis requires the specification of a production model to capture the relationship between inputs and outputs. Production models commonly used in the literature include the Cobb-Douglas, constant elasticity of substitution (CES), translog, generalized Leontief, and normalized quadratic functions and their variants. The Cobb-Douglas specification and the more flexible translog function are the two most commonly utilized functional forms in the empirical production literature, including frontier studies. The translog framework has the advantage of being flexible enough to accommodate a number of production functional forms without the need for their a priori specification [16]. Its major shortcoming is its demand for a large number of degrees of freedom, which is where the more restricted Cobb-Douglas framework comes in handy, since it is more parsimonious in its demands on the data. In this paper, the log-linear Cobb-Douglas model is adopted to specify the production relationship between health outcomes and health-system inputs. The choice of the Cobb-Douglas functional form is motivated by its parsimony and the fact that it has been generally accepted as sufficient for stochastic production functions [11,16,21].

At the macro level, the stochastic production frontier model can be expressed as in Equation (1):
ln y_it = f(ln x_it, c_it; β, φ) + v_it - u_it   (1)

where y_it is a vector of health outcomes; x_it is the vector of health resources; c_it is the vector of non-health-system factors that influence the health of the population; v_it is the random symmetric component of the deviation, which accounts for idiosyncratic statistical noise; u_it denotes the nonsymmetric deviation component, which represents inefficiency [22]; and λ (lambda) is an asymmetry term that indicates the relative contribution of u and v to the composite error term (ε = v - u). The higher the λ, the higher the contribution of σ_u to the error term relative to σ_v, an indication that the use of the stochastic production frontier model is justified. We estimated the efficiency scores using the Jondrow et al. [23] (JLMS) estimator, as defined by Greene [18]. Jondrow et al. [23] proposed the first method of estimating the inefficiency (u_i), or efficiency (exp(-u_i)), of a DMU based on the composed error term of the model; the conditional mean of u_i given ε_i provides a point estimate of u_i. Greene [18] extended the JLMS model by using a group-mean residual as a modification in a panel data framework.

Tobit model

To assess the effects of health-system financing policy factors on health system efficiency, a two-stage technique is used in this study. Two-stage models are appropriate since healthcare financing policy factors influence each health system's outcomes [24-26]. In the first stage, as explained in the section above, SFA is used to estimate the technical efficiency scores of the health systems. In the second stage, a regression model is specified to assess the determinants of technical efficiency: the estimated technical efficiency scores are regressed on observable exogenous variables [27], which in this study are the healthcare financing policy variables. A censored regression technique, the Tobit model, is a suitable tool here since the efficiency scores are censored between 0 and 1 [28]. The Tobit regression model is specified as in Equation (2):

θ_it = δ'z_it + ε_it   (2)

where θ_it denotes the technical efficiency score for country i observed in time period t, z_it is the vector of healthcare financing policy variables, δ represents the vector of parameters to be estimated, and ε_it is the error term. The two-stage approach requires that the explanatory variables in Equation (2) and the regressors (i.e., inputs) in Equation (1) be uncorrelated [19,26,27]. A violation of this assumption implies that the estimates of β, σ_u, and σ_v are biased due to the omission of z_it from the production function. In this paper, we adopted the two-stage approach because of the low correlation between the input variables in Equation (1) and the healthcare financing policy variables in Equation (2) (see Appendix A1 for the correlation matrix involving all the variables used in this study).

Other efficiency studies adopt a one-stage approach to examine the impact of environmental variables on the efficiency of decision-making units [29], so that estimation of the efficiency scores, as in Equation (1), and of the parameters of the exogenous variables, as in Equation (2), is carried out simultaneously by maximum likelihood estimation (MLE). This is achieved when a truncated-normal distribution of u_it is specified in Equation (1) [20].
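A compact sketch of the two-stage workflow just described may help fix ideas: stage one converts composed SFA residuals into JLMS conditional-mean inefficiency estimates (and hence technical efficiency scores), and stage two regresses those scores on policy covariates with a two-limit Tobit. The normal/half-normal formulas and the simulated data below are illustrative simplifications, not the exact panel estimators used in the paper:

```python
# Sketch of the two-stage approach described above (stylized, pooled versions):
# stage 1 maps composed SFA residuals to JLMS efficiency scores; stage 2 fits a
# two-limit Tobit of those scores on policy covariates. Simulated data only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def jlms_efficiency(eps, sigma_u, sigma_v):
    """Jondrow et al. (1982) conditional mean E[u | eps] for eps = v - u under a
    normal/half-normal frontier; returns technical efficiency exp(-E[u | eps])."""
    sigma = np.hypot(sigma_u, sigma_v)
    lam = sigma_u / sigma_v
    sigma_star = sigma_u * sigma_v / sigma
    z = eps * lam / sigma
    e_u = sigma_star * (norm.pdf(z) / (1.0 - norm.cdf(z)) - z)
    return np.exp(-e_u)

def tobit_negll(theta, y, X, lower=0.0, upper=1.0):
    """Two-limit Tobit negative log-likelihood (scores censored between 0 and 1)."""
    delta, s = theta[:-1], np.exp(theta[-1])
    mu = X @ delta
    ll = np.where(y <= lower, norm.logcdf((lower - mu) / s),
         np.where(y >= upper, norm.logsf((upper - mu) / s),
                  norm.logpdf((y - mu) / s) - np.log(s)))
    return -ll.sum()

rng = np.random.default_rng(1)
eps = rng.normal(0.0, 0.1, 400) - np.abs(rng.normal(0.0, 0.2, 400))   # composed residuals
te = jlms_efficiency(eps, sigma_u=0.2, sigma_v=0.1)                   # stage 1 scores
Z = np.column_stack([np.ones_like(te), rng.integers(0, 2, te.size)])  # constant + SHI dummy
res = minimize(tobit_negll, x0=np.array([0.5, 0.0, np.log(0.1)]),
               args=(te, Z), method="BFGS")
print("Tobit coefficients (constant, SHI):", res.x[:-1])
```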
Empirical strategy

From Equation (1), a stochastic health production function is specified as in Equation (3) for the empirical analysis, where ln is the natural logarithm, i and t index countries and time periods, respectively, and β and φ denote the vectors of coefficients of the health-system inputs and control variables, respectively. The infant survival rate per thousand live births (ISR) represents the health outcome of the health system. Health expenditure per capita (HEPC) and the square of health expenditure per capita (HEPCSQ) are used as the healthcare system inputs. Several factors external to the health system that nevertheless affect the health of the population are controlled for in the model: employment (EMP), education (EDU), and age structure (AGE). To control for any potential heteroscedasticity or autocorrelation present in the data, a cluster-robust standard error estimation approach was used.

ln ISR_it = β0 + β1 ln HEPC_it + β2 (ln HEPC_it)^2 + φ1 EMP_it + φ2 EDU_it + φ3 AGE_it + v_it - u_it   (3)

From Equation (2), the empirical Tobit model is specified in Equation (4) to assess the impact of healthcare financing policy variables on health system efficiency. The healthcare financing policy variables used as explanatory variables in the efficiency function are: compulsory health financing arrangement (CFA) funds as a proportion of total health expenditure; domestic general government health expenditure per capita (GGHE); out-of-pocket health expenditure per capita (OOP); a binary variable indicating the existence of social health insurance (SHI); and a categorical variable describing the predominant source of funding for healthcare (FTYP) - private, public, external, or mixed (PRI, PUB, EXT, and MIX, respectively).

θ_it = δ0 + δ1 CFA_it + δ2 GGHE_it + δ3 OOP_it + δ4 SHI_it + δ5 FTYP_it + ε_it   (4)

In Equation (4), three financing policy variables (CFA, GGHE, and OOP) are highly correlated with each other (see Appendix A1). To prevent the potential problem of spurious regression results, we estimated three models to which each of these three variables was added in turn.

Sensitivity analyses are conducted to assess the robustness of the empirical results via one-stage SFA, in which the parameters of both the production function and the inefficiency function are estimated simultaneously, employing the Battese and Coelli [20] SFA model. In this setting, since the parameters (δ) show how the healthcare financing policy variables (z) influence the inefficiency term (u_it), a positive coefficient implies that the variable increases inefficiency, while a negative coefficient shows an inverse relationship between the variable and the inefficiency term.

Definition of variables

The infant survival rate (ISR), a measure of population health, serves as the dependent variable in the stochastic health production model. The infant mortality rate (IMR) measures the number of deaths among children aged one year and below per 1000 live births in a country in a year. Infant mortality is considered a sufficient summary measure of the overall health of the general population [30,31]. The IMR is viewed as an objective health outcome and has been widely used in health production and healthcare system efficiency studies [32-34]. Since the model adopted in this study assumes that outcome variables are isotonic (i.e., an increased health outcome increases efficiency), the study follows Afonso and St. Aubyn [35],
Hadad et al. [36], and Novignon and Lawanson [11] in transforming the IMR into an infant survival rate, which can be interpreted as the proportion of children aged one year or below who survive, as compared to those who die [34]. A higher value of ISR indicates better health status.

Following previous studies [11,33,34,37-39], health expenditure per capita (HEPC) is used as the input to the health production function. HEPC is a proxy for the quantity of healthcare services consumed per person [40]. Since variations in expenditure across countries better reflect differences in the quantity and quality of healthcare services, Andersen [39] argues that HEPC is more appropriate than stocks of providers such as the numbers of physicians, nurses, and beds. HEPC captures the final consumption of health care goods and services, including personal health care (curative, rehabilitative, long-term, and ancillary services, and medical goods) as well as collective services (public health services and health administration). It is relatively comparable between countries because it is measured in international dollars at purchasing power parity (PPP) rates. Countries with higher levels of healthcare service use are expected to have better health outcomes; hence, the expected sign of HEPC is positive. The squared term of HEPC captures the non-linear relationship between HEPC (the health input) and the infant survival rate (the health outcome); it is used to determine whether the input variable has a diminishing marginal effect on the outcome variable.

Evidence demonstrates that economic and social factors outside the control of the health system can affect a country's capacity to maximize the impact of a given level of health spending on its health system's outcomes [25,33,34,36]. To isolate the effect of health spending (HEPC), three socio-economic factors are controlled for in the models: employment (an economic factor), education (a social factor), and population age structure (a demographic factor).

In most studies, income (proxied by GDP per capita) and consumption are used to measure the impact of economic well-being on population health [40-42]. In our sample, however, HEPC and these variables are strongly associated (pairwise correlation coefficients ranging between 0.820 and 0.903, p < 0.01), suggesting that they would add little information to the models. For these reasons, they have not been included in the analysis.

Employment (EMP) is therefore used as a proxy for aggregate economic well-being [11,43]. Employment is measured as the number of persons aged 15 years and older who are engaged in employment as a share of the total population. Employment status predicts the overall economic well-being of a person [41]. While there is strong evidence of an association between unemployment and poorer health outcomes [44-46], the evidence on the relationship between employment status and health status has been mixed, with some studies showing a positive effect of employment on health [47] and others showing no relationship or a negative effect [48]. Thus, the a priori sign of employment (EMP) is either positive or negative.
The term "level of education" (EDU) refers to the sum of the predicted years of schooling for children and the mean years of schooling for adults, both given as an index and scaled with the corresponding maxima.With higher wages and more stable work as a result of education, families are better able to afford quality healthcare [ [49]].Additionally, studies show that adults who are less educated are more likely to engage in unhealthy practices such as smoking, eating unwholesome diet, and failing to exercise [12,50,51] found a strong association between education and health system efficiency in producing better health outcomes.Therefore, education (EDU) is expected to have a positive sign. Population age structure (AGE) is the proportion of the population aged 65 and above.The a priori expectation sign of AGE is largely dependent on the outcome variable.For instance, since health deteriorates with age [ [38]], it postulates a negative relationship between AGE and health outcomes such as healthy life expectancy.However, the relationship between AGE and infant survival rate is not direct.On the one hand, countries with a higher proportion of older adults may have better healthcare systems and social support structures, which could benefit infants and improve their chances of survival.On the other hand, a high proportion of older adults in a population may also indicate a demographic shift towards an aging population, which can strain health care and social support systems and potentially negatively impact infant survival rates. Therefore, the expected sign of AGE can be negative or positive.Health financing policy variables and characteristics were identified through extensive literature review [21,33,34,[37][38][39] and through a review of the World Health Organization's Global Health Expenditure Database (WHO-GHED).We used compulsory financing arrangement (CFA), domestic general government health expenditure per capita (GGHE), and out-of-pocket health K. Arhin et al. expenditure (OOP).These three time-variant variables were selected based on health financing policy reforms that have taken place in some countries across the SSA region over the last twenty years (from 2000 to 2020) to assess their effects on the efficiency of health systems.GGHE and OOP reflect the prioritization of health in government spending [ [52]] while CFA is used as a proxy for prepayment financing reforms.CFA is the sum of three main sources of healthcare finance1: (i) government health prepayment financing schemes; (ii) compulsory contributory health insurance schemes (i.e.social health insurance and compulsory private health insurance schemes); and (iii) compulsory medical saving accounts (SHA, 2011).Health financing systems in the SSA countries are complex institutional constructs that differ between countries.However, for the purpose of classifications it is necessary to reduce the complexity by focusing on the core financing part of each healthcare financing system.To this end, all the studied countries were classified into four types of health systems: public, private, external, and mixed based on which source of funding predominates healthcare financing (see Table 1 for details).According to Kutzin [ [52]], health systems are classified by their predominant source of funding.Böhm et al. [ [53]] classified 29 OECD countries into five healthcare systems based on which type of actors (state, private, etc.) 
dominate each core dimension (financing, regulation, and service provision) of the healthcare system. Similarly, Joumard et al. [54] classified OECD countries into six health system groups using 20 policy and institutional indicators.

Additionally, the health systems were characterized according to whether social health insurance (SHI) was one of the healthcare financing mechanisms in the country. Social health insurance is an organizational mechanism for financing health care services based on risk pooling. SHI pools both the health risks of the insured, on the one hand, and the contributions of individuals, enterprises, and the government, on the other [55]. A statutory or national health insurance scheme is SHI mandated by the government.

Data

The study sourced data from the World Health Organization's Global Health Expenditure Database (WHO-GHED) and the World Bank's World Development Indicators (WB-WDI) for 46 SSA countries between 2000 and 2019, for a total of 910 observations. The period and countries sampled for the analysis were based on data availability. The definitions of the variables used in the analysis and the sources of data are presented in Table 2.

Descriptive statistics

The average health expenditure per capita in SSA between 2000 and 2019 was $216.14 across the 46 selected countries. The cross-country variation in health expenditure per capita is quite dramatic, ranging from a minimum of $6.90 (DR Congo in 2000) to a maximum of $1476 (Mauritius in 2019), with a standard deviation of $261. On average, health expenditure per capita more than doubled in SSA, increasing from approximately $134 in 2000 to $285 in 2019, an increase of over 110% over the last two decades (see Appendix A2).

Education, which is measured on a scale of 0-1, averages 0.42 in SSA, while the average proportion of the population in employment is approximately 36%, indicating a high dependency ratio across the SSA countries. The age structure of the population shows that an average of just 3.3% of the total population is 65 years or older, an indication of a very youthful population in SSA. In terms of the broader healthcare financing typologies, public health systems spent $410 per person per year, which is about 4.9 times the spending of externally-funded health systems and 2.6 times that of private health systems (see Appendix A3). While 63.3% of healthcare spending comes from prepayment arrangements in public health systems, just 28.9% of the healthcare expenditure of private health systems is financed through prepayment arrangements. It is also worth noting that out-of-pocket payments constituted as much as 59.15% of the healthcare expenditure of private health systems.

Estimated stochastic frontiers

We selected four time-varying specifications - Kumbhakar [19], Battese and Coelli [20], and the true fixed effects and true random effects models of Greene [18] - to estimate the stochastic production frontiers. The parameters were estimated by the maximum likelihood estimation technique. Table 4 reports the results.
The first segment of Table 4 presents the frontier functions for the four models, while the second segment presents the variance decomposition (σ_u, σ_v, λ, θ). The signs of all the estimated coefficients in the production function are consistent with theory across the four models. The statistically significant positive effect of per capita health spending indicates that it is an important determinant of a country's health production outcome. The coefficient of the quadratic term of per capita health expenditure is also statistically significant, indicating that the elasticity of ISR with respect to per capita health expenditure diminishes as the level of per capita health expenditure rises. These results are similar to the findings of many other studies [21,37,56-58], but contradict the findings of others [59-61].

The significantly positive coefficients of employment and education, as indicators of overall economic well-being and social conditions, respectively, are as expected and in line with Self and Grabowski [59] and Ambapour [12]. The negative impact of population age structure (in the TRE model) is consistent with previous studies (see Refs. [38,39,62]). This result supports the proposition that older people are heavier users of healthcare services, which strains healthcare systems and negatively impacts infant survival rates.

The consistency of the results, in terms of the signs and values of the estimated parameters across the four models, points to the reliability of the technical efficiency scores generated from each of the four specifications. To deal with any potential heteroscedasticity or autocorrelation present in the data, a cluster-robust standard error estimation approach was used. It is worth noting that the estimate of λ was statistically significant for all the models, justifying the use of the SFA methodology in this study and confirming the existence of technical inefficiency in the dataset. The value of λ is smallest for the TRE model and highest for the Battese and Coelli and Kumbhakar models. Theta (θ), the variance component introduced in the TRE model to control for unobserved heterogeneity among the cross-sectional units, was statistically significant at the 1% level. This indicates that the TRE model was able to disentangle time-invariant unobserved heterogeneity from inefficiency (u_i).

Estimated technical efficiency scores

Using the JLMS estimator developed by Jondrow et al. [23], as described by Greene [18], we calculated the technical efficiency of each health system for the four time-varying models. Based on the average of the estimated efficiency scores from the four models, we ranked each of the 46 SSA nations in the sample. A summary of the estimated technical efficiency scores and ranks is provided in Appendix A4.

From Table 5, the average estimated health system technical efficiency ranges from a minimum of 0.854 (Kumbhakar model) to a maximum of 0.988 (TRE model) across the four models; the average of the four models was estimated at 0.942.
Another possible source of inefficiency associated with public health financing might be a phenomenon widely described as 'Baumol's disease'. In health economics, health systems in which public healthcare expenditure exhibits strongly decreasing returns (converging to 0) with respect to health outcomes, such that large health expenditures have limited impact on the health status of the population, are described as suffering from 'Baumol's disease' [66]. This phenomenon usually arises when increases in public healthcare expenditure are largely absorbed by expensive healthcare services and products that benefit small sub-populations, to the detriment of services and products that produce both large positive externalities and induced increasing returns, such as vaccinations against childhood diseases.

The conclusions of some previous studies are partially supported by these results [67], though they contradict a priori expectations and other empirical studies [68,69]. In contrast to more fragmented mixed and private financing systems, public-dominated healthcare financing systems are Beveridge-style single-payer tax-funded systems that rely on a small number of revenue sources; financing is concentrated, and private insurance for medical services is limited [21,68]. In theory, single-payer systems should have the advantages of lower administrative costs, monopsony power that controls provider costs, and limits on consumer choice that control the resources devoted to healthcare [68,70]. However, single-payer systems may suffer from low access to healthcare services [71] and inefficient utilization of healthcare resources due to poor governance [56,65].

The results in Table 6 indicate that, for a given level of health expenditure, countries that offer social health insurance schemes, compared with those that offer alternative schemes, perform better in improving their health system efficiency to achieve better health outcomes. This evidence is pervasive across all three models. Indeed, further analyses of the data (see Appendix A5) reveal that countries with SHI spent far less (M = $156, SE = 6.61) in per capita terms than countries without SHI (M = $263, SE = 14.23), and the difference was statistically significant at the 1% level [t(908) = 6.26, p < 0.01]. Nevertheless, health outcomes (measured in terms of infant mortality rate) in countries that adopted SHI to finance healthcare services are better (M = 59.71, SE = 0.99) than in those without SHI (M = 60.39, SE = 1.21). These results are consistent with previous empirical and theoretical studies [72-74] and thus serve as cross-validation of the previous results. For instance, Green et al. [72] noted that SHI improves the efficiency of the healthcare system and helps patients obtain primary health care at lower cost. Social health insurance pools both healthcare funds and health risks, which enhances cross-subsidization of healthcare costs and thus promotes health system efficiency in achieving better health outcomes [74].

Furthermore, given that the compulsory financing arrangement has a positive and significant coefficient, more people having prepaid health coverage will result in a more efficient healthcare system. This finding coincides with a priori expectations and with the results obtained by Wranik [21] and Gerdtham et al. [75], who found that health systems that offer insurance coverage to a larger percentage of the population are more efficient.
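The SHI spending comparison above is a straightforward two-sample test. The sketch below shows how such a comparison could be run; the file name and the column names (`he_pc` for per capita health expenditure, `shi` for the SHI indicator) are hypothetical placeholders standing in for the actual dataset fields.

```python
import pandas as pd
from scipy import stats

# Hypothetical panel: one row per country-year, 2000-2019 (910 obs in the paper)
df = pd.read_csv("ssa_panel.csv")  # assumed file; 'he_pc' and 'shi' are placeholder columns

with_shi = df.loc[df["shi"] == 1, "he_pc"]
without_shi = df.loc[df["shi"] == 0, "he_pc"]

# Pooled-variance t-test, matching the reported t(908) degrees of freedom
t, p = stats.ttest_ind(with_shi, without_shi, equal_var=True)
print(f"t({len(df) - 2}) = {t:.2f}, p = {p:.4f}")
print(f"mean with SHI = {with_shi.mean():.0f}, mean without SHI = {without_shi.mean():.0f}")
```

Note that with two groups totaling 910 observations, the pooled t-test has 910 - 2 = 908 degrees of freedom, which is how the reported t(908) arises.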
In fact, the data used in this study show a fairly strong, statistically significant correlation (r = -0.724, p < 0.01) between the percentage of the population in compulsory financing schemes and out-of-pocket payments as a percentage of total health expenditure. This suggests that government-subsidized or privately-funded insurance coverage provides financial protection against out-of-pocket payments for the insured and improves the efficiency of health systems in SSA in delivering better health outcomes.

General government health financing has a significantly positive association with the efficiency of health systems (see Table 6). This implies that investment by governments in the health sector improves the efficiency of health systems in SSA. In previous studies, government health expenditure was found to have a positive association [75-77], a negative association [64,65,78], and no association [57,59,69,79] with health system performance. The positive impact of government health expenditure on efficiency favors the Abuja proposition that governments in Africa should invest at least 15% of their budgets in health.

The coefficient of out-of-pocket payment is negative and statistically significant. This implies that out-of-pocket payments for healthcare reduce health system efficiency in SSA. This finding suggests the need for policymakers to design and implement healthcare financing schemes that have the potential to reduce out-of-pocket payments to the barest minimum. The negative effect of out-of-pocket payment on health system efficiency is consistent with several past studies [33,80]. However, Ogloblin [63] found a positive relationship between out-of-pocket spending and health system efficiency, perhaps because out-of-pocket spending constituted a relatively small proportion of total health expenditure in the sample used in that study (which excluded all low-income and war-torn countries).

When one-step stochastic frontier analysis is used, as proposed by Battese and Coelli [20], to test the sensitivity of the empirical findings, with the production function and inefficiency effects evaluated concurrently, similar results are obtained (see Table 7). Since the inefficiency scores are used as the dependent variable, a negative coefficient for a variable implies that the variable reduces inefficiency. In other words, an increase in the value of the variable leads to a decrease in the inefficiency score, which is desirable, as it indicates that the health system is operating closer to the production frontier.
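For readers unfamiliar with the one-step approach, the following is a minimal sketch of the Battese and Coelli (1995)-style specification implied here. The particular covariates collected in z_it (SHI coverage, government financing share, out-of-pocket share, and so on) are assumptions based on the variables discussed above, not the authors' published equation.

```latex
% One-step ("inefficiency effects") frontier: frontier and inefficiency
% equation are estimated jointly by maximum likelihood.
y_{it} = \mathbf{x}_{it}' \boldsymbol{\beta} + v_{it} - u_{it},
\qquad u_{it} \sim N^{+}\!\left(\mathbf{z}_{it}' \boldsymbol{\delta},\, \sigma_u^2\right),
\qquad v_{it} \sim N\!\left(0,\, \sigma_v^2\right).
```

A negative element of δ shifts the mean of the inefficiency distribution downward, i.e., an increase in that covariate moves the health system toward the frontier, which is exactly how the coefficient signs in Table 7 are read above.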
Conclusion and policy implications

In the past two decades, there has been a wave of healthcare financing reforms across most countries in the SSA region. This paper examines the impact of these reforms on health system efficiency in improving health outcomes. The paper finds evidence that an increase in out-of-pocket payment is associated with a decrease in health system efficiency and worsening health outcomes, while increases in compulsory healthcare financing coverage and general government health spending contribute positively to health system efficiency and thereby improve health outcomes. The findings also show that healthcare financing structures influence the efficiency of health systems. Health systems that are predominantly financed through public resources (i.e., state-funded health systems) are the least efficient relative to the other three types: external, private, and mixed. The paper further provides evidence indicating that social health insurance coverage significantly improves health system efficiency and health outcomes. This study thus provides additional policy-relevant analysis of the healthcare financing arrangements that contribute to efficiency. The key policy implications of the results of this study are as follows.

First, there is evidence that financing healthcare via social health insurance and other compulsory financing mechanisms improves health system efficiency. This finding is also supported by previous empirical and theoretical studies and thus serves as cross-validation of those results. It emphasizes the importance of pooling funds to finance healthcare.

Second, the evidence that predominantly state-funded healthcare systems are the least efficient in SSA calls for a rigorous reappraisal of the prioritization of health projects and programs undertaken by governments, with the view of improving their performance on health outcomes. It also calls for policy designs that will reduce the bureaucracy and corruption associated with publicly-funded healthcare programs and projects.

Finally, based on the sign and statistical significance of the healthcare financing system type, donor-funded health systems perform better in improving health system efficiency than publicly-funded healthcare systems in SSA. This finding calls for serious reconsideration of the consensus reached by members of Harmonization for Health in Africa, made up of health and finance ministers in Africa, during the 61st session of the WHO Regional Committee in Yamoussoukro, that donor sources of health financing should "only play a catalytic role, and the bulk of funding for health should be mobilized from domestic sources". Based on this study, however, domestic sources of funding (at least the general government expenditure component) fail to improve health system efficiency, and hence health outcomes, as much as donor sources of health financing do in SSA. This reinforces the earlier points about reappraising the prioritization of general government health expenditure and increasing the share of social health insurance in domestic total health expenditure, so that when donor sources of health expenditure become erratic, the health systems in SSA will not suffer.
This paper, like any other research study, has some limitations, which provide opportunities for future studies to refine its findings. First, the empirical results must be interpreted with caution, since the stochastic frontier analysis framework is designed to measure association, not to establish causal relationships. Second, the method used to classify health systems is crude, as it fails to account for the details and complexities of each healthcare system. It is therefore recommended that future studies explore additional features of health systems, such as the degree of centralization, gatekeeping and cost-sharing arrangements, and methods of payment to primary and specialist physicians.

Again, the infant survival rate as the outcome variable of health systems might not be adequate to capture the total contribution of health systems to improving quality of life. It is recommended that future studies explore other health outcome variables, such as healthy life expectancy at birth (HALE) and disability-adjusted life expectancy (DALE), that measure preventable years lost both to death and to poor quality of life.

Further, high-quality data on hospital bed density (capital stock input), physician density, and nurse and midwife density (labor stock input) are limited for most SSA countries. This makes it difficult to estimate a stochastic frontier production function accurately and requires researchers to use healthcare expenditure per capita as the main healthcare input variable. Future studies should explore the use of these healthcare inputs in the estimation of the SFA production function when high-quality data become available.

Despite these limitations, this study presents valuable guiding evidence for policy-making purposes. Evidence from this study supports the extension of social health insurance and other forms of compulsory healthcare financing coverage and the need to reappraise the prioritization of domestic general government health expenditure.

Table 1. Healthcare financing system characteristics. Notes: … = Domestic General Government Health Expenditure as a percentage of Current Health Expenditure; PVT-D = Domestic Private Health Expenditure as a percentage of Current Health Expenditure; EXT = External Health Expenditure as a percentage of Current Health Expenditure; HFST = Healthcare Financing System Type; SHI = Social Health Insurance. The healthcare financing characteristics were assessed solely on the basis of data in the World Health Organization's Global Health Expenditure Database (GHED). Twenty-year (2000 to 2019) averages of the three sources of healthcare financing (public, private, and external) were computed for each health system, and the source that predominated (i.e., provided 50% or more) was used to characterize the health system. If none of the three sources predominated (i.e., each source provided less than 50%), the health system is characterized as Mixed. With regard to Social Health Insurance (SHI), 'Yes' under SHI means that the country had social health insurance as one of its healthcare financing policy tools, while 'No' indicates otherwise. Footnotes: a, since 2012; b, since 2016; c, since 2005; d, since 2007; e, since 2006; f, since 2014.
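The classification rule in the Table 1 notes is simple enough to express directly. The sketch below implements it as a reading aid; the column names (`pub`, `pvt`, `ext` for the three financing shares) are hypothetical placeholders, not the authors' actual code.

```python
import pandas as pd

def classify_health_systems(df: pd.DataFrame) -> pd.Series:
    """Classify each country's financing system from 2000-2019 mean shares.

    df: one row per country-year with columns 'country', 'pub', 'pvt', 'ext'
        (shares of current health expenditure, in percent; hypothetical names).
    Returns a Series mapping country -> 'Public' | 'Private' | 'External' | 'Mixed'.
    """
    means = df.groupby("country")[["pub", "pvt", "ext"]].mean()

    def label(row):
        # The predominant source (>= 50% of the 20-year average) names the
        # system; if no source predominates, the system is 'Mixed'.
        if row["pub"] >= 50:
            return "Public"
        if row["pvt"] >= 50:
            return "Private"
        if row["ext"] >= 50:
            return "External"
        return "Mixed"

    return means.apply(label, axis=1)
```

Because the three shares sum to roughly 100%, at most one source can exceed 50%, so the rule assigns each country a unique type.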
Table 3 presents the descriptive summary statistics of the selected health outcome, input, control, and health financing policy variables. The results show that, on average, the infant mortality rate in SSA between 2000 and 2019 is approximately 60 per 1,000 live births per year. Seychelles, Mauritius, Cabo Verde, and Botswana recorded the lowest infant mortality rates, while Sierra Leone, the Central African Republic, Liberia, and Angola had the highest. The infant mortality rate in SSA decreased from an average of approximately 81 cases in 2000 to 45 cases in 2019 (see Appendix A2), and ranged from a minimum of 11.8 deaths per thousand live births in Seychelles in 2005 to a maximum of 139.5 in Sierra Leone in 2000.

Table 2. Definitions of variables and data sources. Out-of-pocket payment: health expenditure through out-of-pocket payments, measured as a percentage of total health spending (source: WHO-GHED). Notes: WHO-GHED = World Health Organization's Global Health Expenditure Database; WB-WDI = World Bank's World Development Indicators; THE = total health expenditure. Source: WHO-GHED 2021 update, based on the System of Health Accounts (SHA 2011) methodology.
Table 4. Estimated stochastic frontier models (dependent variable: infant survival rate).
Table 6. Results of Tobit regression with efficiency as dependent variable.
Table 7. One-step results of health production and inefficiency component.
Alternative management of the left subclavian artery in thoracic endovascular aortic repair for aortic dissection: a single-center experience

Background
Since the 2009 guidelines for left subclavian artery (LSA) management during thoracic endovascular aortic repair (TEVAR), few studies have been published on alternative LSA management. The objective of this study was to present the follow-up results of covered or revascularized LSA during TEVAR.

Methods
From January 2010 to August 2012, 109 consecutive patients were treated with TEVAR at the Department of Vascular Surgery, Changhai Hospital, for aortic dissection extending near the LSA. After evaluation of the bilateral vertebral arteries, 52 LSAs were covered and not revascularized (covered group), while 57 LSAs were preserved (revascularized group). Complications were stratified according to the time of occurrence after surgery.

Results
Emergency operations were more common (17.3 vs. 3.5%, P = 0.017) and operation time was shorter (96.9 ± 16.3 vs. 135.3 ± 38.4 min, P < 0.001) in the covered group. Pulselessness and intermittent claudication of the left arm occurred in most patients in the covered group (P < 0.001). The incidence of stroke and of a cold shoulder feeling was higher in the covered group than in the revascularized group (P = 0.026 and <0.001, respectively). There were five aorta-related deaths in the covered group and one in the revascularized group. Eight endoleaks were observed in the revascularized group (P = 0.006).

Conclusions
The results of this study suggest that, given the occurrence of complications, the LSA should be preserved or revascularized to reduce complications and to improve patients' quality of life.

Background
Aortic dissection is the disruption of the aortic media with bleeding within and along the aortic wall, resulting in separation of the aortic layers [1]. About two thirds of acute aortic dissections occur in men and in patients aged >60 years [2,3]. The estimated annual incidence of aortic dissection is 2-6 per 100,000 individuals [4,5]. Risk factors are hypertension, direct blunt trauma, pheochromocytoma, cocaine use, weight lifting, aortic coarctation, and some genetic syndromes [1,2]. Aortic dissection may result in aortic rupture, aortic valve insufficiency, end-organ complications, and death [1,2].

The advent of thoracic endovascular aortic repair (TEVAR) has altered the management algorithm for aortic dissections [6]. The increased use of TEVAR has been driven by the advantages reported in older patients with greater comorbidities who have been judged unfit for direct open surgery and optimal medication regimens [7,8]. An adequate length of proximal landing zone is a prerequisite for endovascular therapy [9]. Therefore, covering the left subclavian artery (LSA) with a thoracic stent graft to achieve an adequate landing zone is sometimes inevitable. However, there is controversy in the literature regarding whether to simply cover the LSA or to revascularize it. Several studies have concluded that the risks associated with simply covering the LSA are low and that subclavian artery bypass could be performed in cases of obvious postoperative complications, such as left arm claudication or vertebrobasilar insufficiency [10-13]. Conversely, other studies identified an increased risk of neurologic complications, specifically stroke and spinal cord ischemia, following LSA coverage [14-16].
In 2009, the Society for Vascular Surgery published clinical practice guidelines for LSA management during TEVAR [17]. The guidelines proposed three recommendations to address the LSA. The first two apply to elective TEVAR and suggest revascularization as the most suitable approach. The third recommendation suggests that revascularization should be individualized and addressed on the basis of anatomy, urgency, and the availability of surgical expertise in patients who need very urgent TEVAR for life-threatening acute aortic syndromes in which achieving a proximal seal necessitates coverage of the LSA. However, revascularization can be performed after emergency TEVAR. These guidelines therefore do not resolve the controversy, and more results are needed to assess this point. Building on our previous experience with endovascular treatment and the branches of the aortic arch [18-22], the present study aimed to assess the outcomes of patients who underwent TEVAR for aortic dissection and to compare the outcomes of patients who had their LSA covered with those who had a revascularized LSA.

Patients
This was a single-center retrospective study of patients with aortic dissection treated by TEVAR (n = 109) at the Vascular Surgery Department of Changhai Hospital, Shanghai, China, between January 2010 and August 2012. The study protocol was approved by the Ethics Committee of the hospital, and the need for individual consent was waived by the committee.

Outcome and follow-up
All patients were followed up with CTA of the aorta and the branches of the aortic arch at 6-month intervals for the first year and then once annually. The primary adverse events were stroke, paraplegia, and death. Follow-up was censored in December 2013.

Statistical analysis
Statistical analysis was performed using SPSS 19.0 (IBM, Armonk, NY, USA). Categorical variables are presented as numbers and proportions and were analyzed using chi-square or Fisher's exact tests, as appropriate. Continuous variables are presented as means ± standard deviations or as medians (ranges) and were analyzed using t-tests or nonparametric tests, as appropriate. Event-free survival was analyzed using the Kaplan-Meier method, and curves were compared using the log-rank test, as illustrated in the sketch below. Two-tailed P values <0.05 were considered significant.

Patient characteristics
Patient characteristics are presented in Table 1. The mean age at onset was 56.2 ± 9.6 years, and the majority of the patients were male (86.2%). Ten patients were older than 70 years of age. The mean body mass index was 23.8 ± 3.2 kg/m². Ninety-three patients had a history of hypertension, and 41 patients were smokers at the time of admission. Associated comorbidities were chronic obstructive pulmonary disease (n = 7), diabetes mellitus (n = 11), stroke (n = 4), myocardial infarction (n = 9), and angina (n = 5). The proximal entry tears were located in the proximal descending aorta in 62 patients, in the arch in 32 patients, and in the ascending aorta in 15 patients. Three patients underwent preoperative hemodialysis, and one patient had preexisting congestive heart failure in the revascularized group. Two patients had renal failure and seven patients had pneumonia. General anesthesia was administered to 10 patients in the covered group and 14 in the revascularized group. The number of emergency procedures was nine in the covered group and two in the revascularized group (P = 0.017).
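As referenced in the statistical analysis section above, the survival comparison can be reproduced with standard tools. The sketch below uses the Python `lifelines` package with hypothetical arrays of follow-up times (months) and event indicators, since the patient-level data are not public.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and event flags (1 = adverse event)
t_cov = np.array([7, 12, 16, 20, 28, 34, 40, 44])   # covered group (illustrative)
e_cov = np.array([1, 1, 0, 1, 0, 0, 1, 0])
t_rev = np.array([10, 18, 24, 30, 36, 42, 46, 48])  # revascularized group (illustrative)
e_rev = np.array([0, 0, 1, 0, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_cov, event_observed=e_cov, label="covered")
print(kmf.survival_function_.tail(1))  # event-free survival at end of follow-up

kmf.fit(t_rev, event_observed=e_rev, label="revascularized")
print(kmf.survival_function_.tail(1))

# Log-rank comparison of the two Kaplan-Meier curves
res = logrank_test(t_cov, t_rev, event_observed_A=e_cov, event_observed_B=e_rev)
print(f"log-rank p = {res.p_value:.3f}")
```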
The time taken for the surgical procedure was 96.9 ± 16.3 min in the covered group and 135.3 ± 38.4 min in the revascularized group (P < 0.001).

Indications for alternative management of the LSA
The reasons for alternative management of the LSA are presented in Table 2. There were 38 patients (including five emergency procedures) with a dominant right vertebral artery confirmed by preoperative CTA. Eleven patients (including four emergency procedures) had equipotent bilateral vertebral arteries. Three cases of LSA thrombosis were included in the covered group. The revascularized group comprised 41 patients (including two emergency procedures) with a dominant left vertebral artery. Three patients who underwent preoperative hemodialysis had a functional arteriovenous shunt in the left arm. Other indications for revascularization of the LSA included an occluded right vertebral artery (n = 3), planned long-segment coverage of the descending thoracic aorta (n = 5), a patent left internal mammary artery to coronary artery bypass graft (n = 2), and bilateral internal carotid artery stenosis (n = 3).

Selection of revascularization method
The methods selected to preserve the LSA in the revascularized group are shown in Table 3. Eleven patients underwent bypass grafting, six the scallop or fenestration techniques, 12 the chimney approach, and 28 received single-branched stent grafts.

Complications observed during follow-up
The median follow-up period was 34 months (range, 16 to 48 months). All of the preserved subclavian arteries remained patent, and all of the proximal entry tears were successfully occluded. No access site complications occurred. Complete thrombus formation in the false lumen of the aorta was demonstrated in all patients, along with significant true lumen recovery and false lumen shrinkage. Complications during follow-up are presented in Table 4. Two strokes occurred on the third and sixth days after the procedures, and five were observed during the mid- to long-term follow-up periods in the covered group (P = 0.026). Forty-six cases of pulselessness were observed, in which patients had Doppler signals but no palpable radial artery pulse (P < 0.001), and 24 patients in the covered group suffered from intermittent claudication of the left arm during physical activity (P < 0.001). Fifteen and two patients complained of a cold shoulder feeling in the covered group and revascularized group, respectively (P < 0.001). There was one aorta-related death in the covered group on the seventh day; the four other deaths in that group occurred in the second, fourth, fifth, and seventh months after the procedure. None of the abovementioned complications occurred in the revascularized group during short-term follow-up. However, some complications occurred during the mid- to long-term follow-up. Paraplegia was observed in four and two patients in the covered and revascularized groups, respectively. A significant difference in endoleak occurrence was observed between the two groups (0 vs. 8, P = 0.006). The number of complications in the covered group was much higher than in the revascularized group. In addition, patients in the covered group developed more complications in the third and sixth months after TEVAR, while the highest number of complications in the revascularized group occurred in the second month after TEVAR.
In the covered group, coils were used as an adjunctive technique in 16 patients; this method was used when a type II endoleak was caused by collateral reflux. The LSA orifice was then occluded with coils after puncture of the left brachial artery. We compared complication rates in the subgroup of patients treated with coils and those treated without coils, but found no significant difference between them (Table 5). To evaluate whether the entry point location had an influence on the complication rate, we compared the number of complications for patients subgrouped according to the location of entry tears. The data are presented in Table 5 and show no significant difference between them. Figure 1 presents event-free survival. Compared with the covered group, the revascularized group had better 4-year event-free survival (93.0 vs. 69.2%, P = 0.002). During follow-up, twelve patients in the covered group underwent revascularization of the LSA to improve their quality of life.

Discussion
The objective of the present study was to present the follow-up results of covered or revascularized LSA during TEVAR. The results showed that emergency operations were more common and operation time was shorter in the covered group. Pulselessness and intermittent claudication of the left arm occurred in most patients in the covered group. The incidence of stroke and of a cold shoulder feeling was higher in the covered group than in the revascularized group. There were five aorta-related deaths in the covered group and two in the revascularized group. Eight endoleaks were observed in the revascularized group.

To avoid complications induced by stent-graft migration, the seal zones of stent grafts are required to be no less than 2 cm [9]. In nearly 40% of all patients, this proximal landing zone involves covering the LSA [28]. However, the management of the LSA in the setting of intentional coverage during TEVAR remains controversial. In 2009, the Society for Vascular Surgery developed clinical practice guidelines for the management of the LSA with TEVAR and offered three main recommendations [17]. In the present study, different treatment strategies were used after evaluating the patients' conditions and blood supply, based on the three recommendations from the guidelines. The results showed that most non-revascularized patients had left arm complications, such as pulselessness and intermittent claudication. In addition, over-stenting of the LSA without revascularization was associated with a relatively high incidence of stroke and of a cold shoulder feeling compared with preoperative revascularization of the LSA. The left vertebral artery originating from the LSA is a primary component of the vertebrobasilar system, which divides into the two posterior cerebral arteries and supplies two fifths of the blood to the brain [10]. More than 60% of individuals have a dominant left vertebral artery, which has been used to justify routine preoperative LSA revascularization [15,17,29]. In addition, the guidelines state that the LSA may be covered under certain conditions [17]. Even in the absence of life-threatening symptoms, some benign symptoms may lower patients' quality of life. No other organ suffers more readily from an irreversible insult than the brain when its blood supply is insufficient.
In patients with a dominant right vertebral artery or equipotent bilateral vertebral arteries, covering the LSA would be devastating in the event of right vertebral artery stenosis or occlusion, such as by a thrombus resulting from atrial fibrillation. Indeed, the LSA is the primary artery to the left arm and a source of blood flow to the brain and spinal cord. Given the extensive circulation provided by the LSA, covering it during TEVAR may not be inconsequential, and it is associated with an increased risk of anterior and posterior stroke or spinal cord ischemia compared with patients in whom this artery is not covered [8,13,15,30]. A multicenter registry analysis concluded that the incidence of paraplegia due to spinal cord ischemia and of stroke was higher in LSA-covered patients than in those who received prophylactic revascularization [31]. The guidelines suggest that the LSA must be revascularized in certain situations [17], such as bilateral internal carotid artery disease, an isolated left brain hemisphere, and an incomplete circle of Willis. In the present study, one patient was referred to our center urgently, eliminating the possibility of revascularizing the LSA. Unfortunately, the patient died on the seventh day after the emergency procedure due to an acute cerebral infarction. It is possible that this patient would have survived if postoperative revascularization had been performed.

The present study is not without limitations. First, it was a retrospective study performed in a small number of patients. In addition, a number of different approaches were used to revascularize the LSA, which could lead to bias; however, the present study analyzed the patients as patent/non-patent LSA. Further large multicenter studies are required to assess these points.

Conclusions
Some complications were observed when covering the LSA during TEVAR. Therefore, the LSA should be preserved or revascularized if possible, whether preoperatively or postoperatively. In patients who are referred urgently, postoperative revascularization should be performed when possible.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
LZ participated in the conception and design, data collection, analysis and interpretation, and statistical analysis, and wrote the manuscript. QSL and ZPJ participated in the conception and design and data collection, obtained funding, provided critical revision of the article, and take overall responsibility for this study. JZ and ZQZ participated in the data collection, analysis and interpretation, and statistical analysis, and provided critical revision of the article. JMB participated in the data collection and analysis and interpretation, and provided critical revision of the article. All authors read and approved the final manuscript.
Novel pharmacologic treatment in acute binge eating disorder: role of lisdexamfetamine

Binge eating disorder (BED) is the most common eating disorder and an important public health problem. It is characterized by recurrent episodes of excessive food consumption accompanied by a sense of loss of control over the binge eating behavior, without the inappropriate compensatory weight loss behaviors of bulimia nervosa. BED affects both sexes and all age groups and is associated with medical and psychiatric comorbidities. Until recently, self-help and psychotherapy were the primary treatment options for patients with BED. In early 2015, lisdexamfetamine dimesylate, a prodrug stimulant marketed for attention deficit hyperactivity disorder, was the first pharmacologic agent to be approved by the US Food and Drug Administration for the treatment of moderate or severe BED in adults. This article summarizes the clinical presentation of BED and discusses the pharmacokinetic profile, efficacy, and safety of lisdexamfetamine dimesylate in the treatment of BED in adults.

Introduction to binge eating disorder (BED): management challenges

BED is a newly recognized clinical entity in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [1] and an important public health problem worldwide. BED is the most common eating disorder, and recent data from the World Health Organization World Mental Health Survey, which included community surveys of 24,124 adult respondents across 14 countries on four continents, found a lifetime prevalence of BED of 1.4% [2]. In the USA, the lifetime prevalence of BED has been estimated at 2.6% [2,3]. The DSM-5 diagnostic criteria for BED, including indicators for severity, are listed in Table 1. BED is characterized by recurrent episodes of binge eating (BE), defined as eating, in a discrete period of time (~2 hours), an amount of food larger than most people would eat under similar circumstances, together with a sense of loss of control over the eating, without the inappropriate compensatory behaviors of bulimia nervosa (BN), for example purging, vomiting, or excessive use of diuretics or laxatives. The BE episodes occur on average at least once a week for 3 consecutive months and are associated with feelings of guilt, depression, or distress. Patients with BED often eat in secrecy; they are embarrassed by the BE behavior and their perceived inability to control the urges to overeat. During a BE episode, the patient might eat more rapidly than normal, eat until feeling uncomfortably full, or eat large amounts of food when not feeling physically hungry. BED co-occurs with a plethora of psychiatric disorders, most commonly mood and anxiety disorders [4]. Indeed, approximately four out of five adults with lifetime BED have at least one comorbid psychiatric disorder, and approximately one out of two has three or more comorbid psychiatric disorders [3]. Obesity and its complications are among the medical comorbidities associated with BED. Growing evidence suggests that BED may independently increase the risk of development of certain components of the metabolic syndrome, such as diabetes, hypertension, and dyslipidemia, over and above the risk attributable to obesity alone [5]. Preliminary data indicate that the cardiovascular system, reproductive system, and cortisol response might also be affected in BED patients [6].
Psychological interventions have been recommended as first-line treatment for BED and are supported by meta-analytic reviews [7]. Cognitive behavioral psychotherapy, interpersonal therapy, and structured self-help, based mainly on cognitive behavioral techniques, are effective for reducing BE symptoms and associated psychopathology, but not for weight loss [8]. Various classes of medications, including antidepressants, antiepileptic drugs, antiobesity drugs, and medications approved for attention deficit hyperactivity disorder (ADHD), have been tested in randomized, placebo-controlled trials in BED and found helpful in improving BE behavior and eating-related psychopathology [9]. All medications used for BED until 2015 had limitations related to their efficacy or adverse event (AE) profiles. Additionally, they were prescribed "off label" and their use in general practice was limited. On January 30, 2015, lisdexamfetamine dimesylate (LDX) received approval from the US Food and Drug Administration (FDA) for the treatment of moderate to severe BED in adults. LDX is the only medication currently approved for the treatment of BED [10] and the second medication approved for the treatment of any eating disorder, after fluoxetine was approved for BN in 1997. The review below summarizes the pharmacology of LDX, the rationale for its use in BED, and its tolerability in BED. The clinical use of LDX in BED is also discussed.

Pharmacology, mode of action, and pharmacokinetics of LDX
LDX (an abbreviation of l-lysine-dextroamphetamine) is a novel prodrug of dextroamphetamine (d-amphetamine) covalently linked to the amino acid l-lysine. LDX itself is pharmacologically inactive and is metabolized to d-amphetamine by a unique mechanism involving an enzymatic process predominantly associated with red blood cells. The pharmacology of d-amphetamine is complex and has been documented since the early 20th century, when preparations containing d-amphetamine were used in World War II as a "go pill" to promote alertness and focus in Air Force pilots [11]. In vitro, d-amphetamine is a moderately potent inhibitor of DAT, NET, and VMAT2, with much weaker affinity for SERT. d-Amphetamine is also a weak MAO inhibitor. The net effect of these multiple activities in vivo is increased catecholamine availability in the extracellular space [12]. LDX (Vyvanse®) was developed by New River Pharmaceuticals (Radford, VA, USA) in the late 1990s with the intention of creating a longer-lasting formulation of d-amphetamine with lower abuse potential.

Table 1. DSM-5 diagnostic criteria for BED
A. Recurrent episodes of BE. An episode of BE is characterized by both of the following: 1) eating, in a discrete period of time (for example, within any 2-hour period), an amount of food that is definitely larger than most people would eat in a similar period of time under similar circumstances; 2) a sense of lack of control over eating during the episode (for example, a feeling that one cannot stop eating or control what or how much one is eating).
B. The BE episodes are associated with three (or more) of the following: 1) eating much more rapidly than normal; 2) eating until feeling uncomfortably full; 3) eating large amounts of food when not feeling physically hungry; 4) eating alone because of feeling embarrassed by how much one is eating; 5) feeling disgusted with oneself, depressed, or very guilty afterward.
C. Marked distress regarding BE is present.
D. The BE occurs, on average, at least once a week for 3 months.
E. The BE is not associated with the recurrent use of inappropriate compensatory behavior (for example, purging) and does not occur exclusively during the course of anorexia nervosa, bulimia nervosa, or avoidant/restrictive food intake disorder.

New River Pharmaceuticals was bought by Shire Inc. (Dublin, Ireland) in 2007, a few months before LDX was marketed for adult ADHD.

Pharmacodynamics and pharmacokinetics
LDX has high aqueous solubility and low lipophilicity, and is rapidly absorbed intact after oral administration in animals and humans, attaining maximum plasma concentrations (C_max) at 0.25-3 hours. It is inactive at receptors, transporters, and enzymes in vitro, and its absorption occurs via an active transport process. Following absorption, LDX is hydrolyzed by peptidases associated with red blood cells to release the active drug, d-amphetamine, and a naturally occurring amino acid, l-lysine. Red blood cells have a high capacity for the metabolism of LDX to d-amphetamine, and substantial hydrolysis occurs even at low hematocrit levels. LDX is not metabolized by CYP enzymes and does not cross the blood-brain barrier; its metabolism is restricted to the formation of d-amphetamine and l-lysine. d-Amphetamine and its metabolites are eliminated largely in urine, with small amounts excreted in feces and bile [13]. (A schematic simulation of this prodrug cascade is sketched at the end of this section.)

Indication
In the USA, LDX (Vyvanse®) received FDA approval for the treatment of ADHD in children in 2007, in adults with ADHD in 2008, and for maintenance treatment of adult ADHD in 2012. In early 2015, LDX was approved for the treatment of moderate and severe BED in adults. LDX has also been granted approval in Europe for the treatment of ADHD in children, adolescents, and adults. LDX is not marketed for BED in any country other than the USA. Preliminary data suggest that LDX might be effective as adjunctive therapy to antipsychotics in adults with clinically stable schizophrenia [14], to mood stabilizers in adults with bipolar depression [15], and to antidepressants in adults with major depressive disorder [16,17]. LDX has also demonstrated potential as an efficacious treatment for patients with multiple sclerosis with cognitive impairment [18].

Rationale for LDX in BED
Preclinical, genetic, clinical, and neuroimaging data suggest that BE may involve dysfunction of the dopamine (DA) and norepinephrine (NE) systems, which are important in regulating eating behavior and reward [19,20]. BE rodents have lower D2-like DA receptor binding selectivity in the mesoaccumbens DA system [21]. Methylphenidate reduced sucrose intake in an animal model of BE [21]. A recent study demonstrated that LDX, via its metabolite d-amphetamine, reduced chocolate binging in rats by 71%, partly by indirect activation of alpha1-adrenoceptors and perhaps D1 receptors [22]. Eating disorders characterized by BE have been associated with the hypofunctional short allele of the 3′-UTR VNTR polymorphism of the DA transporter gene [23]. Also, BE behavior was found to have a moderate association with the hypofunctional seven-repeat allele of the DA D4 receptor gene and increased maximal lifetime body mass index (BMI) in women with seasonal affective disorder, a condition characterized by overeating, carbohydrate craving, and weight gain [24]. Preliminary data suggest that agents like LDX that facilitate DA and/or NE neurotransmission may reduce BE in humans.
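To make the prodrug kinetics described above more tangible, the sketch below simulates a simple two-step cascade: first-order hydrolysis of circulating LDX to d-amphetamine, followed by first-order elimination. Absorption is assumed instantaneous for simplicity, and all rate constants are hypothetical placeholders chosen only to produce a plausible curve, not published LDX parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/h); illustrative only
K_HYD = 1.5    # hydrolysis of circulating LDX to d-amphetamine by RBC peptidases
K_ELIM = 0.07  # elimination of d-amphetamine (largely renal)

def prodrug_cascade(t, y):
    ldx, damp = y
    dldx = -K_HYD * ldx                   # LDX disappears only via hydrolysis (no CYP metabolism)
    ddamp = K_HYD * ldx - K_ELIM * damp   # d-amphetamine is formed, then eliminated
    return [dldx, ddamp]

# Unit dose of LDX placed in the circulation at t = 0
sol = solve_ivp(prodrug_cascade, (0, 24), y0=[1.0, 0.0], dense_output=True)
t = np.linspace(0, 24, 200)
damp = sol.sol(t)[1]
print(f"simulated d-amphetamine t_max = {t[damp.argmax()]:.1f} h")
```

The qualitative point of the sketch is that because the active species is generated gradually from the prodrug, the d-amphetamine concentration peaks later and more smoothly than an equivalent direct dose would, which is consistent with the longer-lasting profile described above.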
The selective NE reuptake inhibitor atomoxetine has been shown to reduce BE and body weight in one placebo-controlled study of BED in adults [25]. Moreover, there are reports of stimulants reducing BE in patients with BN, a condition closely related to BED. Neuroimaging studies demonstrated that food stimuli, when administered with methylphenidate to amplify DA signals, significantly increased DA in the caudate and putamen in obese binge eaters but not in obese non-binge eaters, and the increases in DA in the caudate were significantly correlated with BE scores [26]. Additionally, striatal DA release was significantly associated with the frequency of BE in a controlled positron emission tomography imaging study of 17 subjects with BN [27]. In sum, the pharmacologically active d-amphetamine released by LDX hydrolysis inhibits the reuptake of DA and NE from the synaptic cleft and simultaneously enhances the release of DA, NE, and serotonin. By regulating these neurotransmitter systems, which are involved in the regulation of appetite, hunger, and eating behaviors, it was hypothesized that LDX might reduce pathological overeating and be an efficacious treatment for BED.

Efficacy studies of LDX in BED
Shire Inc. sponsored a BED clinical development program that included three randomized, placebo-controlled studies in acute adult BED: an 11-week Phase II proof-of-concept study and two identically designed 12-week Phase III trials. Basic study information, demographics, and primary and key secondary measures of the three studies are summarized in Table 2 and described below.

Phase II study (NCT01291173)
The Phase II study was an 11-week randomized, placebo-controlled, fixed-dose, parallel-group multicenter trial in 259 adults across 30 sites in the USA, aged 18-55 years, diagnosed with BED, and with a BMI between 25 and 40 kg/m² [28]. Eligible subjects were randomized to receive LDX 30, 50, or 70 mg/day or placebo in a 1:1:1:1 ratio. Intention-to-treat analyses included 255 subjects. To be randomized, subjects were required to have moderate to severe BED, defined as at least three BE days per week for the 2 weeks before the baseline visit and verified with self-report take-home diaries and clinical interview. Exclusion criteria included current BN or anorexia nervosa; a lifetime history of bipolar disorder, psychosis, or ADHD; significant clinical depression; use of a psychostimulant within 6 months of screening; a recent history of suspected substance abuse; or a lifetime history of psychostimulant abuse. Psychological or weight loss interventions initiated within 3 months of screening, and a history of diabetes or cardiovascular disease that might increase vulnerability to the sympathomimetic effects of stimulants, were also exclusionary. Subjects with mild, well-controlled hypertension on a single antihypertensive agent were allowed in the study. The following medications were exclusionary: hypnotics, anxiolytics, antipsychotics, antidepressants, NE reuptake inhibitors, mood stabilizers, herbal preparations, and agents with weight-changing properties (e.g., orlistat, topiramate, zonisamide, and antihistamines). After randomization, all treatment groups were initiated at the 30 mg/day dosage. Patients randomized to 50 or 70 mg/day were force-titrated weekly in increments of 20 mg/day to their assigned dosage.
The 3-week forced-dose titration period was followed by an 8-week dose-maintenance period during which dose reductions were not permitted. The primary efficacy measure was the number of BE days per week, where a BE day was defined as a day on which at least one BE episode occurred. BE episodes were recorded by subjects in a self-report diary and were confirmed during clinical interviews with trained clinicians, as illustrated in the sketch below. A hierarchical testing procedure in descending order of LDX dosage was used for pairwise testing between LDX and placebo on the primary end-point measures, because it was hypothesized that the higher LDX dosages were more likely to be efficacious than the lower dosages. Secondary efficacy measures included the number of BE episodes per week, 1-week BE episode response status, and 4-week cessation of BE (defined as no BE episodes in the last 4 weeks of double-blind treatment). It was concluded that LDX decreased global BE severity and the obsessive-compulsive and impulsive features of BED, in addition to BE days [37].

Phase III studies (NCT01718483 and NCT01718509)
The Phase III trials were two 12-week, randomized, placebo-controlled, parallel-group, dose-optimization, multicenter studies (NCT01718483, referred to as study 1 hereafter, and NCT01718509, referred to as study 2 hereafter) [38]. The two studies used identical designs and methods and were performed across 93 unique sites in the USA, Sweden, Germany, and Spain. A total of 773 subjects were enrolled across the placebo (N=386) and LDX (N=387) treatment groups. In study 1, 187 subjects were randomized to placebo and 192 to LDX; in study 2, 185 subjects were randomized to placebo and 181 to LDX. The inclusion criteria were similar to those in the Phase II study, with the following exceptions: at both screening and baseline, eligible participants had to have a BMI ≥18 and ≤45 kg/m², and BED severity was confirmed not only by a binge day frequency of 3 or more binge days/week for the 2 weeks between screening and baseline, but also by a CGI-S score of ≥4 (indicating at least moderate severity of illness) at the screening and baseline visits. Exclusion criteria that differed from the Phase II study included use of psychostimulants for fasting, dieting, or BED ≤6 months before screening, and a resting average sitting systolic blood pressure >139 mmHg or average diastolic blood pressure >89 mmHg at the screening or baseline visits. Eligible subjects were randomized 1:1 to LDX 30 mg/day or placebo. The dose of study drug was titrated during the dose-optimization phase (weeks 1-4): 30 mg/day was increased to 50 mg/day after a 7-day period, and after 7-14 days the dose was titrated up to 70 mg/day based on clinical need and tolerability. A single down-titration from 70 to 50 mg was allowed during the dose-optimization phase. After the 4th week of treatment, subjects continued on their established dose for the duration of the 8-week dose-maintenance period. As in the Phase II study, the primary efficacy measure was the number of binge days per week, obtained from the participants' BE diaries and confirmed by clinician interview. Key secondary measures included CGI-I response at week 12/early termination (ET), 4-week binge cessation at week 12/ET, YBOCS-BE score, and change from baseline to week 12/ET in body weight and fasting triglyceride levels. Hierarchical testing procedures were used, with statistical assessments made in a prespecified order. All secondary measures showed significant improvement.
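As referenced above, the primary endpoint is a simple aggregation of diary data. The sketch below shows one way to derive weekly binge-day counts from episode-level diary records; the column names (`subject`, `date`) are hypothetical placeholders standing in for the actual case report fields, and the toy data are illustrative only.

```python
import pandas as pd

# Hypothetical diary: one row per confirmed BE episode
diary = pd.DataFrame({
    "subject": [101, 101, 101, 102, 102],
    "date": pd.to_datetime(
        ["2014-03-03", "2014-03-03", "2014-03-05", "2014-03-04", "2014-03-11"]
    ),
})

# A BE day is any calendar day with >= 1 episode;
# count distinct BE days per subject-week
be_days = (
    diary.drop_duplicates(["subject", "date"])           # collapse episodes to BE days
         .assign(week=lambda d: d["date"].dt.to_period("W"))
         .groupby(["subject", "week"])["date"].nunique()  # BE days per week
         .rename("be_days_per_week")
)
print(be_days)
```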
Four-week BE cessation at study end was achieved by 38% of LDX-treated subjects (40% in study 1 and 36.2% in study 2), compared with 13% in the placebo group (14.1% in study 1 and 13.1% in study 2). CGI-I (P<0.001 for both studies) and the change in YBOCS-BE score at study end (P<0.001 for both studies) also showed a statistically significant treatment effect favoring LDX. Significant reductions in triglyceride levels were observed (P<0.001, effect size 1.03 for study 1, and P=0.002, effect size 1.11 for study 2). The percent change in weight from baseline in study 1 was -6.25% for the LDX group vs. +0.11% for the placebo group (P<0.001, effect size 1.64). The percent change in weight from baseline in study 2 was -5.57% for the LDX group vs. -0.15% for the placebo group (P<0.001, effect size 1.22). Exploratory end-points included the assessment of disability with the Sheehan Disability Scale [39] and of health-related quality of life (with the EuroQoL 5-dimension 5-level questionnaire) [40]. Post hoc analyses were performed to determine the relationships between LDX therapy and disability, BE days per week and disability, and BE episodes per week and disability. The results indicated that LDX therapy had a positive effect on Sheehan Disability Scale scores, and that reductions in BE days per week and BE episodes per week were associated with improvement in disability over 12 weeks [41]. LDX also had a positive effect on health-related quality of life, which was indirect and mediated in part by LDX effects on BE frequency, disability, and daily functioning [42].

Limitations of the Phase II and Phase III studies
In all three studies, participants were mainly women, white, and overweight or obese, and by design did not have any current psychiatric comorbidities or cardiovascular conditions. Generalizing the results to a more heterogeneous population of individuals with BED therefore warrants caution. Potential sex differences in LDX efficacy in BED patients were not specifically explored, and the percentage of male subjects across the studies was low. Additionally, the studies were relatively short in duration, which limits extrapolation to the long-term efficacy, tolerability, and safety of LDX in individuals with BED. Ongoing studies are addressing these issues.

Safety and tolerability
LDX is contraindicated in patients with known hypersensitivity to amphetamine products or other ingredients of LDX. Anaphylactic reactions, Stevens-Johnson syndrome, angioedema, and urticaria have been observed in post-marketing reports. LDX should not be administered along with MAO inhibitors or within 14 days of the last MAO inhibitor dose, as hypertensive crisis can occur. In the USA, LDX is categorized as a Schedule II medication by the Drug Enforcement Administration; Schedule II encompasses medications from various classes with high abuse potential. The LDX prescribing information contains a boxed warning about the risk of abuse and dependence, urging that the risk of abuse be assessed prior to prescribing and that patients be monitored for signs of abuse and dependence while on therapy. Per the LDX prescribing information, the most frequent adverse reactions leading to LDX discontinuation at a rate at least twice that of placebo in adults with ADHD were insomnia (2%), tachycardia (1%), irritability (1%), hypertension (1%), headache (1%), anxiety (1%), and dyspnea (1%).
Adverse reactions reported by 3% or more of adult patients with ADHD taking LDX, and at at least twice the incidence in patients taking placebo, included decreased appetite, insomnia, dry mouth, diarrhea, nausea, anxiety, anorexia, feeling jittery, agitation, increased blood pressure, hyperhidrosis, restlessness, and decreased weight. LDX is pregnancy category C and should be prescribed only if the potential benefits justify the potential risk to the fetus.

In the Phase II BED study, the discontinuation rate due to AEs was 3.1%, and due to serious AEs 1.5%. AEs reported by >10% of subjects in the LDX treatment group, and at a rate greater than placebo, included dry mouth, decreased appetite, headache, and insomnia. One subject died with toxicology findings consistent with a methamphetamine overdose; this event was not considered related to the study drug. Mean (SD) changes in pulse and blood pressure from baseline to study end were observed in the LDX-treated group (an increase of 3.8 [11.75] bpm in pulse rate and 0.1 [9.85] mmHg in systolic blood pressure; diastolic blood pressure decreased with treatment, -0.7 [7.32] mmHg). No clinically meaningful trends were observed for clinical laboratory results or electrocardiography interval data.

In the Phase III BED studies, the discontinuation rate due to AEs was 6.3% in study 1 and 3.9% in study 2 for the LDX group, compared with 2.7% in study 1 and 2.2% in study 2 for the placebo group. AEs reported by >10% of subjects in the LDX treatment group, and at a rate greater than placebo, included dry mouth, headache, and insomnia. No deaths occurred in either study. Other serious AEs were rare and of similar incidence in the LDX and placebo groups. Minimal increases in pulse and blood pressure from baseline to study end were observed in the LDX-treated group (4.41-6.31 bpm in pulse rate, 0.2-1.45 mmHg in systolic blood pressure, and 1.06-1.83 mmHg in diastolic blood pressure). No clinically meaningful trends were observed for clinical laboratory results or electrocardiography interval data. The safety profile of LDX in adults with moderate to severe BED was consistent with data from the ADHD studies. No suicidality or misuse was reported across the three trials.

Administration and optimal dose
LDX is available as 10, 20, 30, 40, 50, 60, and 70 mg capsules. It is indicated for the treatment of ADHD and of moderate to severe BED, but not for weight loss; the prescribing information warns that the safety and effectiveness of LDX for the treatment of obesity have not been established. LDX is to be taken by mouth in the morning, with or without food. Afternoon doses are to be avoided because of the risk of insomnia. The recommended starting dose for BED treatment is 30 mg/day, to be titrated in increments of 20 mg at approximately weekly intervals to achieve the recommended target dose of 50 to 70 mg/day. The maximum dose is 70 mg/day. An adequate trial is 11 to 12 weeks, or 50-70 mg for 8 weeks. The medication should be discontinued if there is no improvement.

Patient-focused perspectives
BED is an under-recognized and undertreated condition. Indeed, recent data from an international survey indicated that less than 10% of respondents with BED received treatment for their eating disorder within the last year [2]. Patients rarely disclose BE symptoms spontaneously because of embarrassment or shame.
BE behavior is often overlooked, and treatment commonly focuses on obesity and its complications as the presenting problem. In routine clinical practice, the administration of a brief self-report measure such as the SCOFF or the Eating Attitudes Test might assist the diagnostic process if BED is suspected [43]. Shire Inc. has developed a validated self-report instrument, the Binge Eating Disorder Screener-7 (BEDS-7), which consists of seven "yes" or "no" questions and is available free of charge (https://www.bingeeatingdisorder.com/hcp/content/media/BingeEatingDisorder_Screener.pdf) on the BED informational portal. The BED informational portal supported by Shire Inc. (https://www.bingeeatingdisorder.com/) also provides current information on the clinical characteristics and functional consequences of BED, along with expert videos discussing the illness and helpful links for further self-education. Additional resources to assist in patient screening, diagnostics, psychoeducation, and treatment can be found on the Binge Eating Disorder Association (bedaonline.com), Alliance for Eating Disorders Awareness (www.allianceforeatingdisorders.com), and National Eating Disorder Association (www.nationaleatingdisorders.org) websites.

Patients can be offered self-help tools or psychotherapy as the first line of treatment, especially if BED symptomatology appears to be mild. Numerous applications (apps) for mobile devices have been developed in recent years as self-help tools or to enhance treatment in eating disorders in general [44] and in BED in particular [45]. Most of the currently available eating disorder-focused apps provide means of regular self-assessment and real-time monitoring of eating habits. As the apps' functionalities grow, some may in the future deliver entire personalized BED-focused interventions. In moderate and severe BED cases, pharmacotherapy can be considered as monotherapy or as an adjunct to psychoeducation and psychological interventions. Importantly, patient preference needs to be considered when making treatment decisions.

LDX is the first medication in the world to receive regulatory approval for the treatment of BED. It is specifically approved for moderate and severe BED in adults at 50-70 mg/day. LDX dosed at 50 or 70 mg significantly reduced BE symptoms as measured by weekly binge day frequency, improved the obsessive-compulsive features associated with BE behaviors, and had a positive effect on disability. LDX is not approved as a weight loss medication, and a thorough assessment of the binging behavior through clinical interview and/or review of food logs and self-report measures of eating pathology is paramount in making the correct diagnosis and further guiding treatment. In adult BED patients, LDX was generally well tolerated. No studies comparing LDX with other psychological and pharmacological treatments in BED have as yet been conducted; therefore, no comments can be made about the relative efficacy and tolerability of LDX compared with self-help treatment, cognitive behavioral psychotherapy, interpersonal therapy, antidepressants, antiepileptics, or any obesity drugs. Data are also lacking regarding the efficacy and tolerability of LDX in adults with mild BED, in youth or the elderly, and in BED patients with certain comorbid conditions such as mood, anxiety, or substance use disorders; clinically significant or unstable hypertension; cardiovascular disease; or diabetes.
It might not be appropriate in adults with BED who also have bipolar disorder, as it might exacerbate manic symptoms. However, in an 8-week placebo-controlled study of adjunctive LDX in bipolar depression, LDX was associated with significant improvement in BE, measured with the BES, and no adverse psychiatric effects were observed.15 LDX should not be prescribed in BED if drug or alcohol abuse is suspected, because of its abuse potential, nor in uncontrolled hypertension or cardiovascular disease. Long-term studies are essential to extend the results of the Phase II and Phase III studies. It would be of interest to examine LDX efficacy in mild BED and in adolescents as well as in older adults. Validation of BED and approval of the first medication for its treatment mark the beginning of a new era in the management of eating disorders in general, and in the pharmacotherapy of BED in particular.

Disclosure
Susan L McElroy is a consultant to or member of the scientific advisory boards of Alkermes, Bracket, Corcept, F. Hoffmann-La Roche Ltd., MedAvante, Myriad, Naurex, Novo Nordisk, Shire, Sunovion, and Teva. She is a principal or co-investigator on studies sponsored by the Agency for Healthcare Research & Quality (AHRQ), Alkermes, AstraZeneca, Cephalon, Eli Lilly and Company, Forest, Marriott Foundation, National Institute of Mental Health, Naurex, Orexigen Therapeutics, Inc., Pfizer, Shire, Takeda Pharmaceutical Company Ltd., and Transcept Pharmaceuticals. She is also an inventor on United States Patent No. 6,323,236 B2, Use of Sulfamate Derivatives for Treating Impulse Control Disorders, and along with the patent's assignee, University of Cincinnati, Cincinnati, Ohio, has received payments from Johnson & Johnson, which has exclusive rights under the patent. The other authors have no conflicts of interest to disclose.
Cullin-5 Adaptor SPSB1 Controls NF-κB Activation Downstream of Multiple Signaling Pathways

The initiation of innate immune responses against pathogens relies on the activation of pattern-recognition receptors (PRRs) and corresponding intracellular signaling cascades. To avoid inappropriate or excessive activation of PRRs, these responses are tightly controlled. Cullin-RING E3 ubiquitin ligases (CRLs) have emerged as critical regulators of many cellular functions including innate immune activation and inflammation. CRLs form multiprotein complexes in which a Cullin protein acts as a scaffold and recruits specific adaptor proteins, which in turn recognize specific substrate proteins for ubiquitylation, hence providing selectivity. CRLs are divided into 5 main groups, each of which uses a specific group of adaptor proteins. Here, we systematically depleted all predicted substrate adaptors for the CRL5 family (the so-called SOCS-box proteins) and assessed the impact on the activation of the inflammatory transcription factor NF-κB. Depletion of SPSB1 resulted in a significant increase in NF-κB activation, indicating the importance of SPSB1 as an NF-κB negative regulator. In agreement, overexpression of SPSB1 suppressed NF-κB activity in a potent, dose-dependent manner in response to various agonists. Inhibition by SPSB1 was specific to NF-κB, because other transcription factors related to innate immunity and interferon (IFN) responses such as IRF-3, AP-1, and STATs remained unaffected by SPSB1. SPSB1 suppressed NF-κB activation induced via multiple pathways including Toll-like receptors and RNA and DNA sensing adaptors, and required the presence of its SOCS-box domain. To provide mechanistic insight, we examined phosphorylation and degradation of the inhibitor of κB (IκBα) and p65 translocation into the nucleus. Both remained unaffected by SPSB1, indicating that SPSB1 exerts its inhibitory activity downstream, or at the level, of the NF-κB heterodimer. In agreement with this, SPSB1 was found to co-precipitate with p65 after over-expression and at endogenous levels. Additionally, A549 cells stably expressing SPSB1 presented lower cytokine levels, including type I IFN, in response to cytokine stimulation and virus infection. Taken together, our results reveal novel regulatory mechanisms in innate immune signaling and identify the prominent role of SPSB1 in limiting NF-κB activation. Our work thus provides insights into inflammation and inflammatory diseases and new opportunities for the therapeutic targeting of NF-κB transcriptional activity.

INTRODUCTION
Few transcription factors have such crucial roles in the induction of innate immune and inflammatory responses as the NF-κB family (1). NF-κB is central in the pathogenesis of multiple inflammatory disorders, including those in the airway, by inducing the production of pro-inflammatory cytokines such as interleukins (IL) and tumor necrosis factors (TNF). In addition, NF-κB contributes to the expression of type I interferon (IFN) in association with the IFN regulatory factors (IRF)-3/7 and the activator protein (AP)-1. Once secreted, IFN triggers the production of hundreds of IFN-stimulated genes (ISG) via the Janus-associated kinase (JAK)-signal transducers and activators of transcription (STAT) signaling pathway, which confer an antiviral state to surrounding cells. NF-κB thus also impacts on the host antiviral response.
In the classical NF-κB pathway, the NF-κB p65/p50 heterodimer is held inactive in the cytosol, bound to the inhibitor of κB (IκB). Degradation of IκBα and subsequent release of NF-κB can be induced by multiple cytokine receptors and pattern-recognition receptors (PRRs), including Toll-like receptors (TLRs) that recognize viral and bacterial nucleic acids and lipids. Engagement of TNF-α with its receptor on the cell surface induces a signaling cascade that involves the TNFR-associated factor 2 (TRAF-2), whereas signaling downstream of TLRs and IL-1R employs TRAF-6. Activation of these signaling pathways induces the formation of polyubiquitin chains that act as a scaffold for the recruitment of the transforming growth factor (TGF)-β-activated kinase (TAK)-1 complex, via its TAK-1-binding proteins (TAB)-2/3, and the IκB kinase (IKK) complex (2,3). Once both complexes are recruited, TAK-1 catalyzes the phosphorylation and activation of the IKK catalytic subunits (IKK-α and IKK-β), which phosphorylate IκB (4). Phosphorylated IκBα is recognized by a Cullin-RING ubiquitin ligase (CRL) complex containing Cullin-1 and the F-box protein β-transducin repeat-containing protein (β-TrCP), also known as FBXW11 (5,6), that mediates its ubiquitylation and proteasome-dependent degradation (7). The essential role of the β-TrCP-containing CRL1 complex in NF-κB activation is highlighted by the long list of viruses that antagonize its function, including poxviruses (8,9), rotaviruses (10), and the human immunodeficiency virus (11). Nuclear p65 associates with transcriptional activators such as p300/CBP and the general transcription machinery and drives the expression of genes containing NF-κB responsive elements. A plethora of post-translational modifications (PTM) affecting p65 (e.g., phosphorylation, acetylation, methylation, and ubiquitylation) modulate the potency of this response and its selectivity toward specific NF-κB-dependent genes (12). These modifications are critical in fine-tuning the transcriptional activity of NF-κB.

Ubiquitylation of a protein involves the covalent attachment of ubiquitin moieties that can themselves be ubiquitylated to form chains (13). CRLs are the largest family of ubiquitin ligases and are characterized by the presence of a Cullin (Cul) protein acting as a scaffold (14). There are 5 major Cul, and hence CRL (CRL1-5), families, all of which share similar architecture but recruit and target different subsets of substrates. CRL substrate recognition is directed by specific substrate receptor subunits. CRL5 complexes employ substrate receptor proteins containing the suppressor of cytokine signaling (SOCS) motif (15,16). The SOCS box in these adaptor proteins mediates interaction with the Elongin B/C proteins that associate with Cul-5, effectively forming a CRL5 complex also known as ECS (Elongin B/C-Cul5-SOCS-box protein). Analogous to F-box proteins and CRL1 complexes, the SOCS-box domain appears in combination with other protein-protein interaction domains, including Ankyrin, SPRY, and Src homology domains (17,18). These domains are responsible for the recognition of substrates via unique signatures in the primary amino acid sequence or specific PTMs, but in many cases these remain to be elucidated. The most studied CRL5 complex is that formed by the so-called SOCS proteins (SOCS1-7), which target JAKs and act as potent inhibitors of JAK-STAT signaling (19).
SOCS proteins are therefore potent negative regulators of cytokine signaling, particularly IFN, suggesting that other CRL5 adaptors may have evolved similar functions in suppressing innate immune activation. Here, we have performed an RNAi-based screen to assess the role of CRL5 adaptors in NF-κB signaling. Our work has identified several molecules positively and negatively regulating the pathway, in particular the SPRY- and SOCS-box-containing protein (SPSB)-1. Depletion of SPSB1 resulted in enhanced NF-κB activation and cytokine expression, whereas its overexpression suppressed NF-κB responses triggered by cytokines as well as viruses. Our results indicate that SPSB1 associates with p65 but does not block its translocation, which suggests that it targets released p65. SPSB1 is known to target the inducible nitric oxide synthase (iNOS) (20,21) and has been linked with several pathways related to cancer, but no direct role for SPSB1 in controlling NF-κB activation has been reported. Our data therefore define another function for SPSB1 in innate immunity and inflammation and reveal novel regulatory mechanisms modulating NF-κB activation.

Identification of SPSB1 as a Negative Regulator of NF-κB Signaling
In order to identify novel members of CRLs that act as regulators of the NF-κB pathway, we set up an assay to systematically deplete CRL genes in A549 cells stably expressing the firefly luciferase gene under the control of a synthetic NF-κB promoter (22). These cells were transfected with a commercial library of siRNA designed to target all predicted human SOCS-box-containing proteins (Table S1). Each gene was targeted by a pool of four individual small interfering (si)RNA duplexes. After 72 h, the cells were treated with IL-1β, and luciferase activity was measured 6 h later. In a similar experiment, the viability of the cells transfected with the siRNA pools was measured. The screen was performed twice and included a non-targeting control (NTC) siRNA pool and a siRNA pool targeting β-TrCP, the CRL1 adaptor required for NF-κB activation. Each sample was tested in triplicate in each screen, and the data were normalized to the NTC (Figure 1A). As expected, knock-down of β-TrCP resulted in a decrease in NF-κB activation. Amongst the 38 siRNA pools tested, only SPSB1 depletion consistently resulted in a significant increase (∼240%) in NF-κB activity without major changes in cell viability (arbitrary cutoff of >75%), suggesting that this protein has a prominent role in controlling NF-κB signaling. To validate these initial data, we deconvolved the pool targeting SPSB1 and transfected the 4 different siRNAs separately to test their effect on NF-κB activation under the same conditions used before, including NTC and β-TrCP siRNA controls. Two siRNAs (#2 and #4) replicated the data observed for the pool (Figure 1B), representing an H-score of 0.5, a value that supported the results from the first screen (23).
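A minimal sketch of the hit-calling logic just described is given below: reporter activity is expressed as a percentage of the NTC, and candidate regulators are kept only if viability stays above the 75% cutoff. The function names and the example numbers are our own illustration, not the study's raw data or analysis code.

```python
# Illustrative hit-calling for the siRNA screen described above.
import statistics

def percent_of_ntc(sample_rlu: list[float], ntc_rlu: list[float]) -> float:
    """NF-kB reporter activity as a percentage of the NTC control."""
    return 100.0 * statistics.mean(sample_rlu) / statistics.mean(ntc_rlu)

def call_hits(activity_pct: dict[str, float], viability_pct: dict[str, float],
              up_cutoff: float = 200.0, min_viability: float = 75.0) -> list[str]:
    """Genes whose depletion increases NF-kB activity in viable cells."""
    return [g for g, a in activity_pct.items()
            if a >= up_cutoff and viability_pct.get(g, 0.0) >= min_viability]

# Hypothetical triplicate luciferase readings (relative light units):
activity = {"SPSB1": percent_of_ntc([2.3e5, 2.5e5, 2.4e5],
                                    [1.0e5, 1.0e5, 1.0e5])}
print(call_hits(activity, {"SPSB1": 92.0}))  # ['SPSB1'] at ~240% of NTC
```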
We then performed stable depletion of SPSB1 via short hairpin (sh)RNA transduction. Depletion of SPSB1 in the shSPSB1 cells compared with the NTC shCtl cells was confirmed by immunoblotting (Figure 1C). These cell lines were then used to further confirm the impact of SPSB1 on NF-κB signaling. The cells were treated with IL-1β for 6 h, and the mRNA levels of the inflammatory genes iNOS and IL-6 were examined by quantitative PCR. Treatment with IL-1β resulted in 63- and 190-fold increases in iNOS and IL-6 expression, respectively, in the control A549 cell line. In the absence of SPSB1, the same treatment induced a significantly higher expression of both iNOS and IL-6 (150- and 660-fold, respectively) (Figures 1D,E). Taken together, these data identified SPSB1 as a novel negative regulator of the NF-κB pathway, with its depletion resulting in higher expression of pro-inflammatory NF-κB-dependent genes.

SPSB1 Inhibits NF-κB, but Not IRF-3, AP-1, or STAT Activation
To study the function of SPSB1, its sequence was cloned into a mammalian expression vector containing 3 copies of the FLAG epitope at the N terminus. SPSB1 was then tested for its ability to inhibit NF-κB activation. HEK293T cells were transfected with a reporter expressing firefly luciferase under the control of the canonical NF-κB promoter, a control reporter expressing renilla luciferase, and either SPSB1 or the corresponding empty vector (EV). After 24 h, the NF-κB pathway was stimulated with IL-1β or TNF-α for a further 6 h. The ratio of firefly to renilla luciferase activity was calculated and plotted as a fold increase over the non-stimulated, EV-transfected condition. The same cell lysates were also examined by immunoblotting to determine SPSB1 expression levels. Stimulation with IL-1β or TNF-α triggered >20- and >60-fold increases, respectively, in reporter activity in EV-transfected samples. Expression of SPSB1 reduced the activation induced by IL-1β (Figure 2A) and TNF-α (Figure 2B) in a dose-dependent and statistically significant manner. To assess the specificity of SPSB1 in controlling innate immune responses, the activation of the IRF, mitogen-activated protein kinase (MAPK)/AP-1, and JAK-STAT signaling pathways was examined using reporter gene assays specific for each pathway. To determine the impact of SPSB1 on the IRF-3 signaling pathway, cells were transfected and subsequently infected with Sendai virus (SeV), a strong inducer of IRF-3 (24). The infection induced a 13-fold activation in the control cells. This was blocked by the vaccinia virus (VACV) protein C6, a known inhibitor of IRF-3 signaling (25), but remained unaffected by SPSB1 (Figure 2C). Stimulation of the MAPK pathway was achieved by incubation with phorbol 12-myristate 13-acetate (PMA) for 24 h, which induced a 4-fold activation in the EV-transfected cells. This response was downregulated by the VACV protein A49 (26), but not by SPSB1 (Figure 2D). Finally, to address whether SPSB1 was able to impact signaling triggered by IFN via the JAK/STAT pathway, cells were transfected with a reporter expressing luciferase under the control of the IFN-stimulated response element (ISRE) and stimulated with IFN-β. This treatment resulted in a 15-fold induction in both EV- and SPSB1-transfected cells, indicating that SPSB1 did not affect JAK/STAT signaling (Figure 2E). The above demonstrates that SPSB1 is not a general transcriptional modulator and specifically regulates NF-κB responses.

SPSB1 Inhibits NF-κB Activation Downstream of Multiple Effectors and Requires Its SOCS Domain
We then aimed to gain further insights into SPSB1 regulation of NF-κB signaling using different approaches. First, we performed luminescence-based mammalian interactome mapping (LUMIER) assays (27-29) to examine possible interactions between SPSB1 and a number of molecules operating at multiple levels of the IL-1R signaling cascade (Figure 3A). SPSB1 was initially found to self-associate (Figure S1).
This property was used to ensure that the fusion of SPSB1 with renilla luciferase (Rluc) did not affect its expression or folding. FLAG-SPSB1 was then co-transfected with Rluc fusions of TRAF-6, TAK-1, TAB-2, TAB-3, IKK-α, IKK-β, IKK-γ, and β-TrCP, as well as SPSB1 or an Rluc-only construct. Rluc activity was measured before and after immunoprecipitation of FLAG-SPSB1, and Rluc ratios were calculated. Using this assay, none of the tested NF-κB components interacted with SPSB1 (Figure S2). The second approach relied on reporter assays in which NF-κB was triggered by over-expression of signaling molecules. SPSB1 was able to suppress NF-κB activation deriving from the adaptors TRAF-6 (Figure 3B) and TRAF-2 (Figure 3C); the RNA sensor RIG-I (Figure 3D), which activates NF-κB at the level of TRAF-6; the kinase IKK-β (Figure 3E); and the DNA sensors cGAS and STING (Figure 3F), which converge on the NF-κB pathway at the level of the IKK complex. Taken together, these data indicated that SPSB1 acted downstream of these molecules. In agreement with this observation, SPSB1 inhibited NF-κB activation triggered by p65 over-expression (Figure 3G). We also created a panel of SPSB1 mutants (Figure 4A), including one lacking the entire C-terminal SOCS-box domain (ΔSOCS), one lacking the first 85 amino acids (Δ85), and one containing the point mutation R77A. The latter two have been described to disrupt the interaction between SPSB1 and its targets (30,31). All of these constructs expressed at similar levels (Figure 4B) and suppressed p65-induced NF-κB activation to a similar extent, with the exception of SPSB1-ΔSOCS, which was clearly impaired (Figure 4C). This indicated that the SOCS-box domain is needed for SPSB1 inhibitory activity.

SPSB1 Does Not Interfere With IκBα Phosphorylation or Degradation
If SPSB1 acts at the level of p65, IκBα should still be phosphorylated and degraded normally in the presence of SPSB1. To verify this, we first generated A549 stable cell lines expressing SPSB1 or GFP as a control protein using lentiviruses. Immunoblotting against FLAG revealed the successful transduction and expression of SPSB1 (Figure 5A). To validate that SPSB1 expression was sufficient and functional in these cells, the expression of ICAM-1 (Figure 5B) and iNOS (Figure 5C), both of which contain NF-κB sites in their promoters, was assessed by quantitative PCR in response to IL-1β stimulation. The expression of both genes was enhanced upon IL-1β treatment, but to a lower extent in cells expressing SPSB1. We then assessed the kinetics of phosphorylation and degradation of IκBα in these cells. Exposure to IL-1β induced significant p-IκBα levels in as little as 5 min, concomitant with subsequent degradation of IκBα (Figure 6). The presence of SPSB1 had no effect on either the intensity or the kinetics of IκBα phosphorylation, nor on its subsequent degradation, as confirmed by densitometric analysis of the images (Figure S3). We also assessed the phosphorylation of Ser536 in p65, a cytoplasmic event related to p65 activation (32). No differences in p-p65 levels were observed between the SPSB1 and control cell lines (Figure 6), as also confirmed by densitometry (Figure S3). In addition, the total levels of p65 remained similar in the presence of SPSB1, suggesting that this protein does not affect p65 turnover.
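The quantification above is described only as "densitometric analysis"; the sketch below is a generic illustration (our construction, with hypothetical numbers) of one way such blot time-course intensities can be normalized to a loading control and compared between the two cell lines.

```python
# Generic densitometry normalization sketch; all intensities are invented.
timepoints_min = [0, 5, 15, 30]

def normalize(bands: list[float], loading: list[float]) -> list[float]:
    """Express each band relative to its loading control, then to t=0."""
    rel = [b / l for b, l in zip(bands, loading)]
    return [r / rel[0] for r in rel]

ikba_ctrl  = normalize([100, 80, 30, 12], [95, 100, 98, 97])
ikba_spsb1 = normalize([98, 82, 28, 13], [100, 99, 96, 100])
for t, c, s in zip(timepoints_min, ikba_ctrl, ikba_spsb1):
    print(f"t={t:>2} min  control={c:.2f}  SPSB1={s:.2f}")  # similar decay
```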
SPSB1 Does Not Interfere With p65 Translocation
When IκBα is phosphorylated and degraded, the NF-κB heterodimer is free to translocate into the nucleus and induce the expression of NF-κB-dependent genes. We therefore assessed whether p65 would translocate in the presence of SPSB1. Cells were challenged with IL-1β and, after 30 min, stained for p65 and SPSB1 (FLAG). In control unstimulated cells, p65 located to the cytosol and moved to the nucleus upon stimulation (Figure 7). In SPSB1-expressing cells, p65 translocated to the nucleus to the same extent upon IL-1β exposure, and no differences were observed. This indicated that SPSB1 was not able to restrict p65 translocation. Interestingly, SPSB1 showed both nuclear and cytosolic distribution in unstimulated cells, but a notable nuclear localization after stimulation, indicating that either SPSB1, or its target, alters its cellular distribution in response to NF-κB signaling.

SPSB1 Associates With p65
The fact that SPSB1 shadows p65 nuclear translocation (Figure 7) and inhibits p65-induced NF-κB activation (Figure 3G) suggested an association between SPSB1 and p65. We thus immunoprecipitated HA-tagged p65 in the presence of SPSB1 or GFP as a control and observed a specific co-precipitation between SPSB1 and p65 (Figure 8A). To confirm this interaction at endogenous levels, we immunoprecipitated endogenous p65 from A549 cells treated with IL-1β for 30 min or left untreated. Despite some nonspecific binding in the isotype control samples, we observed a significant enrichment of SPSB1 in the p65 pull-down from cells previously treated with IL-1β (Figure 8B), indicating that activation of the pathway enhances the interaction between SPSB1 and p65. This is in agreement with the prominent nuclear localization of SPSB1 after IL-1β treatment (Figure 7) and its capacity to inhibit NF-κB activation after ectopic expression of p65 (Figure 3G). Given the inability of SPSB1-ΔSOCS to suppress NF-κB activation, we also assessed the potential interaction between this mutant and p65. Full-length SPSB1, SPSB1-ΔSOCS, and cGAS as a control were expressed in HEK293T cells and subjected to affinity purification and immunoblotting (Figure 8C). Whilst no binding was observed for cGAS, both full-length SPSB1 and ΔSOCS interacted efficiently with p65, indicating that the SOCS-box domain is dispensable for binding to p65. Given that the SOCS-box domain is known to mediate the interaction with Cul-5, these data suggest that although SPSB1-ΔSOCS interacts with p65, this is not sufficient to inhibit p65 transcriptional activity, which requires engagement with CRL5 complexes, presumably to allow ubiquitylation. Collectively, these data revealed that SPSB1 is a potent suppressor of NF-κB responses that interacts with p65 after p65 activation, in a manner that does not affect its stability or translocation.

SPSB1 Inhibits NF-κB Activation Induced by RSV Infection
Viruses are common inducers of NF-κB signaling. Respiratory syncytial virus (RSV) is a common respiratory pathogen and the main cause of airway inflammation in infants, and it is known to trigger NF-κB and type I IFN responses in the airway (33). To address the role of SPSB1 in controlling virus-induced responses, we infected our A549 cell lines with 2 PFU/cell of RSV and performed qPCR analysis of a number of cytokines. RSV infection triggered measurable levels of IFN-β in these cells, and this was reduced by SPSB1 (Figure 9A).
Interestingly, we also observed a significant reduction in the levels of IFN-dependent genes such as ISG54 and OAS1 (Figures 9B,C). Given the inability of SPSB1 to directly downregulate the JAK-STAT signaling pathway, these results indicate that SPSB1's effective suppression of IFN-β production impacted the expression of these antiviral genes.

DISCUSSION
Here we have explored the role of CRL5 complexes in NF-κB activation in airway epithelial cells using an unbiased screen for SOCS-box proteins. This screen identified SPSB1 as a novel regulator of the pathway: SPSB1 depletion resulted in enhanced NF-κB-dependent transcriptional activity (Figure 1), and this effect was reversed by SPSB1 overexpression (Figures 2, 3). In addition, our results indicate that SPSB1 controls NF-κB activation when cells are exposed to inflammatory cytokines (Figure 5) as well as viruses (Figure 9). Therefore, our work highlights SPSB1 as a novel and important participant in the signaling network that governs the production of NF-κB-dependent cytokines.

SPSB1 is the first member of the SPSB family, a group of 4 proteins (SPSB1-4) characterized by the presence of a SPRY domain and a C-terminal SOCS-box domain that engages with the CRL E3 ubiquitin ligase complex (34). SPSB1, SPSB2, and SPSB4 are known to target the inducible nitric oxide synthase (iNOS) via the SPRY domain and trigger its proteasomal degradation (20,21,35). In addition, SPSB1 regulates multiple cancer-associated pathways via interactions with c-Met (36,37), the apoptosis-related protein Par-4 (30,38), and the TGF-β receptor (39,40). The specific motif that SPSB1 recognizes on its targets was suggested to be (D/E)-(I/L)-N-N-N. However, the degron recognized by SPSB1 in the TGF-β receptor has been mapped to N-I-N-H-N-T (39). The difference in sequence between the proposed motifs suggests the existence of more SPSB1 degrons than previously inventoried. Interestingly, SPSB1 has recently been shown to direct non-degradative ubiquitylation in the nucleus to regulate alternative splicing (41). Our results reveal that SPSB1 restricts the extent of NF-κB activation induced by cytokines and viruses downstream of IκBα degradation and p65 translocation, and that it associates with p65. This indicates that SPSB1 targets p65 in the nucleus or in the cytosol in a manner that does not affect its ability to translocate. SPSB1 targeting affects the transactivation potential of p65, but not its stability. An interesting possibility is that SPSB1 mediates non-degradative ubiquitylation of p65 itself and affects its transcriptional activity, perhaps by competing with other PTMs known to activate p65, such as acetylation (42). Our finding that an SPSB1 mutant lacking the CRL5-interacting SOCS-box domain lost inhibitory capacity supports this notion, and the ability of SPSB1 to catalyze non-degradative K29 ubiquitin chains has already been described (41). Interestingly, this mutant retained the ability to interact with p65, indicating that binding to p65 is not sufficient to suppress NF-κB activation, and that optimal inhibition requires engagement with Cul-5, which complexes with the E2 enzyme required for ubiquitylation. Thus, SPSB1 would limit signal-induced p65 activation without triggering ubiquitin-dependent proteolysis.
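The two candidate degron motifs quoted above lend themselves to a simple pattern search. The toy sketch below (our own illustration; the example substrate sequence is invented) expresses them as regular expressions over a one-letter amino acid sequence; real target mapping of course requires the experiments cited in the text.

```python
# Toy scan for the two SPSB1 degron motifs discussed above.
import re

DEGRONS = {
    "(D/E)-(I/L)-N-N-N": re.compile(r"[DE][IL]NNN"),
    "N-I-N-H-N-T (TGF-beta receptor)": re.compile(r"NINHNT"),
}

def scan_degrons(sequence: str) -> dict[str, list[int]]:
    """Return 0-based start positions of each candidate SPSB1 degron."""
    return {name: [m.start() for m in rx.finditer(sequence)]
            for name, rx in DEGRONS.items()}

print(scan_degrons("MKTAYDINNNGVLSNINHNTKL"))
# {'(D/E)-(I/L)-N-N-N': [5], 'N-I-N-H-N-T (TGF-beta receptor)': [14]}
```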
An SPSB1 substrate of particular importance in innate immunity and inflammation is iNOS (also known as NOS2) and its catalytic product NO. iNOS is an inducible gene that is expressed at low levels in human respiratory epithelia and is upregulated in disease. Activation of NF-κB and STAT1 in response to TLR agonists and cytokines is largely responsible for the transcriptional induction of iNOS expression (43). Interestingly, SPSB1 expression is also enhanced by NF-κB and type I IFN (20). This indicates that SPSB1 expression is tightly regulated and, according to the data presented here, represents a negative feedback loop on NF-κB. This also implies that SPSB1 has a dual role in controlling iNOS: (i) it limits iNOS expression by downregulating NF-κB activation, and (ii) it drives iNOS ubiquitylation and its proteasome-dependent destruction. SPSB1 is thus a unique molecule controlling inflammatory responses. We did not observe a role for other iNOS-modulating CRL5 complexes such as SPSB2 or SPSB4 in inhibiting NF-κB activation, although we cannot rule out that the RNAi depletion of SPSB2 and SPSB4 in our screen was not sufficiently efficient. It would therefore be interesting to assess the role of these molecules, as well as their paralogue FBXO45, in the regulation of NF-κB signaling.

Our screen also highlighted other molecules that might regulate NF-κB signaling. For instance, depletion of SOCS5 led to a substantial reduction in NF-κB activation. SOCS5 has been shown to regulate IL-4 (44) and epidermal growth factor receptor (EGFR) signaling (45,46). In addition, inhibition of EGFR/PI3K signaling by SOCS5 conferred protection against influenza infection (47). Our data suggest that SOCS5 is a critical factor required for NF-κB activation, since its depletion severely impaired NF-κB reporter activation (26% upon stimulation). This may indicate that SOCS5 is necessary to activate NF-κB during viral infection and mount a protective response. We also noticed that depletion of SOCS1, a known inhibitor of NF-κB and JAK/STAT signaling (48,49), did not result in significant changes in NF-κB activation in our screen. Further analysis of gene expression data revealed that SOCS1 is not expressed in A549 cells (50), which accounts for these results.

Excessive inflammation is central to a large number of pathologies. For instance, in the respiratory tract, obstructive lung diseases such as asthma or chronic obstructive pulmonary disease (COPD) are characterized by inflammatory gene expression and the production of inflammatory mediators that enhance the recruitment of inflammatory cells (51). NF-κB is an important player in these multifactorial diseases, as evidenced by the fact that the therapeutic efficacy of the main treatment for asthma, glucocorticoids, is thought to be largely caused by their ability to suppress NF-κB and AP-1 responses (52). In these diseases, NF-κB activation occurs largely in response to cytokines such as IL-1β and TNF-α, or to infection with viruses during exacerbations (53). Mechanisms that limit this excessive inflammation are crucial to maintain homeostasis. E3 ubiquitin ligases are potent post-translational regulators with the capacity to regulate inflammatory responses, as recently reported for the E3 ligase TRIM29 and its critical role in regulating NEMO stability in alveolar macrophages, and consequently the levels of IRF-3 and NF-κB activation (54). Here we present the novel finding that a member of the CRL5 family, SPSB1, downregulates the expression of inflammatory cytokines and other NF-κB-dependent genes in airway epithelial cells exposed to cytokines and viral infection.
SPSB1 may thus have regulatory functions in chronic inflammatory disorders of the respiratory tract, as well as in acute virus-induced exacerbations. Our work reveals a new connection between CRLs and innate immunity and may offer alternative strategies for the manipulation of NF-κB transcriptional activity in inflammatory pathologies.

RNAi Depletion Screens
siRNA sequences targeting CRL5 adaptors were purchased from Horizon Discovery and resuspended in nuclease-free water to a 1 µM final concentration. A549-κB-LUC cells were reverse-transfected in triplicate with 30 nM siRNA using Interferin-HTS (Polyplus) and incubated for 72 h. The cells were then stimulated with 1 ng/mL of IL-1β for 6 h, washed with ice-cold PBS, and lysed in Passive Lysis Buffer (PLB; Promega). Luciferase activity was measured in a Clariostar plate reader (BMG Biotech); data for each sample were normalized to its non-stimulated condition and plotted as mean ± SD relative to the NTC-transfected control. Data shown are representative of 2 independent screens showing similar results. Cells were also reverse-transfected in an identical manner to determine cell viability at 72 h post-transfection using CellTiter-Glo (Promega), following the manufacturer's recommendations.

To generate A549 cells overexpressing SPSB1, SPSB1 was PCR amplified using primers 5′-GAAGCGGCCGCGGGTCAGAAGGTCACTGAG-3′ (fwd) and 5′-GACTCTAGATCACTGGTAGAGGAGGTAGG-3′ (rev). The PCR product was subsequently ligated into a pcDNA4/TO expression vector (Invitrogen) previously modified to express genes in frame with 3 N-terminal copies of the FLAG epitope, an N-terminal copy of the V5 epitope, or an N-terminal tandem affinity purification tag containing 2 copies of the strep tag and 1 copy of the FLAG tag, as previously described (29,57). FLAG-SPSB1 was then subcloned into a lentivirus vector carrying puromycin resistance (a gift from Greg Towers). Lentiviral particles were produced in HEK293T cells as above, and virus supernatants were harvested at 48 and 72 h post-transfection. FLAG-SPSB1 was also used as a template to generate the deletion mutants ΔSOCS (encompassing amino acids 1-231) and Δ85 (encompassing amino acids 86-273) using primers amplifying the specified regions. Mutant R77A was generated by site-directed mutagenesis using KOD hot-start DNA polymerase (Millipore).

Quantitative PCR
RNA from confluent 6-well plates of A549 cells was purified using the Total RNA Purification Kit (Norgen Biotech). One µg of RNA was reverse transcribed into cDNA using Superscript III reverse transcriptase (Invitrogen). cDNA was diluted 1:5 in water and used as a template for real-time PCR using SYBR Green PCR master mix (Applied Biosystems) in a LightCycler 96 (Roche). Expression of each gene was normalized to an internal control (18S), and these values were then normalized to the shCtl or GFP control cells to yield a fold induction. Primers used for the detection of CXCL10 (58), IFN-β, and 18S (59) have been described. Primers used for iNOS detection were 5′-ACAAGCCTACCCCTCCAGAT (fwd) and 5′-TCCCGTCAGTTGGTAGGTTC (rev). Data shown are representative of at least 3 independent experiments showing similar results, each performed in triplicate and plotted as mean ± SD.
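The paper does not name the fold-induction method explicitly; the sketch below assumes the standard 2^-ΔΔCt (Livak) calculation, which is one common way the normalization just described (target vs 18S, then test cells vs shCtl/GFP control cells) is computed. The Ct values are hypothetical.

```python
# Assumed 2^-DDCt fold-induction calculation for the qPCR described above.
def delta_delta_ct(ct_target_test: float, ct_18s_test: float,
                   ct_target_ctrl: float, ct_18s_ctrl: float) -> float:
    """Fold induction of a target gene relative to control cells."""
    d_ct_test = ct_target_test - ct_18s_test   # normalize to 18S, test cells
    d_ct_ctrl = ct_target_ctrl - ct_18s_ctrl   # normalize to 18S, control cells
    return 2.0 ** -(d_ct_test - d_ct_ctrl)

# Hypothetical Ct values: a gene induced ~8-fold in shSPSB1 vs shCtl cells.
print(delta_delta_ct(ct_target_test=22.0, ct_18s_test=12.0,
                     ct_target_ctrl=25.0, ct_18s_ctrl=12.0))  # 8.0
```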
Reporter Gene Assays
HEK293T cells were seeded in 96-well plates and transfected with the indicated reporters and expression vectors using PEI, as described in the figure legends. The reporter plasmids have been described previously (8). After 24 h, the cells were stimulated by exposure to different agonists or by co-transfection with activating plasmids, as indicated in the figure legends. Plasmids for signaling molecules have been described (57), with the exception of untagged and HA-tagged p65, which were from Geoffrey Smith (University of Cambridge, United Kingdom). TAP-tagged VACV C6 (25) and HA-tagged VACV A49 (8) have been described. After stimulation, cells were washed with ice-cold PBS and lysed with PLB. Luciferase activity was measured in a Clariostar plate reader, and firefly/renilla ratios were calculated for each condition. Data were normalized to mock-infected samples or samples transfected with an empty vector and presented as fold increases. In all cases, data shown are representative of at least 3 independent experiments showing similar results, each performed in triplicate and plotted as mean ± SD.

LUMIER Assays
HEK293T cells were co-transfected with FLAG-SPSB1 and Rluc fusions of NF-κB signaling components for 24 h. Rluc fusions were from Felix Randow (Laboratory of Molecular Biology, University of Cambridge, United Kingdom) and/or have been described previously (28,60-62). Cells were lysed in IP buffer (20 mM Tris-HCl pH 7.4, 150 mM NaCl, 10 mM CaCl2, 0.1% Triton-X, and 10% glycerol) supplemented with protease inhibitors (Roche), and cleared lysates were subjected to affinity purification (AP) with streptavidin beads for 6 h at 4 °C. The beads were then washed 3 times with lysis buffer prior to elution with biotin (10 mg/mL) diluted in PLB. Luciferase activity was measured, and data were plotted as binding fold over the Rluc-only control.

Pull-Down Assays
HEK293T cells were seeded in 10-cm dishes and transfected with 5 µg of the indicated plasmids using PEI. After 24 h, cells were lysed with IP buffer as above. Cleared lysates were incubated with HA antibody (Sigma) for 16 h at 4 °C, and Protein G beads (Santa Cruz) were subsequently added for a further 2 h. For IP at endogenous levels, cleared lysates from 15-cm dishes of A549 cells were incubated with p65 antibody or an isotype control and Protein G beads as above. For streptavidin affinity purification assays, HEK293T cells in 10-cm dishes were transfected with 5 µg of the indicated plasmids using PEI. After 24 h, cells were lysed with IP buffer as above. Cleared lysates were incubated with streptavidin beads (Sigma) for 16 h at 4 °C. The beads were then washed 3 times with IP buffer prior to incubation at 95 °C for 5 min in Laemmli loading buffer to elute bound proteins. Cleared lysates and pull-down fractions were analyzed by SDS-PAGE and immunoblotting. Data shown are representative of at least 3 independent experiments showing similar results.

Statistical Analysis
Statistical significance was determined using an unpaired Student's t-test with Welch's correction where appropriate.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS
IG planned and performed experiments, carried out data analysis, and prepared and edited the manuscript. CM planned and performed experiments, aided in data analysis, and wrote the manuscript.

FUNDING
This work was supported by an Asthma UK Innovation Grant (AUKIG2016353) to CM.
Effect of radioiodine treatment on muscle mass in hyperthyroid cats

Abstract
Background: Approximately 75% of hyperthyroid cats lose muscle mass as assessed with a muscle condition scoring (MCS) system. After treatment, MCS improves as the cats regain muscle mass. Objectives: To quantify the degree of muscle loss in hyperthyroid cats using ultrasonography and evaluate changes in muscle mass after treatment. Animals: Forty-eight clinically normal cats and 120 cats with untreated hyperthyroidism, 75 of which were reevaluated after radioiodine-131 therapy. Methods: Prospective cross-sectional and before-after studies. All cats underwent ultrasonography and measurement of epaxial muscle height (EMH), with subsequent calculation of vertebral and forelimb epaxial muscle scores (VEMS and FLEMS). A subset of hyperthyroid cats underwent repeat muscle imaging 6 months after treatment. Results: Untreated hyperthyroid cats had a lower EMH than did clinically normal cats (median [25th-75th percentile], 0.98 [0.88-1.16] cm vs 1.34 [1.23-1.58] cm, P < .001). Seventy-seven (64.2%) untreated cats had subnormal EMH. Similarly, compared to normal cats, hyperthyroid cats had lower VEMS (0.93 [0.84-1.07] vs 1.27 [1.18-1.39], P < .001) and FLEMS (1.24 [1.10-1.35] vs 1.49 [1.39-1.63], P < .001). After treatment, EMH increased (1.03 [0.89-1.03] cm to 1.33 [1.17-1.41] cm, P < .001), with abnormally low EMH normalizing in 36/41 (88%). Both VEMS (0.94 [0.87-1.10] to 1.21 [1.10-1.31], P < .001) and FLEMS (1.31 [1.17-1.40] to 1.47 [1.38-1.66], P < .001) also increased after treatment. Conclusions and Clinical Importance: Almost two-thirds of hyperthyroid cats have abnormally low muscle mass when measured quantitatively by ultrasound. Successful treatment restores muscle mass in >85% of cats. EMH provided the best means of quantitating muscle mass in these cats.

| INTRODUCTION
Hyperthyroidism is a catabolic state that leads to loss of body weight because of decreases in both fat stores and lean body mass (primarily muscle). In untreated human patients, initial weight loss is predominantly caused by a loss of muscle mass rather than a loss of fat.1-3 After successful treatment for hyperthyroidism, weight gain is expected, but the pattern of recovery is reversed, with muscle mass being restored before fat deposits.1,3-7 In cats, weight loss is the earliest and most common clinical feature of hyperthyroidism.8 As in human hyperthyroid patients, this weight loss appears to be largely because of loss of muscle mass, affecting >75% of hyperthyroid cats in 1 study9 when assessed clinically by use of a 4-point muscle condition scoring (MCS) system.10,11 Similar to human patients, the hyperthyroid cats' muscle condition scores improved after treatment, as the cats became euthyroid and regained their lost body weight.9 The MCS system is a clinical, semiquantitative method based on physical assessment of a cat's muscle mass (ie, visualization and palpation of musculature over the spine, scapulae, skull, and pelvis). This can lead to problems of interobserver reproducibility. In 1 study designed to validate its accuracy in cats, this system had good intrarater agreement (repeatability), but only fair inter-rater agreement (reproducibility), especially for cats with mild to moderate muscle loss.12 Similar findings are reported for the repeatability and reproducibility of MCS in dogs.13
Therefore, although clinically useful in cats and dogs when performed by single observers, results are subjective and can vary when a cat is evaluated by multiple clinicians. In addition, MCS might not be precise enough for research studies, especially when changes in muscle mass are small or when one is quantifying changes in muscle mass over time. Several other more objective, quantitative techniques have been used to estimate body composition (specifically muscle mass) in cats, including deuterium oxide dilution,14,15 bioelectrical impedance analysis,16-18 magnetic resonance imaging,15 computed tomography,19 and dual-energy X-ray absorptiometry (DXA).15,18-21 However, all of these methods have a cost disadvantage, typically require general anesthesia or heavy sedation, and have other limitations that make them less accurate for the measurement of muscle.15,20 To more easily identify and quantitatively monitor the treatment of muscle wasting syndromes (eg, cachexia or sarcopenia), clinical investigators have sought an accurate, noninvasive method to assess muscle loss. Recently, investigators have evaluated ultrasound imaging of muscle as a quantitative measure of muscle loss in humans,22-24 dogs,13,25,26 and cats.27,28 In dogs and cats, investigators measured epaxial muscle height (EMH) from transverse ultrasonographic images obtained at the level of the 13th thoracic vertebra (T13) and validated this approach in healthy cats and cats with various degrees of muscle loss.13,25-28 The method displayed good repeatability and reproducibility. In this study, we sought to quantify the prevalence and degree of muscle loss in untreated hyperthyroid cats using ultrasound imaging. We then prospectively followed a subset of these cats to examine the effects of successful radioiodine treatment on their muscle mass using ultrasound imaging.

| Study cohorts and study design
This study was conducted in 2 phases. The first phase was a prospective cross-sectional study conducted from January 2018 to January 2021 that included both hyperthyroid cats referred to our clinic for evaluation before radioiodine treatment and a cohort of clinically normal cats. The second phase was a before-after study29,30 involving a subset of the cats from the initial study that were reevaluated approximately 6 months after treatment with radioiodine. Eligibility was assessed by serum thyroid hormone testing (including thyroxine [T4] and thyroid stimulating hormone [TSH]).31,32 All study cats also underwent quantitative thyroid scintigraphy, which was used as the reference standard to confirm hyperthyroidism.33-35 We excluded cats if they had received methimazole within 15 days of evaluation or if they had concurrent nonthyroidal disease, such as azotemia (Figure 1).

| Initial cross-sectional study
On the day of treatment with radioiodine (131I), a single investigator (PX) weighed each cat and assigned a body condition score (BCS; 9-point scale) and muscle condition score (MCS; 4-point scale).10,11 A right lateral radiograph (centered over the 4th thoracic vertebra) was then obtained using a computed radiography system (Neovet V210, Sedecal, Madrid, Spain). One investigator (SIS) measured the length of the 4th thoracic vertebra using the line tool on a DICOM workstation (CR 30-XB, Agfa Healthcare, Mortsel, Belgium); the vertebral length was measured 3 times and the results averaged, as previously described.27,28
The same investigator (SIS) also measured the forelimb circumference at the approximate midpoint between the carpus and elbow joints of the left forelimb of each cat.27,28 Ultrasonographic measurements were next obtained at the level of the 13th thoracic vertebra with a 7.5 to 12 MHz multifrequency transducer. To address differences in the size of our cats, EMH was also normalized on the basis of 4th thoracic vertebral length and forelimb circumference25-28 by calculating ratios for the vertebral epaxial muscle score (VEMS = EMH/length of the 4th thoracic vertebra) and the forelimb epaxial muscle score (FLEMS = EMH/forelimb circumference).36-38

| Reevaluation after radioiodine treatment
Cats that remained hyperthyroid were excluded from this part of the study (Figure 1). The primary investigator reweighed all eligible study cats and assigned a BCS and MCS. All cats then underwent repeat muscle ultrasound and forelimb circumference measurement, as described above, without knowledge of the cats' pretreatment values. We did not repeat the chest radiography or remeasure the length of the 4th thoracic vertebra, but used the original length to calculate the VEMS.

| Data and statistical analyses
Data were assessed for normality by the D'Agostino-Pearson test and by visual inspection of graphical plots.39 Data were not normally distributed; therefore, all analyses used nonparametric tests. Results for continuous data (eg, EMH, VEMS, FLEMS) are expressed as median (interquartile range [IQR], 25th-75th percentile) and are represented graphically as box-and-whisker plots (Tukey method).40 Reference intervals (RI) for EMH, forelimb circumference, VEMS, and FLEMS were established by the robust method using Box-Cox transformed data from the results of our 48 clinically normal cats (Table 1).41,42 The 90% confidence intervals (CI) for the upper and lower limits of the RI were also calculated. In our initial cross-sectional study, we compared continuous variables between the hyperthyroid and clinically normal cat groups by the Mann-Whitney U-test. In the subgroup of hyperthyroid cats that were reevaluated after successful treatment with radioiodine, we compared the before-and-after variable groups by the Wilcoxon signed-rank test. BCSs were determined on a 9-point scale, with a score of 5/9 designating an ideal body weight, scores of ≤4/9 being underweight, and ≥6/9 being overweight.10

| RESULTS
3.1 | Initial cross-sectional study (untreated hyperthyroid cats and clinically normal cats)
Hyperthyroid cats: During the 3-year study period, we evaluated 134 hyperthyroid cats, of which 120 met the eligibility requirements (Figure 1). The 120 cats ranged in age from 7 to 20 years (median, 13 years; IQR, ...; Figure 2).

Length of 4th thoracic vertebrae and forelimb circumference: Hyperthyroid cats had a 4th thoracic vertebral length that was not different from that of euthyroid cats (median, 1.06 vs 1.07 cm; P = .5; Figure 3), but a forelimb circumference less than that of euthyroid cats (8.2 vs 9.1 cm; P < .001; Figure 4). Eighteen (15%) of the 120 untreated hyperthyroid cats had a forelimb circumference below the reference limit for normal cats.

Vertebral epaxial muscle score: Thirty-one of the untreated hyperthyroid cats had a low VEMS (Table 1 and Figure 5); 29 of these 31 cats also had a low to low-normal EMH.

[Figure 4: Boxplots of the forelimb circumference in 120 untreated hyperthyroid cats and 48 clinically normal euthyroid cats. See Figure 2 for key.]
[Figure 5: Boxplots of the vertebral epaxial muscle score (VEMS) in 120 untreated hyperthyroid cats and 48 clinically normal euthyroid cats. See Figure 2 for key.]
[Figure 6: Boxplots of the forelimb epaxial muscle score (FLEMS) in 120 untreated hyperthyroid cats and 48 clinically normal euthyroid cats. See Figure 2 for key.]

3.2.2 | EMH, VEMS, and FLEMS (before-after cat study)
Forelimb circumference: In the 48 hyperthyroid cats evaluated both before and after treatment, forelimb circumference increased from a median of 8.5 cm to 9.2 cm (P < .001; Figure 9). Before treatment, 6 (13%) of the 48 hyperthyroid cats had a small forelimb circumference; all normalized after 131I treatment.

[Figure 8: Boxplots of the vertebral epaxial muscle score (VEMS) in 75 hyperthyroid cats evaluated before and after treatment with radioiodine. See Figure 2 for key.]
[Figure 9: Boxplots of the forelimb circumference in 45 hyperthyroid cats evaluated before and after treatment with radioiodine. See Figure 2 for key.]
[Figure 10: Boxplots of the forelimb epaxial muscle score (FLEMS) in 48 hyperthyroid cats evaluated before and after treatment with radioiodine. See Figure 2 for key.]

| Hyperthyroid cats
In the 120 hyperthyroid cats studied before treatment, body weight ... When the prevalence of muscle loss using MCS and EMH measurements was compared, we identified more hyperthyroid cats with muscle loss using MCS than EMH. Of the 120 untreated cats, 95 (79.2%) had low MCSs, whereas 75 (62.5%) had low EMH (P = .007). All 20 of these discordant cats (ie, low MCS with normal EMH) had EMH values within the lower quartile of the reference interval (ie, low-normal values for EMH). However, of the 25 hyperthyroid cats judged to have normal muscle mass with MCS, 3 (12%) had low EMH when measured by ultrasound. Therefore, although the MCSs correlated well with the EMH values, 23 (19.2%) of the 120 untreated cats had discordant values.
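As a worked illustration, the sketch below recomputes the muscle scores from the reported medians. The explicit formulas were lost in extraction: VEMS = EMH/T4 length reproduces the reported values directly, while the reported FLEMS values are consistent with an effective factor of 10 (eg, EMH expressed in millimeters over circumference in centimeters); that scaling is our inference, not a quoted definition. Small discrepancies are expected because a median of ratios is not the ratio of medians.

```python
# Reconstructed muscle-score arithmetic; FLEMS scaling (x10) is inferred.
def vems(emh_cm: float, t4_length_cm: float) -> float:
    return emh_cm / t4_length_cm

def flems(emh_cm: float, forelimb_circ_cm: float) -> float:
    return 10.0 * emh_cm / forelimb_circ_cm  # factor of 10: our inference

# Medians reported above, untreated hyperthyroid vs clinically normal cats:
print(round(vems(0.98, 1.06), 2), round(vems(1.34, 1.07), 2))   # ~0.92, ~1.25
print(round(flems(0.98, 8.2), 2), round(flems(1.34, 9.1), 2))   # ~1.20, ~1.47

# Ratio masking (see the discussion below): after 131-I, EMH and forelimb
# circumference both rise (medians 1.03 -> 1.33 cm and 8.5 -> 9.2 cm), so
# FLEMS changes less, in relative terms, than EMH itself.
emh_gain = (1.33 - 1.03) / 1.03
flems_gain = (flems(1.33, 9.2) - flems(1.03, 8.5)) / flems(1.03, 8.5)
print(f"EMH +{emh_gain:.0%} vs FLEMS +{flems_gain:.0%}")  # +29% vs +19%
```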
| DISCUSSION
We found that about two-thirds of hyperthyroid cats have mild to severe muscle loss when quantified by ultrasound muscle imaging. This finding is similar to the 75% prevalence of muscle loss reported in hyperthyroid cats evaluated clinically with a subjective muscle condition scoring system.9 After successful 131I treatment, over 90% of cats with muscle wasting regained lost muscle mass, as demonstrated by normalization of their epaxial muscle height. Therefore, this study also confirms that successful treatment of hyperthyroidism will improve or resolve muscle loss in most cats.9

The MCS system involves a subjective, physical assessment of a cat's muscle mass, which includes visualization and palpation of the musculature over the spine, scapulae, skull, and pelvis, with results reported as normal muscle or mild, moderate, or severe muscle loss.10,11 In contrast, ultrasound imaging, as performed in this study, is a quantitative measure of muscle mass, although it takes more time to perform than muscle condition scoring. However, ultrasound is a noninvasive method that can be readily performed in clinical practice without the need for anesthesia or the purchase of additional, expensive equipment. In our study, the MCSs of our hyperthyroid cats correlated well with epaxial muscle measurements obtained with ultrasound imaging, similar to the results of a recent study of 40 cats with nonthyroidal illness (eg, because of CKD or neoplastic, cardiac, or hepatic disease).28 Quantitative ultrasound imaging, however, appears to be a more accurate measurement of muscle mass than the qualitative clinical MCS system for assessment of muscle loss. This is especially true in cats with early or mild muscle loss, when the clinical MCS system is less accurate.28 Overall, muscle ultrasound provides a clinically feasible method for easily detecting and quantifying early muscle loss, as well as for monitoring changes in a cat's muscle mass over time. As an outcome measure, quantitative ultrasound is certainly recommended over MCSs for use in research studies, particularly when small changes in muscle mass need to be detected.

We used a described protocol for ultrasound muscle imaging of cats27,28 and dogs.13,25,26 In those studies, investigators normalized epaxial muscle height (EMH) to vertebral length or forelimb circumference to address differences in the size and weight of individual dogs and cats.25-27 Although this appeared to be useful in dogs and clinically normal cats,13 we found the use of forelimb circumference as a potential means of normalizing EMH in hyperthyroid cats to be even more problematic than VEMS. The forelimb circumference of our untreated hyperthyroid cats was ≈1 cm smaller than that of clinically normal cats (Figure 4), indicating that weight loss likely contributed to the smaller forelimb circumference in hyperthyroid cats. After 131I treatment, forelimb circumference increased and normalized in all of these cats as they gained weight (Figure 9). Given that normalization requires a fixed (unchanging) constant against which to index, these marked before-after changes in forelimb circumference (the denominator) make the use of this ratio questionable. In untreated hyperthyroid cats, the smaller forelimb circumference would tend to falsely increase the apparent FLEMS value, whereas the increase in forelimb circumference that developed after successful treatment would tend to mask the associated increase in muscle thickness (because the numerator and denominator are both increasing). Overall, EMH alone appeared to be the best quantitative index for quantitating muscle mass with ultrasound imaging, at least in cats with hyperthyroidism. This deduction agrees with the conclusions of a study of cats with nonthyroidal illness,28 in which EMH provided the best means of assessing and monitoring muscle mass in cats.

In this study, we identified more cats with muscle loss using MCS than EMH (≈80% vs 65%). We did not study the biological variation in EMH in this study, but our findings suggest that some cats might simply be more muscular than others. In other words, some hyperthyroid cats will lose small to moderate amounts of muscle mass but still maintain an EMH within the established reference interval. After the hyperthyroid state resolves, these cats regain lost muscle mass, with their low-normal EMH values increasing into the mid-normal or even high-normal reference interval.

The present study had several important limitations. First, no reference standard for determining muscle mass was used in this study, so the true accuracy of our ultrasonographic measurements cannot be determined from our results. However, the EMH (as well as the VEMS and FLEMS) correlated well with the MCSs of our cats, as also reported in other studies in cats,28 suggesting that the ultrasound measurements are clinically meaningful. Another limitation was the unblinded nature of our study; that is, the investigators knew which cats were hyperthyroid vs clinically normal and had access to the pretreatment data when reevaluating the cats after 131I treatment.
A final limitation of this study was that the EMH was measured at only a single vertebral location (ie, T13); additional studies are needed to determine whether the degree of muscle loss in cats with hyperthyroidism is comparable along the entire length of the thoracic vertebrae. Regardless of these limitations, our results confirm that ultrasonographic measurement of EMH at the level of T13 can be used for quantitative assessment of muscle mass, and that these measurements agree with those obtained with the clinical MCS system.

In conclusion, most hyperthyroid cats evaluated in this study had loss of muscle mass when evaluated by a clinical MCS system or with ultrasound imaging. After successful treatment, normal muscle mass was restored in most hyperthyroid cats as weight was regained. Overall, MCS is ideal for clinical use in cats because it can be quickly and easily performed. However, it might not be precise enough as an outcome measure for research studies when the changes in muscle mass are small or when quantification of muscle mass over time is needed. Ultrasound imaging offers a clinically feasible alternative method for monitoring and quantifying muscle loss that can easily be performed in clinical feline practice.

ACKNOWLEDGMENT
No funding was received for this study. The preliminary results of this study were presented as an oral Research Report at the XXXVII VetMadrid Veterinary Congress (2020). We thank Dr. Mark Rishniw for editorial and statistical assistance.
Enabling 5G on the Ocean: A Hybrid Satellite-UAV-Terrestrial Network Solution

(X. Li, W. Feng (corresponding author), and N. Ge are with Tsinghua University; J. Wang is with Nantong University; Y. Chen is with University of Warwick; C.-X. Wang is with Southeast University and Purple Mountain Laboratories. W. Feng and J. Wang are also with the Peng Cheng Laboratory.)

Current fifth generation (5G) cellular networks mainly focus on the terrestrial scenario. Due to the difficulty of deploying communications infrastructure on the ocean, the performance of existing maritime communication networks (MCNs) is far behind 5G. This problem can be solved by using unmanned aerial vehicles (UAVs) as agile aerial platforms to enable on-demand maritime coverage, as a supplement to marine satellites and shore-based terrestrial base stations (TBSs). In this paper, we study the integration of UAVs with existing MCNs, and investigate the potential gains of hybrid satellite-UAV-terrestrial networks for maritime coverage. Unlike the terrestrial scenario, vessels on the ocean keep to sea lanes and are sparsely distributed. This provides new opportunities to ease the scheduling of UAVs. Also, new challenges arise due to the more complicated maritime propagation environment, as well as the mutual interference between UAVs and existing satellites/TBSs. We discuss these issues and show possible solutions considering practical constraints.

I. INTRODUCTION

With the continuous development of marine activities, the demand for maritime broadband communications increases dramatically. To date, the data rate that can be supported on the ocean has approached only a few Mbps [1]. However, this is still far below that supported by the fifth generation (5G) cellular network at the scale of Gbps. To meet the dramatically increasing demand, new solutions for maritime communication networks (MCNs) have become a pressing need. Different from the urban area, it is challenging to densely deploy base stations on the ocean. In order to extend 5G services to the ocean, Ericsson and China Mobile jointly established a Time Division Long Term Evolution (TD-LTE) trial network in the Qingdao sea area of China. By building shore-based terrestrial base stations (TBSs) along the coast, this trial network can provide broadband communication services for an area of up to tens of kilometers away from the shore [2]. To further extend the coverage, a multi-hop system was adopted in the TRITON [3] and BlueCom+ [4] projects. In these projects, vessels were employed as relay nodes to enhance communications. Moreover, tethered balloons at an altitude of 120 m were utilized in the BlueCom+ project to further enhance the coverage of vessel-based relays. It can offer data rates in excess of 3 Mbps up to 150 km offshore. However, as most vessels follow fixed sea lanes to avoid shipwrecks, this multi-hop solution lacks flexibility. Coverage holes may exist in areas far away from the sea lanes of the relay nodes. To cover more remote areas far away from the coast, satellites can be exploited. The most well-known solution is the marine satellite, i.e., the Inmarsat. Because of its inherently long transmission distance, the data rate of satellite communications is usually much lower than that of terrestrial 5G. In order to meet the increasing data demand, developing high-throughput satellites has attracted much research attention.
For example, Inmarsat's fifth-generation (Inmarsat-5) satellite network, deployed in the Geostationary Earth Orbit (GEO), can offer Ka-band services of 50 Mbps forward and several Mbps return data rates [5]. Besides, the Iridium NEXT system, consisting of 66 Low Earth Orbit (LEO) satellites at an altitude of 780 km, is expected to offer Ka-band services with data rates of up to 8 Mbps [6]. These efforts have substantially improved the performance of satellite communications. However, the large communication delay remains an open issue. Moreover, these new developments require dedicated terminals using high-gain antennas.

In addition to shore-based TBSs and marine satellites, high-altitude aerial platforms (HAPs) can also be used as communications infrastructure. For example, the Loon project employs super-pressure balloons at an altitude of around 20 km to realize broadband coverage for countryside and remote areas [7]. This network was reported to provide communication services of up to 10 Mbps. Also, as aerial communication platforms, unmanned aerial vehicles (UAVs) are more agile than balloons, due to their better mobility at a lower altitude. Most existing studies on UAV communications focus on the terrestrial scenario, where it has been recognized that UAVs are promising for dynamic coverage enhancement [8,9]. Although the maritime environment is quite different from terrestrial scenarios, we believe that UAVs can also serve as agile aerial platforms above the ocean to enable on-demand maritime coverage enhancement.

In this article, we investigate the integration of UAVs with existing MCNs. In the concerned scenarios, UAVs are flexibly deployed to fill the broadband coverage holes on the ocean that cannot be covered by conventional shore-based TBSs and marine satellites. The integration leads to a hybrid satellite-UAV-terrestrial network architecture. We investigate the potential gains of hybrid satellite-UAV-terrestrial networks for maritime coverage in the 5G era. Different from existing studies on UAV communications, vessels are the main users on the ocean; they often keep to sea lanes for safety and are sparsely distributed. These properties render it possible to elaborately schedule the UAVs to match the user demand. Within this framework, we also discuss new challenges of deploying UAVs above the ocean, which stem from the more complicated maritime propagation environment, as well as the mutual interference between UAVs and existing shore-based TBSs/satellites.

A. Agile Mobility of UAVs

As illustrated in Fig. 1, a hybrid satellite-UAV-terrestrial maritime network can be established by integrating UAV communications into existing MCNs. In contrast to on-shore TBSs and satellites, the unique advantage of UAVs lies in their agile mobility. In the following, we compare the mobility of TBSs, satellites and UAVs.

• TBS. In general, TBSs should be deployed on mountains or highly elevated towers along the coastline. Thus, the deployment of TBSs is quite limited and fixed in practice. To enhance mobility, shipborne base stations may be deployed, which play a similar role to TBSs. However, their mobility remains limited due to the restriction of sea lanes. This restriction cannot be broken in general, because it concerns the navigation safety of the corresponding vessel.

• Satellite. According to orbital dynamics, the deployment of satellites is largely restricted.
For instance, both the aforementioned Inmarsat-5 and Iridium NEXT satellites follow certain orbits mainly determined by astrodynamics. In general, we are able to choose a proper orbit, but cannot create an arbitrary orbit. For this reason, an expensive LEO constellation is usually necessary to achieve global coverage.

• UAV. The UAV has the most flexible deployment. As shown in Fig. 1, a UAV can fly with the target vessel, so as to provide on-demand broadband communication services. Nevertheless, the endurance of UAVs is usually limited because of the limited energy onboard. Likewise, weather conditions also impose restrictions on the deployment of UAVs. Therefore, it is necessary to optimize the scheduling of UAVs considering all practical constraints.

In summary, the agile mobility of UAVs is unique and quite valuable, because the access point carried by a UAV can fly closer to the target user, thereby significantly improving the transmission rate and shortening the communication latency. By exploiting the characteristics of maritime user distribution and predictable mobility, as described in the following, it is possible to improve the UAV efficiency in maritime communications.

B. Unique Characteristics of Maritime Users

Different from the terrestrial case, where the majority of users move randomly, vessels on the ocean have unique characteristics in terms of both distribution and mobility. Their distribution is both spatially and temporally sparse on the vast ocean. As an example, we show the typical vessel distribution within a coastal area of China in Fig. 2. Practical Automatic Identification System (AIS) data is used to obtain the distribution. For the spatial domain, the latitude is in the range of [22.5°N, 37.3°N] and the offshore distance is in the range of [20, 30] km. For the temporal domain, a period from 1st October 2015 to 3rd October 2015 is taken into account. In the figure, the number of vessels appearing during one hour in a square area 0.1° of latitude in length and 10 km in width is accumulated as one data point. It is shown that vessels are sparsely distributed in both the spatial and time domains. For most of the areas, the color map is dominated by dark blue, indicating that very few users (or even no user) are distributed in these areas. The red line around latitude 30°N indicates the existence of a sea lane. Actually, most maritime users follow fixed shipping lanes rather than move randomly. We further illustrate various sea lanes in Fig. 3. The curves in the figure are obtained from 610

C. On-Demand Coverage by Maritime UAVs

Exploiting both the agile mobility of UAVs and the unique characteristics of vessels, it is natural to design an on-demand coverage framework. For a vessel user that requires broadband services outside the coverage area of existing MCNs, a UAV can be dispatched. The UAV can either work in a serve-and-leave manner, or it can move with the vessel to guarantee long-term broadband services. After the transmission task has been accomplished, the UAV flies back to the charging station, or towards the next vessel user in the service queue. This is quite different from conventional UAV communications, where users are assumed to be fixed or to have random distribution and moving patterns. Compared with TBSs and satellites, which can also support on-demand coverage with dynamic beams at the cost of expensive antenna arrays, the mobile agility of UAVs makes it possible to accomplish this in a more efficient way.
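For readers who wish to reproduce this kind of density map, the sketch below bins vessel position reports into the same 0.1° latitude × 10 km × 1 hour cells. The record format (hours since start, latitude in degrees, offshore distance in km) is a simplification of real AIS messages, and the sample values are invented.

    # Minimal sketch of the spatio-temporal binning described above,
    # assuming hypothetical AIS records (timestamp_hours, latitude_deg,
    # offshore_distance_km). Counts vessel reports per
    # 0.1 deg x 10 km x 1 h cell.
    from collections import Counter

    def bin_ais(records, lat_step=0.1, dist_step=10.0, time_step=1.0):
        """Count vessel reports per (hour, 0.1 deg latitude, 10 km) cell."""
        cells = Counter()
        for t_hours, lat_deg, offshore_km in records:
            key = (int(t_hours // time_step),
                   int(lat_deg // lat_step),
                   int(offshore_km // dist_step))
            cells[key] += 1
        return cells

    # Hypothetical records: (hour since start, latitude deg, offshore km)
    records = [(0.2, 30.01, 25.0), (0.7, 30.05, 24.0), (5.3, 23.40, 22.0)]
    heatmap = bin_ais(records)
    print(heatmap)  # dense cells along a sea lane would dominate the map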
In particular, UAVs can be dynamically deployed to cover only the sea lanes. In the temporal domain, communication requests may appear intermittently. Then, UAVs can be flexibly and dynamically scheduled according to the time-varying communication demand. This implies that UAV-enabled maritime on-demand coverage has great potential to improve the efficiency of maritime communications.

III. CHALLENGES AND POSSIBLE SOLUTIONS

In this section, we discuss the challenges of integrating UAVs into existing MCNs. They are summarized in the following three aspects: 1) the harsh maritime environment may affect the real-time deployment of UAVs, 2) the hybrid network architecture requires joint resource allocation and interference coordination, and 3) limited channel state information, due to the dynamic propagation environment and large transmission delay, brings new challenges to system optimization.

A. Harsh Maritime Environment

Different from the terrestrial case, the maritime environment is seriously affected by weather conditions, such as typhoons and disastrous waves. In the extreme case, the wind speed caused by a typhoon can be larger than 30 m/s. Most existing UAV products are not designed for all-weather service. As summarized in Table I, they are more likely to be deployed under relatively good weather conditions, i.e., wind speeds smaller than 17.1 m/s. In practice, the UAV used in the maritime environment should be carefully chosen. Another important issue is that the vast sea area makes it difficult for UAVs to land and charge, which seriously restricts UAV deployment in practice.

Offline deployment of UAVs taking these restrictions into account is a possible solution. As discussed in the previous section, most vessels travel regularly along sea lanes. This can provide important prior information for the deployment of UAVs. For example, by using historical information on the communication demand over sea lanes, the hotspot areas where broadband coverage is requested can be predicted. By intentionally deploying UAVs within their endurance time over these areas, broadband coverage holes can be efficiently filled. Also, the serving latency can be reduced by this pre-deployment scheme in contrast to the request-triggered temporary dispatch manner. In practice, the time advance and duration of offline deployments should be controlled within the predictable range of the maritime environment. Online decisions should also be activated for better adaptability in extremely dynamic weather conditions. This leads to an online and offline collaboration framework.

As summarized in Table I, the UAV's maximum flight duration is usually less than 8 hours due to the limited energy onboard. The UAV deployment should be carefully determined according to the residual energy, and how to deploy service stations for energy replenishment on the vast ocean becomes an important issue. To address this problem, vessels can be used as service stations. But as discussed above, their locations are restricted to sea lanes. Thus, dedicated and vessel-based service stations should be deployed in a synergetic manner. Note that the offshore distance of the coastal area is about 370 km for the exclusive economic zone. If service stations are only deployed along the coast, then considering the cruising speeds and maximum flight times in Table I, only the oil-powered fixed-wing UAV is capable of a 740 km round trip.
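The round-trip claim above follows from a simple range calculation: a UAV based on the coast must cover roughly 2 × 370 km to reach the edge of the exclusive economic zone and return. The sketch below makes this explicit; the speed and endurance figures are hypothetical stand-ins for the entries of Table I.

    # Back-of-envelope feasibility check implied above: a round trip to
    # the edge of the exclusive economic zone (~370 km offshore) requires
    # range = cruise_speed * max_flight_time >= 740 km.
    def round_trip_feasible(cruise_speed_mps, max_flight_h, offshore_km=370.0):
        range_km = cruise_speed_mps * 3.6 * max_flight_h  # m/s -> km/h
        return range_km >= 2 * offshore_km

    print(round_trip_feasible(40.0, 8.0))   # fixed-wing example: True (1152 km)
    print(round_trip_feasible(15.0, 0.5))   # small rotary-wing: False (27 km)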
The other UAVs listed in Table I can only work in areas near the coast if vessel-based service stations are not available.

To achieve continuous communication using energy-limited UAVs, efficient scheduling of a swarm of UAVs is necessary. Recalling that vessels are sparsely distributed, a maritime UAV is likely to be idle for part of its flying time. Hence, when a UAV has to go to terrestrial/shipborne service stations for energy replenishment, neighbouring idle UAVs with enough residual energy can be dispatched as replacements to guarantee continuous coverage. This leads to another important optimization dimension of UAV scheduling, namely minimizing the number of UAVs scheduled, which not only saves costs but also facilitates management. In the extreme case that there are no neighbouring idle UAVs with enough residual energy, vessels may temporarily request degraded services from existing MCNs.

B. Coordination Issues

Maritime UAVs are part of a hybrid satellite-UAV-terrestrial communication network. They rely on existing MCNs for backhaul links. Different from traditional UAV communications in the cellular architecture, where the backhaul is not crucial due to the ubiquitous coverage of cellular networks, the current MCN is usually not sufficient to build a reliable wireless backhaul for UAVs on the vast ocean. Specifically, TBSs can only support UAVs in the coastal area. When UAVs are far away from the coast, satellites may be the only choice for wireless backhaul, with an inevitably large delay and limited communication rate. Moreover, to communicate with satellites, UAVs should be equipped with airborne high-gain antennas. Considering these facts, the backhaul issue should be taken into account in the scheduling of UAVs. Alternatively, data caching on the UAV can be used, which allows interim outage of the backhaul given the information delay tolerance. In this case, communications, control of the UAV's trajectory and caching need to be jointly designed.

In addition to backhaul, UAVs may also share spectrum with existing MCNs so as to alleviate the spectrum scarcity problem. However, due to the mobility of UAVs, the co-channel interference under spectrum sharing is more complicated than in the traditional case with fixed communications infrastructure [10]. In practice, the trajectory of UAVs can be exploited to predictively characterize the interference distribution. By doing so, process-oriented interference coordination can be derived between UAVs and TBSs/satellites.

C. Limited Channel State Information

To improve the quality of service, the location (or trajectory) planning and the resource allocation for UAVs require channel state information (CSI). However, in the maritime scenario, the CSI is usually difficult to acquire for the following reasons. 1) As previously discussed, the trajectory planning for maritime UAVs is likely to be pre-determined offline. This means that the trajectory optimization has to be conducted using only the predictable CSI, rather than the instantaneous CSI. 2) When UAVs share spectrum with satellites (or TBSs) to improve spectrum efficiency, interference from UAVs to satellite users is inevitable. To mitigate the interference, the CSI between UAVs and satellite users has to be known. However, in practice, there are usually no direct links between UAVs and satellite users for CSI feedback.
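The replacement scheduling described in subsection III-A above can be illustrated with a minimal greedy rule: among idle UAVs, dispatch the nearest one whose residual energy covers both the transit and the service task. All attribute names, the linear energy model, and the numeric values below are hypothetical, not a specification of the paper's scheduler.

    # Greedy replacement sketch (all values hypothetical): pick the
    # nearest idle UAV that can reach the user and still complete the
    # service task on its residual energy, minimizing UAVs activated.
    from dataclasses import dataclass

    @dataclass
    class UAV:
        uav_id: int
        residual_energy_j: float
        dist_to_user_km: float
        idle: bool

    def pick_replacement(uavs, energy_per_km_j, service_energy_j):
        feasible = [u for u in uavs
                    if u.idle and u.residual_energy_j
                    >= u.dist_to_user_km * energy_per_km_j + service_energy_j]
        return min(feasible, key=lambda u: u.dist_to_user_km, default=None)

    fleet = [UAV(1, 5e4, 12.0, True), UAV(2, 1e4, 3.0, True),
             UAV(3, 9e4, 8.0, False)]
    # UAV 2 is nearest but lacks energy; UAV 3 is busy; UAV 1 is chosen.
    print(pick_replacement(fleet, energy_per_km_j=800.0, service_energy_j=2e4))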
This CSI has to be exchanged between the satellite sub-system and the UAV sub-system via a dedicated central processor, which may lead to undesirable delay.

In practice, the large-scale CSI, such as path loss, shadowing, angle of departure, angle of arrival and so on, varies slowly, is closely related to the transceiver's position, and can be predicted using historical and/or pre-measured data [11]. To deal with the challenges mentioned above, utilizing large-scale CSI could be a reasonable choice for the optimization of a hybrid satellite-UAV-terrestrial network. We can create a radio map on the ocean focusing on shipping lanes. The map generates the large-scale CSI for any given position. In its initial stage, dedicated UAVs and vessels can be dispatched to measure the large-scale CSI. Then, communication data containing channel knowledge can be used to update the radio map for better resolution in an online manner. This creates a novel lookup-table approach for CSI acquisition instead of conventional pilot-based channel estimation and feedback approaches. Correspondingly, a new methodology for reliable resource allocation and placement (or trajectory) optimization for UAVs with large-scale CSI, i.e., a radio map in practice, should be conceived.

IV. NUMERICAL EXAMPLE AND DISCUSSIONS

We use an example to show the benefit of hybrid satellite-UAV-terrestrial networking, as illustrated in Fig. 1. The UAVs are dispatched in an on-demand manner: a UAV is sent to the objective vessel on request, and flies back to the service station when the transmission is accomplished. The trajectory of the UAV from time t1 to t3 is pre-designed according to the shipping lane information and the predicted large-scale channel information. On the backhaul side, the UAV directly communicates with the nearest TBS. On the access side, the UAV shares spectrum with satellites in an opportunistic manner. The interference from the UAV to the satellite users is controlled by an interference temperature limitation I. Also, orthogonal resources, e.g., different subcarriers or different time slots, are used to mitigate the interference between the access link and the backhaul link of the UAV.

A typical composite channel model is considered, consisting of both path loss and Rician fading [12,13]. We assume that only the large-scale CSI is available for the UAV pre-deployment. The trajectory and the transmit power of the UAV are jointly optimized to maximize the minimum ergodic achievable rate during the period in which the UAV serves the vessel, under various practical constraints including the maximum transmit power Pmax, the residual energy E, the limited backhaul capacity, and the interference temperature limitation I [14]. The goal of maximizing the minimum achievable rate is to improve the coverage performance, i.e., to promote the worst-case user's performance. Other metrics, e.g., sum rate maximization, can also be pursued according to practical requirements. We assume that the shipping lane of the vessel is known beforehand. Without loss of generality, we assume the vessel is moving from (5.0 × 10^4, 0, 10) m to (6.8 × 10^4, 0, 10) m with a velocity of 10 m/s while the UAV serves it. Via simulation, the minimum ergodic achievable rate during the period is compared for different approaches in Fig. 4, where the simulation parameter setting is described in Table II. First of all, the minimum ergodic achievable rate is compared between the UAV-assisted MCN and the traditional shore-based MCN.
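To give a feel for the composite channel model used in the example, the following Monte Carlo sketch estimates an ergodic achievable rate under distance-dependent path loss and Rician fading. The parameter values are illustrative only and are not the settings of Table II.

    # Monte Carlo sketch of the composite channel: path loss plus Rician
    # small-scale fading; the ergodic rate is averaged over fading draws.
    # All parameter values are hypothetical.
    import numpy as np

    def ergodic_rate_bps_hz(dist_m, p_tx_w=1.0, noise_w=1e-13,
                            alpha=2.2, k_rician=10.0, trials=10000,
                            rng=np.random.default_rng(0)):
        path_gain = dist_m ** (-alpha)                 # large-scale CSI
        # Rician fading: LoS component plus scattered Rayleigh component,
        # normalized so that E[|h|^2] = 1.
        los = np.sqrt(k_rician / (k_rician + 1))
        nlos = np.sqrt(1 / (2 * (k_rician + 1)))
        h = los + nlos * (rng.standard_normal(trials)
                          + 1j * rng.standard_normal(trials))
        snr = p_tx_w * path_gain * np.abs(h) ** 2 / noise_w
        return np.mean(np.log2(1 + snr))               # ergodic rate

    print(ergodic_rate_bps_hz(2_000.0))    # UAV hovering near the vessel
    print(ergodic_rate_bps_hz(50_000.0))   # distant shore-based TBS

Shrinking the link distance enters the rate through the path-gain term, which is why dispatching a UAV close to the vessel improves the worst-case rate even when only large-scale CSI is available.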
For the shore-based MCN, the vessel is directly served by the TBS and we assume that accurate CSI is known at the TBS. Although the UAV-assisted method has additional restrictions, such as limited backhaul capacity and inaccurate CSI, its performance can still be improved by employing the UAV to reduce the transmission distance to the vessel. We also note that it would be inefficient, if not impossible, to directly apply existing UAV scheduling methods (which were designed for the terrestrial scenario) on the ocean. For comparison, the performance of two UAV scheduling algorithms is demonstrated in Fig. 4, including 1) the algorithm in [15], which was designed for the terrestrial scenario and has shown significant gains in improving the performance of cellular networks, and 2) the algorithm proposed in [14], which utilizes only the large-scale CSI and additionally considers constraints on the interference and the maximum transmit power. When I = −40 dBm, the constraint on the interference is loose compared with the others, and hence it can be ignored. In Fig. 4, when Pmax ≥ 28 dBm, I = −40 dBm and E = 1.5 × 10^3 J, the performance does not vary as Pmax increases, and thus the performance is mainly determined by the constraints on the residual energy and the backhaul capacity. Also, when E = 3 × 10^4 J, the constraint on the residual energy can be ignored. When Pmax ≥ 38 dBm and E = 3 × 10^4 J, the effect of the interference constraint can be seen. The performance is improved when I is increased. One sees that by using only the large-scale CSI, better performance can be obtained by our tailored algorithm for maritime applications.

V. CONCLUSIONS

In this article, we have discussed opportunities and challenges for integrating UAVs into existing MCNs. First of all, we have shown that most vessels keep to sea lanes and are sparsely distributed on the ocean. These characteristics of vessels, together with the UAV's agility, bring opportunities to realize on-demand coverage using UAVs. Moreover, the challenges can be addressed by dynamically deploying and scheduling UAVs, designed with consideration of the coordination among TBSs, UAVs and satellites, and optimized using the predictable large-scale CSI. Finally, a case study has been conducted to demonstrate the benefits provided by the hybrid satellite-UAV-terrestrial network.
Geometric Deep Learning sub-network extraction for Maximum Clique Enumeration

The paper presents an algorithm to approach the problem of Maximum Clique Enumeration, a well known NP-hard problem that has several real-world applications. The proposed solution, called LGP-MCE, exploits Geometric Deep Learning, a Machine Learning technique on graphs, to filter out nodes that do not belong to maximum cliques and then applies an exact algorithm to the pruned network. To assess LGP-MCE, we conducted multiple experiments using a substantial dataset of real-world networks, varying in size, density, and other characteristics. We show that LGP-MCE is able to drastically reduce the running time, while retaining all the maximum cliques.

Introduction

Graphs, also called networks, are ubiquitous structures that represent relationships between objects or entities in a variety of domains, such as the social, biological, technological and information ones. In general, networks can be used to model complex systems (hence the name "complex networks") and to study their properties, such as the spread of diseases in a population, the exchange of goods in a trade network, or the diffusion of information in a social network. For instance, in a social network, each person can be represented as a node and the edges between nodes represent friendships or interactions between them; in molecules, nodes represent atoms, and edges represent chemical bonds between them; in a recommender system, nodes represent users or items, and edges represent preferences or co-occurrences between them. Due to the importance of networks in many real-world applications, there is a growing interest in developing efficient algorithms to solve problems on graphs, including trainable ones. In fact, networks have also gained tremendous attention in Machine Learning due to their ability to capture complex dependencies and interactions between data points, allowing topological properties to be taken into account. Graph-based machine learning models leverage the graph structure to learn representations of nodes or entire graphs. In particular, in the last years a new branch of Deep Learning, called Geometric Deep Learning (also known as Graph Representation Learning), has emerged [1]. It aims at learning representations of graphs, nodes and edges in a Euclidean space in order to apply classical Deep Learning algorithms and approach a variety of tasks like classification, regression, clustering, link prediction, etc. The most popular algorithms in this field are Graph Neural Networks (GNNs) [2,3], which are a generalization of Convolutional Neural Networks (CNNs) to operate on graphs. More in detail, they perform message-passing between nodes in the graph, where each node aggregates information from its neighbors (using a learnable function) and then updates its own representation. In this way, the graph structure, along with the input node (and/or edge) features, is taken into account during the learning process. Some examples of Graph Neural Network layers include:

• Graph convolutional networks (GCNs), a special type of graph neural network that uses convolutional aggregations. Applications of classic convolutional neural network (CNN) architectures to machine learning problems, especially computer vision problems, have been hugely successful.

• GraphSAGE, a framework for inductive representation learning on large graphs that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings
for previously unseen data [4].

• Graph attention networks (GATs), neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers. They are able to attend over their neighborhoods' features and enable assigning different weights to different nodes without requiring any kind of costly matrix operation [3].

• Graph Isomorphism Networks (GINs), which generalize the Weisfeiler-Lehman graph isomorphism test and hence achieve higher discriminative power than other GNNs [5].

Unfortunately, many algorithms to analyze or manipulate graphs are computationally difficult (i.e., NP-hard) and cannot be employed in real-world applications due to the large size of the input data and the exponential time complexity of the algorithms. However, we often encounter specific instances of NP-hard problems that can be solved efficiently using ML techniques. By analyzing these specific instances, we can gain insights into the mechanism of the used ML technique and develop algorithms that work well in practice. Furthermore, understanding the properties of specific instances can also help us identify instances that are likely to be easy or hard to solve, which can be useful in designing approximation algorithms or heuristics. Therefore, analyzing specific problems can be a valuable approach to tackling NP-hard problems in practice.

In this paper, among the problems on networks that are NP-hard, we focus on the Maximum Clique Enumeration (MCE) problem. It is a fundamental graph problem that belongs to the class of combinatorial problems. In particular, it consists in enumerating all the cliques of maximum size in an undirected graph [6,7]. Beyond the purely theoretical aspects, finding the maximum cliques has several practical applications: for instance, Liu et al. [8] use MCE to understand the principles of cell organization and discover protein complexes, and Kose et al. [9] derive structural information and correlations between different metabolite levels by enumerating all the maximum cliques. Other applications are presented in [10], where Fukagawa et al. proposed a method for transforming the Tree Edit Distance (TED) between unordered trees problem into a maximum vertex weight clique problem, and Mori et al.
[11] into a maximum clique problem. As mentioned above, the MCE problem is particularly challenging, especially for large networks. In the literature there are several heuristic solvers (see [12,13]) that often provide suboptimal solutions in reasonable time, but only recently has the use of machine learning been investigated [14-16]. For instance, in [17-20] the authors proposed a technique aiming at reducing the MCE search space by pruning the input networks, removing those nodes that probably do not belong to any maximum clique. More precisely, each node v is removed if a classic machine learning algorithm predicts that the probability it belongs to some maximum clique is lower than some threshold. Predictions are performed by using both network measures and statistical properties as features for each node, in order to capture structural properties of the network. Here, we further enhance the state of the art and propose a pre-processing step for the MCE problem inspired by those proposed in [17-19,21]. We name our algorithm Learning Graph Pruning for MCE (LGP-MCE). More specifically, we employ Geometric Deep Learning [22] to implement a network pruning strategy aiming at reducing the running time required to enumerate the maximum cliques. Exploiting the capabilities of Graph Neural Networks, we show that our method is able to drastically prune real-world networks by removing up to ≈99% of nodes and ≈99% of edges while preserving all the maximum cliques. Such pruning ratios translate into speed-ups of up to 21K times for a variation of the Bron-Kerbosch algorithm [23] implemented in the igraph [24] network analysis software. We validate the proposed methodology by training and testing on already pruned instances. In particular, we first remove all the nodes not belonging to the K-core of the network, and then apply the same pipeline described above. While the vertex and edge pruning rates of LGP-MCE drop significantly on some of such K-core pruned networks, we note that they are much denser than the original ones, and thus harder to prune safely.

The paper is organized as follows. Section Materials and Methods briefly introduces the MCE problem and related literature, discusses the proposed approach, and lists the datasets used to validate the results. Finally, section Results reports on the results and discusses them.

Materials and methods

In this section we formalize the MCE problem, describe the heuristic used to reduce the time needed to solve the problem, and finally describe the experimental setup and provide references to the datasets used.

The Maximum Clique Enumeration problem

A clique in graph theory is a subset C of vertices in an undirected graph G(V, E) such that every pair of distinct vertices in C is adjacent, i.e., for every two vertices u, v ∈ C with u ≠ v, there exists an edge (u, v) ∈ E in the graph. In other words, a clique is a complete sub-graph of the original graph, meaning that all the vertices in C are pairwise connected. The size of a clique is the number of vertices it contains, and a maximum clique is a clique of the largest possible size in a given graph. The problem of finding a maximum clique in a graph is known as the maximum clique problem and is NP-hard.

The algorithms for finding and enumerating all the maximum cliques can be divided into the following three families:

1.
Exact solvers utilize a branch-and-bound approach to find an optimal solution, with exponential complexity. This method involves dividing the entire search space into a number of sub-spaces called branches, and iteratively pruning branches that are not useful for the final solution. The decision of whether or not to remove a branch is called the bounding operation. Overall, exact solvers remain a crucial tool in combinatorial optimization and continue to be an active area of research.

2. Heuristic solvers are algorithms that utilize probabilistic models to explore the graph's subsets of vertices in the most efficient way possible, without certainty about the solution's accuracy. Unlike exact solvers, heuristic solvers require polynomial time to find a (often suboptimal) solution. Therefore, it is not guaranteed that a heuristic solver can find all the maximum-size cliques of a graph, nor is it assured that the size of every clique found by the algorithm is accurate.

3. Domain-specific solvers solve the problem on graphs belonging to specific domains, taking advantage of the domain properties.

In the literature there are numerous algorithms that fall into these three families. For instance, the Bron-Kerbosch algorithm [25], used to determine all cliques of maximum size of an undirected graph, is characterized by a computational complexity of O(3^(|V|/3)), where |V| is the number of nodes of the graph. Xu and Zhang, in [26], propose an algorithm that employs an upper bound on the maximum clique size and prunes the search space by eliminating vertices that are unlikely to be part of a maximum clique, and Tsubaki et al., in [27], propose a method based on the observation that high-degree vertices are more likely to be part of maximum cliques, and thus removing low-degree vertices can reduce the search space while maintaining the same maximum clique size. However, the last two approaches do not explicitly report on complexity, but report the results of several benchmarks that, they claim, outperform the state of the art.

The Fast Max-Clique Finder algorithm [28] is a heuristic solver whose computational complexity is O(|V|Δ^2), where Δ is the maximum degree within the graph. Cliquer is another very popular implementation of the algorithm defined by Östergård in 2002 [29], and adopts the branch-and-bound strategy. Since the performance of this algorithm depends on node ordering, it uses a heuristic approach based on vertex coloring. EmMCE [30] is an algorithm that uses external memory to store networks that cannot be allocated within RAM due to excessive size. To improve running time, this algorithm also runs in parallel. Recently, Chatterjee et al. [31] proposed two new heuristic algorithms for the maximum clique problem, based on local search and probabilistic pruning techniques. The algorithms are designed to balance exploration and exploitation of the search space and are shown to outperform existing state-of-the-art heuristic solvers on various benchmarks. For further discussion of MCE algorithms, we refer the Reader to [32].

LGP-MCE

Our heuristic-based approach draws inspiration from the work of Lauri et al.
[17,19,20,33]. It involves enhancing existing solvers by reducing the search space through node pruning in the input networks. Specifically, we propose a pre-processing step where nodes are removed (pruned) based on a Machine Learning model's predicted probability of not belonging to a maximum clique, as long as the probability falls below a defined threshold called the "confidence threshold". The confidence threshold allows a trade-off between accuracy and pruning rates, enabling LGP-MCE to be more flexible and tailored to specific application needs. In fact, higher thresholds may grant higher pruning rates and lower accuracy, while lower thresholds may grant lower pruning rates and higher accuracy. Importantly, LGP-MCE does not affect the computational complexity of the solvers, since the algorithms are not modified; rather, the proposed pre-processing step produces smaller input graphs, thereby improving overall computational time and memory usage.

LGP-MCE differs from previous approaches because it employs Graph Neural Network (GNN) layers instead of classic Machine Learning algorithms, like Multi-Layer Perceptrons and random forests, and it only performs a single pruning stage.

More in detail, we use a Geometric Deep Learning model with Graph Attention Network (GAT) [22,34] layers stacked together with a Multi-Layer Perceptron that, given the node embeddings, performs regression on the nodes and returns a value ranging over [0, 1]. We choose Graph Attention Network layers since they are particularly suitable for our task: they do not perform neighborhood sampling, and they leverage the attention mechanism, proposed in [35] for Natural Language Processing tasks, to assign a relative importance score (i.e., the attention coefficient) to each neighboring node. Furthermore, Graph Attention Network layers use self-attention (through self-loops) and allow multiple attention heads to improve the prediction performance. As in [17], after the pre-processing stage, we feed the pruned network to an exact clique solver (specifically, the one implemented in the Python igraph library) to retrieve all the maximum cliques. It is important to emphasize that alternative clique-finding algorithms, including heuristic methods like MoMC [36], could be employed. The choice of different solvers might influence the overall computational time, yet it does not impact our pre-processing step.

Our model is trained in a supervised manner on a set of real-world networks, where maximum cliques can be efficiently found, and then applied to larger networks. Each node's training target is a binary value, depending on whether or not it belongs to a maximum clique. Regarding the input node features, we compute the normalized node degree, the Local Clustering Coefficient (LCC), the Chi-squared χ² of the normalized degree (with respect to neighboring nodes) and of the Local Clustering Coefficient, and the normalized K-core value. Such features are both local (i.e., computed on the neighborhood of each node) and global (i.e., computed on the whole network), and capture different aspects of the network topology. These features include a subset of those used in [17], since they can be computed in linear time with respect to the number of edges, and they are sufficient to achieve good results. Moreover, the local features are aggregated by the Graph Neural Network layers through a message-passing algorithm with learned weights, producing higher-order node features that better capture the topology.
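A minimal sketch of such a model is shown below, using PyTorch Geometric's GATConv. It illustrates the general architecture (stacked GAT layers followed by an MLP with a sigmoid output), not the authors' exact configuration, which also concatenates a max-pooled graph embedding; the layer sizes are arbitrary.

    # Sketch of a GAT-based node scorer: GAT layers build node
    # embeddings, an MLP head maps each embedding to a score in [0, 1]
    # (predicted probability of belonging to a maximum clique).
    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GATConv

    class NodeScorer(torch.nn.Module):
        def __init__(self, in_feats, hidden=20, heads=8):
            super().__init__()
            self.gat1 = GATConv(in_feats, hidden, heads=heads)
            self.gat2 = GATConv(hidden * heads, hidden, heads=1)
            self.mlp = torch.nn.Sequential(
                torch.nn.Linear(hidden, hidden), torch.nn.ELU(),
                torch.nn.Linear(hidden, 1))

        def forward(self, x, edge_index):
            x = F.elu(self.gat1(x, edge_index))
            x = F.elu(self.gat2(x, edge_index))
            # Nodes whose score falls below the confidence threshold
            # are pruned before running the exact clique solver.
            return torch.sigmoid(self.mlp(x)).squeeze(-1)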
The computational complexity of the proposed pre-processing method is linear in the number of nodes and edges in the network. In fact, the most expensive node feature to compute is the K-core, which is O(|V| + |E|), where |V| and |E| are the number of nodes and edges respectively. On the other hand, the GAT layers have a computational complexity of O(h(|V| + |E|)) for fixed feature dimensions, where h is the number of attention heads used. Thus, the final computational complexity is O(|V| + |E|), which is linear with respect to the number of nodes and edges. Thus, LGP-MCE is computationally efficient, introduces an overhead that is negligible with respect to the computational complexity of the exact clique solvers, and can be applied to very large networks, as we will show in the experimental evaluation, where we use networks with up to 7 million edges. We also note that we train and validate the performance of the proposed approach on networks that have already been pre-processed by removing all the nodes that do not belong to the K-core, as detailed in the following.

Experimental setup

In this Section we describe the experimental settings and the evaluation metrics used to assess the performance of LGP-MCE.

Dataset. We use real-world networks from various domains, including social, biological, and communication networks, obtained from NetworkRepository.com [37]. The networks are downloaded in batch and transformed into undirected graphs, while self-loops and parallel edges are removed. Then, the networks are pruned by removing all the nodes that do not belong to the maximum K-core of the network. This final step, as elaborated in Section LGP-MCE, serves as a benchmark strategy utilized in other works, intended to extract the denser segment of the network. Although this process may result in the loss of some maximum cliques, the resultant network instance is notably denser than the original, serving as a hard-to-safely-solve benchmark for evaluating the algorithm.

After this elaboration, we split the networks into training and test sets. In particular, for training we use ≈350 networks with a number of nodes from 1K to 70K, a number of edges from 1K to 1.8M, and a maximum clique size from 5 to 108. For the test set, we use 32 networks with a number of nodes from ≈300 to ≈200K, a number of edges from 20K to 7M, and a maximum clique size from 15 to 200. The test networks are also varied in terms of average degree, degree assortativity, density, clustering coefficient, and K-core value. For the full list and statistics of the test networks used in our experiments, see Table 1.

Table 1. Test networks used in our experiments, along with their statistics. The networks are already pruned using the largest K-core value, and are obtained from the Network Repository [37]. The elaboration of the networks and the feature computation are performed using the graph-tool library [38], while the maximum cliques are found using the igraph library [24].
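For illustration, the K-core pruning, the per-node features described in the previous subsection, and the training targets can be computed as follows with networkx (the paper itself uses graph-tool and igraph); the χ² features are omitted for brevity, and label computation requires enumerating maximal cliques, which is only feasible on the smaller training networks.

    # Dataset preparation sketch: max K-core pruning, node features,
    # and binary labels (1 iff the node is in some maximum clique).
    import networkx as nx

    def prepare_instance(G: nx.Graph):
        G = nx.k_core(G)                                  # keep the max K-core
        n = G.number_of_nodes()
        deg = {v: d / max(n - 1, 1) for v, d in G.degree()}  # normalized degree
        lcc = nx.clustering(G)                            # local clustering coeff.
        core = nx.core_number(G)
        kmax = max(core.values())
        kcore = {v: core[v] / kmax for v in G}            # normalized K-core

        cliques = list(nx.find_cliques(G))                # all maximal cliques
        omega = max(len(c) for c in cliques)              # maximum clique size
        in_max = set().union(*(c for c in cliques if len(c) == omega))
        labels = {v: int(v in in_max) for v in G}         # training target
        return G, deg, lcc, kcore, labels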
Model architecture. We use Geometric Deep Learning models with Graph Attention Network layers [22] to learn the node embeddings, which are then classified by a Multi-Layer Perceptron (MLP). The activation function used between the layers is the Exponential Linear Unit (ELU), whereas the output layer uses the Sigmoid function to produce values between 0 and 1. We build a model that also computes a graph embedding by aggregating all the node embeddings using the max-pooling operation. This embedding is then concatenated to the node embeddings and fed to the MLP, aiming to capture the global topology of the network and improve the classification performance.

Training. We train our models in a supervised manner using the Adam optimizer [39], a computationally efficient algorithm that is able to deal with both sparse gradients and non-stationary objectives. As the loss function, we use the Binary Cross Entropy, which is frequently employed in binary classification tasks in machine learning, as it quantifies the dissimilarity between predicted probabilities and actual binary labels [40]. Since the dataset is heavily unbalanced, we use different positive and negative weights in the loss function to account for the different numbers of positive and negative examples in the training set. The hyper-parameters are tested in a grid search, and the best performing model parameters are selected, during training, based on the balanced accuracy and recall metrics on the training set itself. To make the training more efficient, we use the Early Stopping technique, which stops the training when the scores do not improve for a certain number of epochs.

Performance measures. To evaluate the performance of the pruning obtained with LGP-MCE, we define the following measures:

similarity = #MC_p / #MC (computed when |MC|_p = |MC|),
speedup s = t(G) / t(G_p),
N_Rp = (|V| − |V_p|) / |V|,
E_Rp = (|E| − |E_p|) / |E|,

where |MC|_p, |MC|, #MC_p, #MC are the size and number of maximum cliques respectively in the pruned graph G_p(V_p, E_p) and the original graph G(V, E), and t(·) denotes the minimum solver time. In particular, the similarity measures the percentage of cliques of maximum size retained after the pruning stage, the speedup measures the improvement in computational time (i.e., the minimum time spent by the solver on the original graph over the minimum time spent on the pruned graph), and N_Rp and E_Rp are the percentages of removed nodes and edges (as a consequence of removing the nodes).

Results

In this Section we present and discuss the results of LGP-MCE, obtained using the Geometric Deep Learning models described in subsection Model Architecture and trained on the ≈350 networks described in Section Dataset. We note that, in order to align LGP-MCE with smarter pruning strategies and make the comparisons more meaningful, we prune all the networks (including the ones used for training) by computing the K-core and removing all the nodes that do not belong to the innermost one (i.e., with the largest k). On the other hand, this makes the experimental data harder to prune further and, thus, to get higher speedups.
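A direct transcription of the evaluation measures, as reconstructed above, is straightforward; the values in the usage line below are hypothetical.

    # Transcription of the evaluation measures defined above.
    def pruning_metrics(n, m, n_p, m_p, num_mc, num_mc_p, t_orig, t_pruned):
        similarity = num_mc_p / num_mc          # share of maximum cliques kept
        speedup = t_orig / t_pruned             # min solver-time ratio
        node_pruning_rate = (n - n_p) / n       # N_Rp
        edge_pruning_rate = (m - m_p) / m       # E_Rp
        return similarity, speedup, node_pruning_rate, edge_pruning_rate

    # Hypothetical example: 99% of nodes removed, all maximum cliques kept.
    print(pruning_metrics(100_000, 1_000_000, 1_000, 6_000, 4, 4, 3600.0, 0.17))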
The models have a variable number of Graph Attention layers (from 1 to 4), a variable number of hidden units (from 5 to 40) and heads (from 5 to 15), while the Multi-Layer Perceptron used for classification is fixed to four layers with a decreasing number of units (from 100 to 1). The models are trained using the Adam optimizer with a learning rate of 0.003, a weight decay of 0.0001, a dropout rate of 0.3, and a batch size of 8 (where each network is a batch unit). We train the models for 200 epochs, select the model that provides the best balanced accuracy and ROC AUC on the training set, and stop the training if this score does not improve for 50 epochs. We stress that testing multiple models is a highly parallelizable task; thus, we can train and test multiple models in parallel in order to select the best one. Considering that the computational time may vary depending on the system load, we solve each instance 10 times and take the shortest time as the solver time.

In the experiments, the selection of the confidence threshold T is crucial: too low a value may result in a significant increase in computational time, since too few nodes are removed, while too high a value may result in a significant loss of cliques. Here, instead of fixing the threshold T a priori, we select it for each network in the test set, starting with a very high threshold (e.g., T = 0.99) and decreasing it until a congruous number of cliques is found. While this strategy is not optimal, it is simple and effective. In fact, finding the exact maximum cliques is not feasible in most real applications due to the size of the networks and the exponential complexity of the problem, so a best-effort approach is the only solution available. Moreover, very high threshold values translate into very high pruning rates, and thus into a significant reduction of the computational time. This makes testing multiple threshold values feasible, and allows us to select the threshold T for each network in the test set.

As shown by Table 2, in which we report the 18 networks where the pre-processing step retains all the maximum cliques and provides speedups greater than 1.5 times, LGP-MCE obtains the optimal solution up to 21K times faster by removing up to 99.1% of the nodes and 99.40% of the edges. Networks that are harder to solve, like bio-WormNet-v3, also benefit from very high speedups (760x) and pruning rates (67.65% for the nodes and 79.98% for the edges), which also translate into a significant reduction of the memory usage.

Table 2. Node pruning results that provide an optimal solution on the real-world network instances already pruned using the K-core. For the speedup, we enumerate the cliques ten times on both the original and pruned instances, and take the ratio between the minimum CPU times. The node pruning rate is the ratio of nodes predicted to not be part of a maximum clique, while the edge pruning rate is the ratio of edges removed as a direct consequence of the removal of nodes. The solver time columns are the time taken by the exact clique solver before and after the pruning (in seconds), while the s column is the speedup.
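The per-network threshold search described above can be sketched as a simple loop; score, prune, and count_max_cliques are hypothetical helpers standing in for the trained model and the exact solver, and the stopping rule is a simplification of the "congruous number of cliques" criterion.

    # Threshold-selection sketch: start high and relax until enough
    # cliques survive the pruning. All helper names are hypothetical.
    def select_threshold(G, score, prune, count_max_cliques,
                         min_cliques=1, start=0.99, step=0.05, floor=0.5):
        t = start
        while t > floor:
            Gp = prune(G, score, threshold=t)       # drop low-scored nodes
            if count_max_cliques(Gp) >= min_cliques:
                return t, Gp                        # enough cliques survive
            t -= step                               # relax and retry
        return floor, prune(G, score, threshold=floor)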
As said above, we should note that LGP-MCE may not guarantee the optimal solution, as for the networks in Table 3, where the pre-processing step removes some maximum cliques. Yet, this is a minor issue in practice, since LGP-MCE still provides extreme speedups (up to 100K times faster) and pruning rates (up to 99.99% for the nodes and 99.99% for the edges), and thus it is still very useful, being a very close approximation of the best solution. Moreover, we stress that a possible strategy could be to tune the threshold T in order to find at least one maximum clique (or a clique of close size) in a very short time, and then leverage this information to prune the original networks safely. In fact, since at least one clique of that size was already found, all the nodes with degree lower than that clique's size minus one are guaranteed not to belong to any maximum clique, and can be safely removed.

Table 3. Node pruning results that keep at least one maximum clique on the real-world network instances already pruned using the K-core. For the speedup, we enumerate the cliques ten times on both the original and pruned instances, and take the ratio between the minimum CPU times. The node pruning rate is the ratio of nodes predicted to not be part of a maximum clique, while the edge pruning rate is the ratio of edges removed as a direct consequence of the removal of nodes. The solver time columns are the time taken by the exact clique solver before and after the pruning (in seconds), while the s column is the speedup. The % column is the similarity measure defined in subsection Performance measures, and quantifies the percentage of maximum cliques retained by the pre-processing step.

Conclusion

In this study, we introduce a Geometric Deep Learning-based node pruning pre-processing step that aims at reducing the computational time of Maximum Clique Enumeration solvers. We show that LGP-MCE is able to obtain extreme speed-ups while keeping all the maximum cliques. Furthermore, our experiments show that improved performance can be obtained by customizing the confidence threshold, specifically tailored to the characteristics of the network under analysis. We also emphasize that combining our pruning method with a heuristic solver, for example, might enhance solving times but could also elevate the risk of missing some maximum cliques. This configuration could prove beneficial in tackling exceedingly large instances that might otherwise be difficult to approach. These two aspects deserve further investigation and will be the object of future works. Another direction for future work regards the extension of the method presented in this paper to find cliques in time-varying networks [41].
A Study of Conversational Implicature in the Movie "Flipped" Based on the Cooperative Principle and Politeness Principle

The film "Flipped" has been highly praised since its release, and it has always been on lists of "must-see movies". As a rising star in linguistics, pragmatics focuses on the analysis of how to understand and use different languages in special contexts, especially in different communication environments. From the perspective of Grice's cooperative principle and Leech's politeness principle, this paper analyzes the dialogues in the film, and explores the conversational implicature and implied meanings behind the dialogues. On the one hand, it helps us grasp the characters' personalities. On the other hand, it may help us better understand the importance of the cooperative principle and the politeness principle.

Introduction

Conversation is the most important form of communication in people's daily life, and how to carry out the most effective communication has been a problem discussed by oratory, rhetoric and linguistics since ancient times. Since the 20th century, with the rapid development of linguistics, the study of conversation has made great progress, and the research perspective is also developing towards diversification. As a new discipline, the rise of pragmatics marks a new stage of linguistic research (Yu Dongming, 2011). The cooperative principle and the politeness principle are two important theories in pragmatics, which are widely used to guide verbal communication and interpret conversational implicature. The cooperative principle focuses on describing how people abide by or violate the principle and its norms, while the politeness principle mainly explains why people express politeness at the cost of violating the cooperative principle (Wang Ya, 2011). In terms of movies, as Marcel Gabriel, a famous French film and television theorist, said, "film is not only an art, but also a language" (Marcel Gabriel, 1992). Art comes from life, and film is also a reflection of our real life. As the basic form of film art, dialogue plays an important role in presenting the theme of the film, depicting the image of the characters, and exerting artistic appeal. Therefore, film dialogue also reflects the features of daily communicative language, and there will be opposition and unity between the cooperative principle and the politeness principle. This paper therefore aims to analyze the dialogues in the movie "Flipped" based on the cooperative principle and politeness principle, so as to further understand the conversational implicature of the movie dialogue, grasp the characters' characteristics and understand the theme of the movie.

Cooperative Principle

The cooperative principle (CP) was put forward by H. P. Grice in 1975 in "Logic and Conversation." The cooperative principle describes how people interact with one another. As phrased by Grice, it states, "Make your contribution such as it is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged." Though phrased as a prescriptive command, the principle is intended as a description of how people normally behave in conversation. Following Kant's four philosophical categories of quantity, quality, relation and manner, Grice puts forward the four maxims of the cooperative principle. The first is the maxim of quantity, which includes: (a) make your contribution as informative as is required; (b) do not make your contribution more informative than is required.
The second is the maxim of quality, which includes: (a) do not say what you believe to be false; (b) do not say that for which you lack adequate evidence. The third is the maxim of relation, that is, make your contribution relevant. The fourth is the maxim of manner, which includes: (a) avoid obscurity of expression; (b) avoid ambiguity; (c) be brief; (d) be orderly. In a word, Grice's four maxims actually require people to pay attention to the informativeness, authenticity, relevance and clarity of conversation in daily communication (He Zhaoxiong, 2011). This kind of conversation is direct and efficient. For a long time, as the guiding principle of conversational behavior, the cooperative principle has been used to explain many language phenomena in daily communication. However, in the actual communication process, the cooperative principle cannot always be fully followed. Sometimes, if the conversation is too straightforward, it will lead to embarrassment and unhappiness in communication, and ultimately influence the actual communicative effect. Therefore, sometimes, in order to achieve a specific purpose of communication, people often violate the cooperative principle and express their intention in an implicit and indirect way (He Zhaoxiong, 1999).

Politeness Principle

In order to explain this phenomenon scientifically, Brown and Levinson further improved and supplemented the cooperative principle and put forward a theory of politeness. However, Brown and Levinson's analysis model cannot fully explain all the phenomena related to politeness; like other theoretical models, its explanation is incomplete. In 1983, Leech put forward the politeness principle (PP) on the basis of the two scholars' work. He revised and supplemented the cooperative principle reasonably and effectively to remedy its limitations. This principle states that people deliberately violate the cooperative principle in verbal communication, out of consideration of politeness, and let the hearer understand the speaker's real intention. The politeness principle includes the following basic maxims: the tact maxim (minimize cost to other, maximize benefit to other), the generosity maxim (minimize benefit to self, maximize cost to self), the approbation maxim (minimize dispraise of other, maximize praise of other), the modesty maxim (minimize praise of self, maximize dispraise of self), the agreement maxim (minimize disagreement between self and other), and the sympathy maxim (minimize antipathy between self and other).

The politeness principle improves conversational implicature theory and explains the problems that the cooperative principle cannot explain. Therefore, the relationship between the politeness principle and the cooperative principle is complementary. When explaining people's communicative behavior, we can combine the politeness principle with Grice's cooperative principle. For example, people deliberately violate the cooperative principle while actually following the politeness principle, which gives the conversation a deeper meaning, reflects specific communicative purposes and pragmatic effects, and shows different pragmatic implications. In Leech's words, the politeness principle can "rescue" the cooperative principle.

3. The Cooperative Principle and Politeness Principle in the Dialogues of "Flipped"

"Flipped" is adapted from the novel of the same name written by Wendelin Van Draanen and directed by Rob Reiner. It was released in 2010, describing the interesting "war" between a boy and a girl in adolescence, and it is also a story about growing up. The story starts with the hero Bryce Loski's family moving to the town where the heroine Juli Baker lives. Juli Baker falls in love with Bryce Loski at first sight. However, this is not the beginning of all good things. Juli Baker's approach to Bryce Loski made him feel uncomfortable.
As a result, the conflicts between the two protagonists deepened and the "war" was declared. But after a series of events, such as the "sycamore tree incident", Bryce Loski's attitude towards Juli Baker changed obviously. Finally, they let go of the past and fall in love with each other. This paper will select several dialogues from the film to analyze, to explain how the characters in the film violate the cooperative principle, how the politeness principle rescues the dialogues, and how these two principles achieve unity in the opposition of dialogues.

This first dialogue is from the beginning of the film. When the Bryce family first moved to the small town where Juli lives, Juli fell in love with Bryce at first sight and wanted to help Bryce carry luggage from the car. Meanwhile, Bryce's father, Steven, was also at the car. When Juli was ready to help carry one of the boxes, Steven stopped her. When Juli asked "don't you want some help", Steven answered that he didn't need help, and then went on to say "there's some valuable things in there", which violates the quantity maxim: do not make your contribution more informative than is required. Then, Juli pointed to another box and asked if it was possible to carry that box. Steven answered that it was not necessary, and then continued to say, "run home." In this way, he violates two maxims: one is the quantity maxim, the other is the relation maxim: make your contribution relevant. In such a situation, Steven uses the tact maxim (minimize cost to other) to rescue the conversation and makes up for it: "your mother's probably wondering where you are," which is intended to express that he is worried about Juli's mother looking for her. This sentence not only expresses Steven's true intention but also saves Juli's face. In fact, Steven's intention is to keep Juli away from his luggage. From this, we can also see that Steven is contemptuous of his neighbor.

Violation of Quality Maxim and Remedy with Approbation Maxim

Mark: That is so neat. How about that, huh, Bryce?

This conversation takes place when Bryce and his friends watch a snake swallow an egg. Mark sees the snake swallowing an egg. He sighs that this behavior is very quick and asks Bryce for his opinion. But Bryce's answer actually violates the quality maxim: do not say what you believe to be false. Instead, he uses the approbation maxim of the politeness principle (minimize dispraise of other and maximize praise of other), that is, he tries to narrow the differences between himself and others and makes remedies. In fact, Bryce felt that this scene was very disgusting and did not feel that it was neat. However, in order to save Mark's face, he agreed with Mark's evaluation, which also reflects that Bryce is a timid boy who is not good at expressing his ideas, perhaps influenced by his father's demanding personality.

(1) Bryce: It wasn't me. My dad didn't think it was worth the risk.

Juli didn't leave immediately after she delivered the eggs to Bryce again. At this time, Bryce happened to go out to throw away the garbage, so Juli found that he not only wanted to throw away the garbage but had also decided to throw away the eggs she had sent. Juli asked Bryce if they were her eggs, and Bryce answered her "Yeah. Yeah." This obviously violates the quality maxim: do not say what you believe to be false, because, in fact, the eggs had not been dropped into the trash can and broken.
However, in order not to hurt Juli, he chooses to violate this maxim and instead applies the tact maxim (minimize cost to others) to make remedies. At the same time, in this dialogue, Bryce's final answer, "It wasn't me. My dad didn't think it was worth the risk," also violates the quantity maxim: do not make your contribution more informative than is required. Bryce gives more information than the conversation requires, which also shows Bryce's irresponsibility and cowardice: after Juli found out that he had been throwing the eggs away as garbage, he directly shifted the responsibility to his father.

(2) Bryce: Hi. You look nice.
Juli: I heard you and Garrett making fun of my uncle in the library. And I don't wanna speak to you. Not now, not ever.
…
Juli: I'm sorry I was so angry when we first came in. I think everyone had a good time. Your mom was really nice to invite us. See you.

This dialogue happened when Bryce's mother Patsy invited Juli's family to dinner. Before that, Juli had just heard Bryce discussing her and her uncle with his friend Garrett in the school library. Bryce wanted to deny that he had ever liked Juli, so he echoed Garrett's ridicule of Juli's uncle. Therefore, when she was at Bryce's home, Juli ignored Bryce's praise and did not express her gratitude. Instead, she expressed her dissatisfaction to Bryce and said that she did not want to talk to him, which obviously violates the relation maxim of the cooperative principle: what you say should be relevant. However, after the dinner, Juli adopted the tact maxim of the politeness principle (minimize the damage to others), expressed her apology and remedied her previous violation of the cooperative principle, which also shows Juli's character of daring to love and hate, being kind and straightforward.

Violation of Manner Maxim and Remedy with Sympathy Maxim

Juli: Hi, Bryce. Brought you some more eggs.
Juli: Did your family like the first batch?
Bryce: Do you even have to ask?

When Juli delivers eggs to Bryce, she wants to ask whether his family liked the eggs she sent before, but Bryce's answer, "do you even have to ask," is very vague and can be interpreted as either liking or disliking the eggs she sent. This answer does not explicitly answer Juli's question, which obviously violates the manner maxim of the cooperative principle: avoid ambiguity. Nevertheless, it also embodies the sympathy maxim of the politeness principle (minimize antipathy between self and other): Bryce tries to save Juli's face by avoiding directly expressing his family's antipathy to her eggs and avoiding making her sad. Here we can see Juli's bright, outgoing character, but also Bryce's kind and warm-hearted side.

Conclusion

To sum up, this paper analyzes the characters' dialogues in the film "Flipped" by applying the cooperative principle and politeness principle, and further interprets the characters' personalities based on the two theories, which is helpful for deeply understanding the content of the film and feeling its language style and artistic essence. At the same time, through this analysis, we may draw the following conclusion: using the cooperative principle and politeness principle to analyze the dialogues of characters in a movie can help us better understand conversational implicature and may also improve our English language comprehension.
The BH3 mimetic HA14-1 enhances 5-fluorouracil-induced autophagy and type II cell death in oesophageal cancer cells

Background: Resistance to chemotherapeutic agents has been associated with a failure of cancer cells to induce apoptosis. Strategies to restore apoptosis have led to the development of BH3 mimetics, which inhibit anti-apoptotic Bcl-2 family members. We examined the sensitivity of three oesophageal cancer cell lines to 5-fluorouracil (5-FU) alone and in combination with the BH3 mimetic HA14-1.

Methods: Clonogenic assays, morphology, and markers of autophagy and apoptosis were used to assess the death mechanisms involved.

Results: In response to 5-FU treatment, OE21 cells induce apoptosis; KYSE450 and KYSE70 cells are more resistant and induce autophagy accompanied by type II cell death. Autophagy induction results in ineffective treatment, as substantial numbers of cells survive and re-populate. HA14-1 did not improve 5-FU treatment or reduce colony re-growth in the apoptosis-deficient KYSE70 cells. However, the sensitivity of OE21 (apoptotic) and KYSE450 cells (apoptosis deficient/type II cell death) was significantly improved. In OE21 cells, treatment with 5-FU and HA14-1 resulted in augmentation of apoptosis. In KYSE450 cells, the reduction in recovering colonies following combination treatment was due to the enhancement of type II cell death.

Conclusion: The efficacy of HA14-1 is cell line dependent and is not reliant on apoptosis induction.

The prognosis of oesophageal cancer remains poor because of the absence of molecularly targeted therapies and resistance to conventional DNA-damaging chemotherapeutics. This resistance to therapy has been associated with a failure to induce apoptosis in response to DNA damage (O'Donovan et al, 2011). Disruption of apoptotic signalling can be achieved by loss of tumour-suppressor activity and upregulation of survival signalling pathways (Adams and Cory, 2007). Ultimately, many signalling pathways that determine cell fate converge at the Bcl-2 family. In cancer, the balance between the negative regulators of apoptosis (Bcl-2/Bcl-xL/Mcl-1/A1/Bcl-w) and the positive regulators (Bax/Bak/BH3-only proteins) is disturbed such that initiation of apoptosis is difficult to achieve. Strategies to restore apoptosis have led to the development of BH3 mimetics (a novel class of therapeutics now in clinical trials) designed to inhibit the interaction between Bcl-2 family members at BH3 domains (Zhang et al, 2007; Lessene et al, 2008; Kang and Reynolds, 2009).

Although this strategy is designed to engage the apoptosis pathway in otherwise resistant cells, both Bcl-xL and Bcl-2 have also been implicated in the autophagy process. Both proteins can bind Beclin 1 (an autophagy regulator with a BH3 domain), and disruption of this interaction with ABT-737 (a BH3 mimetic) induces autophagy (Maiuri et al, 2007). The BH3-only proteins Bid, Bad and BNIP3 have also been reported to induce autophagy in cancer cells (Lamparska-Przybysz et al, 2005; Hamacher-Brady et al, 2007; Maiuri et al, 2007). ApoL1, an autophagy regulator with a BH3 domain, induces autophagic cell death (AuCD) when overexpressed (Wan et al, 2008). Autophagy is a survival mechanism that enables cells to tolerate adverse conditions such as starvation or withdrawal of survival signalling. However, it also has the potential to drive type II cell death/AuCD (Berry and Baehrecke, 2007; Wan et al, 2008). It is possible, therefore, that the cytotoxic effects of BH3 mimetics may not be limited to apoptosis induction.
In this study, we evaluated the effects of HA14-1 (a BH3 mimetic) on three human squamous oesophageal cancer cell lines. Two of these cell lines fail to undergo apoptosis in response to 5-fluorouracil (5-FU) but instead induce an autophagic response, which is accompanied by type II cell death. Despite the presence of type II cell death morphologies, these populations will retain a substantial number of surviving autophagic cells that will re-populate, thus rendering treatment ineffective (O'Donovan et al, 2011). In this study, we evaluated the potential of HA14-1 to reduce this survival and assessed the death mechanism involved.

Morphological examination of cell death

Morphological features of cells treated with 5-FU and the BH3 mimetic HA14-1 (Sigma-Aldrich) were examined by light microscopy. Cytospun cells were stained with Rapi-Diff (Braidwood Laboratories, Ringmer, East Sussex, UK). Apoptosis is characterised by cell shrinkage, chromatin condensation and DNA fragmentation into 'apoptotic bodies' within an intact plasma membrane. Type II cell death was identified by clear elevation of cytoplasmic vesicles, loss of cytoplasmic material, pyknosis of the nuclear material and an intact nuclear membrane (Clarke, 1990).

GeneChip array analysis

Affymetrix (High Wycombe, UK) gene array analysis of OE19, OE33, OE21 and KYSE450 was conducted in triplicate. RNA was extracted from untreated cells (RNeasy kit; Qiagen, Crawley, West Sussex, UK). RNA sample quality was assessed by bioanalyser and biophotometric quality control criteria. In vitro cDNA synthesis, biotin labelling, transcription, fragmentation and hybridisation to the Affymetrix GeneChip Human Genome U133 Plus 2.0 array were carried out by Almac Diagnostics (Craigavon, Co. Armagh, UK) (www.almac.com).

Colony formation assay/statistical analysis

The colony formation assay determines whether cells can recover from treatment. Following treatment, viable cells were re-seeded in fresh media (without drug) in a six-well plate (in triplicate) and allowed to grow for 12-14 days. Colonies were fixed in 96% ethanol, stained with ProDiff solution C (Braidwood Laboratories) and subsequently counted (presented as mean±s.e.m.).

Cell death induction with 5-fluorouracil

We investigated the cellular response of oesophageal cancer cell lines to 5-FU (40 μM) for 48 h and evaluated morphological features of apoptosis or non-apoptotic (type II) cell death (as described in the Materials and Methods section). The response of two of these cell lines (OE21 and KYSE450) to chemotherapeutics has previously been reported together with apoptosis and autophagy markers (O'Donovan et al, 2011).

Bcl-2 family expression in cell lines

The molecular determinants of apoptotic or autophagic responses to drug treatment in these cells are unknown. It is possible that high expression of negative regulators of apoptosis may impede apoptosis induction, with autophagy then induced as a default response to cellular damage. The concept of re-opening apoptotic signalling with a BH3 mimetic could therefore be explored, provided these Bcl-2 family members and a key positive effector of apoptosis (Bax or Bak) are expressed. We therefore evaluated basal expression levels of key Bcl-2 family members. The anti-apoptotic proteins Mcl-1, Bcl-2 and Bcl-xL were expressed in all cell lines, as was the pro-apoptotic protein Bax.
Bcl-2 and Bcl-xL expression was slightly lower in the OE21 (apoptosis-sensitive) cell line and Bax was slightly higher, suggesting that this imbalance may be important for apoptosis susceptibility (Figure 2A). Expression was also evaluated following treatment with 5-FU (40 μM) for 24 and 48 h (Figure 2B). There was no further loss of anti-apoptotic proteins in the OE21 cell line until 48 h, when apoptosis would be initiated, resulting in the degradation of many proteins. There was also no significant change in Bcl-2 family expression in the KYSE70 and KYSE450 cells.

Expression and induction of NOXA

We undertook Affymetrix array analysis (GeneChip Human Genome U133 Plus 2.0 arrays) to compare gene expression patterns in two apoptosis-competent (OE21 and OE33) and two apoptosis-incompetent oesophageal cancer cell lines (KYSE450 and OE19) (cell lines previously described in O'Donovan et al, 2011). This analysis included two cell lines from this study (OE21 and KYSE450). Multi-domain Bcl-2 family members were not identified as being differentially expressed in the two groups. The only BH3-only protein flagged was NOXA. Noxa is a stress-responsive BH3-only protein that can interact with Mcl-1, and with lower affinity with Bfl1/A1, to induce apoptosis (Ploner et al, 2008). We therefore analysed NOXA transcript levels in the KYSE70, KYSE450 and OE21 cell lines, as absence of NOXA expression could be a major factor in their failure to undergo apoptosis. Real-time PCR analysis indicated that KYSE70 and KYSE450 (which undergo autophagy) have lower basal NOXA expression. As NOXA is a damage-inducible gene, we also evaluated expression following treatment with 5-FU. Although NOXA expression was inducible in all cell lines in response to 5-FU (~4-fold) at 48 h, its expression in KYSE70 and KYSE450 was still well below the basal expression levels in OE21 cells (Figure 2C), suggesting there may be a deficiency in BH3-only signalling. We therefore evaluated the possibility of enhancing apoptosis with a mimetic that can inhibit the activities of Bcl-2 family members. HA14-1 is a small molecule inhibitor of Bcl-2 and has been shown to disrupt Bax and Bcl-2 interactions (Wang et al, 2000). HA14-1 has previously been shown to enhance apoptosis induction in various tumour cell lines (Kang and Reynolds, 2009).

Effect of HA14-1 (BH3 mimetic) on cytotoxicity and recovery of drug-treated populations

We have previously shown that cells that induce apoptosis (OE21) fail to recover from 5-FU treatment, but cells that induce autophagy (KYSE450) recover when the drug is withdrawn (O'Donovan et al, 2011). In this study, we evaluated the effects of HA14-1 on both cell viability at 24 and 48 h and the capacity of cells to recover in assays of clonogenic growth. Treatment times and drug concentrations were lowered in the more sensitive cell lines to achieve moderate recovery from 5-FU treatment alone and to enable comparison with combination treatment. All cell lines were treated with HA14-1 (20 μM) in the presence and absence of 5-FU. As a single agent, HA14-1 (20 μM) has minimal effects on clonogenic recovery in all cell lines (Figure 3). KYSE70 cells were treated for 24 h at a range of concentrations of 5-FU (10-30 μM). The combination of 5-FU (10-30 μM) and HA14-1 (20 μM) did not further sensitise these cells, nor did it alter their ability to recover compared with 5-FU alone (Figure 3A). KYSE450 cells are more drug resistant and show considerable recovery following 48-h treatment with 40 μM 5-FU.
The combination of 5-FU (20-40 μM) and HA14-1 (20 μM) significantly impeded the recovery of KYSE450 cells compared with 5-FU treatment alone (Figure 3B). OE21 is the most drug-sensitive cell line; however, when the duration of drug treatment is reduced to 24 h, limited recovery can be observed following treatment at 10, 20 and 30 μM 5-FU. When 5-FU (10-30 μM) and HA14-1 (20 μM) are combined, OE21 cells are further sensitised and recovery is significantly reduced (Figure 3C). It is important to note that following treatment with either 5-FU or HA14-1 alone or in combination, the numbers of viable cells (for both KYSE450 and OE21 cells) were not significantly reduced by the combination treatment at the 24- and 48-h time points (Supplementary figure); yet, their ability to recover and form colonies is compromised. These data suggest that there are clear benefits to combining the chemotherapeutic 5-FU with HA14-1 in both the KYSE450 and OE21 cell lines; however, this combination regime is of no benefit in KYSE70 cells.

Effects of HA14-1 (BH3 mimetic) on apoptosis and type II cell death morphologies

It is currently unclear whether HA14-1 re-activates apoptosis in previously apoptosis-incompetent cells or induces an alternative death mechanism. We therefore looked for evidence of apoptosis, autophagy and type II cell death in the cell lines that respond to combination treatment. Cells were treated with HA14-1 (20 μM) in the presence and absence of 5-FU for 24 and 48 h (Figure 4). In KYSE450 cells, HA14-1 (20 μM) alone resulted in minor accumulation of cytoplasmic vesicles; 5-FU treatment induced accumulation of cytoplasmic vesicles (Figure 4A(i), upper right panel), which is enhanced in combination with HA14-1 (20 μM) (Figure 4A(i), lower right panel). These morphological features were quantified by counting cells treated for 48 h (40 μM 5-FU ± 20 μM HA14-1) (Figure 4A(ii)). These data demonstrate that the addition of HA14-1 to 5-FU treatment rarely induced apoptosis and that the principal morphology in affected cells was type II cell death. The more drug-sensitive OE21 cells respond to 5-FU treatment (30 μM) by inducing apoptosis (Figure 4B(i), upper middle and right panels). HA14-1 (20 μM) alone caused mild induction of cytoplasmic vesicles in a small number of cells (Figure 4B(i), lower left panel), but the combination treatment of both 5-FU and HA14-1 at 24 and 48 h resulted in persistence of apoptosis (Figure 4B(i), lower middle and right panels, and 4B(ii)). These morphological features were quantified by counting cells treated for 24 h (30 μM 5-FU ± 20 μM HA14-1) (Figure 4B(ii)). Levels of apoptosis were low at 24 h (<5%) and were not significantly different between 5-FU and the combination of 5-FU and HA14-1. However, the recovery of treated OE21 cells (previous section) was clearly affected, suggesting an augmented apoptotic death in the presence of HA14-1. Morphology at 48 h again shows a predominance of apoptosis, with similar levels in both 5-FU and the HA14-1 combination (~30%). OE21 cells treated for 48 h with 5-FU (30 μM) alone will not recover in clonogenic assays (data not shown). It is noteworthy that if exposure to 5-FU can be maintained for a sufficient length of time, there will be no benefit to adding HA14-1, as apoptosis will be induced in these cells by the chemotherapeutic alone.

Markers of autophagy and apoptosis

To confirm the presence of early autophagy in KYSE450 cells, we examined LC3 distribution in drug-treated cells (Figure 5A).
Both 5-FU (40 μM) and HA14-1 (20 μM) alone resulted in a punctate distribution of LC3, indicating the formation of early autophagosomes. The combination of 5-FU and HA14-1 resulted in a modest enhancement of the number of cells with LC3 staining (Figure 5A, lower middle and right panels). In OE21 cells, HA14-1 (20 μM) alone induced early features of autophagy, an effect that was confirmed by redistribution of LC3. However, LC3 staining was not enhanced in combination-treated cells (data not shown). Levels of active caspase 3 were quantified in the KYSE450 and OE21 cell lines (Figure 5B). There was no significant activation of caspase 3 in the KYSE450-treated cells (Figure 5B(i)). A modest increase in active caspase 3 was detected in the OE21 cells treated with the combination of 5-FU (20-30 μM) and HA14-1 (20 μM) (Figure 5B(ii)), which reflects the apoptosis levels observed at 24 h (Figure 4B(ii)). These data indicate that when cancer cells are apoptosis competent, HA14-1 can further enhance the early induction of this program. However, if 5-FU is present for sufficient time, apoptosis will be induced and may be adequate to prevent recovery of the cells. In apoptosis-incompetent cells, there is either no response (KYSE70) or there is an enhancement of autophagy and type II cell death (KYSE450). Our current analysis of the expression of Bcl-2 family members could not predict which cell lines would respond, and clearly an enhanced cell death response can be independent of apoptosis.

DISCUSSION

The design of BH3 mimetics was based on the rationale that, by acting as inhibitors of the anti-apoptotic Bcl-2 family members, these compounds would activate apoptosis. However, it is currently unclear whether this is their mode of action in all cancer cells or whether they could also induce alternative forms of cell death. In addition, recent reports of interaction with autophagy regulators underscore the need for a re-evaluation of their activity, as autophagy can be both protective and detrimental to cell viability. We have previously shown how cell death mechanisms can influence the chemotherapeutic response and recovery of oesophageal cancer cells (O'Donovan et al, 2011). In this study, we investigated whether the addition of a BH3 mimetic (HA14-1) could influence the cellular response to 5-FU. In apoptosis-competent OE21 cells, the combination of HA14-1 and 5-FU promoted early apoptosis and reduced clonogenic survival compared with 5-FU alone. Of the two apoptosis-incompetent/type II cell death-inducing cell lines, only KYSE450 was sensitised by the combination of HA14-1 and 5-FU, resulting in reduced recovery. The cell death observed in KYSE450 cells was type II, and this was preceded by elevated autophagy. These data suggest that HA14-1 can enhance the toxicity of 5-FU in the absence of apoptosis by advancing autophagy and type II cell death. However, as is the case with KYSE70 cells, not all apoptosis-incompetent/type II cells will be susceptible to HA14-1. Other studies have reported that HA14-1 induces apoptosis in cancer cell lines, and it has been shown to enhance the cytotoxicity of several compounds (Kang and Reynolds, 2009). The precise mechanism of action of HA14-1 is unclear. HA14-1 is an inhibitor of Bcl-2 and is thought to disrupt Bax and Bcl-2 interactions (Wang et al, 2000; Manero et al, 2006). Expression of Mcl-1 may impact the effectiveness of HA14-1, but this is poorly understood and may be cell line dependent (Simonin et al, 2009).
To the best of our knowledge, no study has examined the consequences of combination treatment with HA14-1 and 5-FU. Several BH3 mimetics are currently either in pre-clinical development or have advanced to clinical trials (Zhang et al, 2007; Ghiotto et al, 2010). These include gossypol and its analogues (targeting Bcl-2, Bcl-xL and Mcl-1), GX15-070 (Obatoclax; Gemin X Biotechnologies, Montreal, QC, Canada), which binds to all anti-apoptotic Bcl-2 family members, and ABT-737 (which binds to Bcl-2, Bcl-xL and Bcl-w but not to Mcl-1). ABT-737 is one of the most advanced Bcl-2 inhibitors in clinical development (the oral version is ABT-263; Abbott Laboratories, Chicago, IL, USA). Cell lines and tumours expressing high Mcl-1 levels are resistant to ABT-737 (van Delft et al, 2006; Lessene et al, 2008). In lymphoblastic and small cell lung cancers, ABT-737 was found to enhance apoptosis (Leber et al, 2010). The majority of studies indicate that ABT-737 does not directly induce cell death and has limited value as a mono-therapy. It can, however, enhance susceptibility to death and has been shown to be synergistic with chemotherapeutics and radiation (Oltersdorf et al, 2005). The precise mechanism of action of mimetics is unclear, as some have weak affinities for their putative targets (Zhai et al, 2006). ABT-737 has also been shown to induce autophagy. It disrupts the interaction between Bcl-2/Bcl-xL and Beclin 1, thereby releasing Beclin 1 to initiate autophagy (Maiuri et al, 2007). Both HA14-1 and ABT-737 have been reported to stimulate multiple pro-autophagic signal transduction pathways: they activate the nutrient sensors Sirtuin 1 and AMP-dependent kinase, inhibit mammalian target of rapamycin, deplete cytoplasmic p53 and trigger the IκB kinase (Malik et al, 2011). A new analogue of HA14-1 (sHA 14-1) has also been reported to induce ER stress and calcium release, which may contribute to its mechanism of action (Hermanson et al, 2009). In keeping with these and our findings, recent reports have shown that treatment with HA14-1 alone can induce autophagy in leukaemic, osteosarcoma, ovarian and cervical cancer cell lines (Kessel and Reiners, 2007; Simonin et al, 2009; Malik et al, 2011). HA14-1, gossypol and GX15-070 have been reported to promote caspase-independent cell death and do not require Bax/Bak for their cytotoxicity (van Delft et al, 2006; Ghiotto et al, 2010). In the MCF-7 cell line model, gossypol (a natural BH3 mimetic) induced both Beclin 1-dependent and Beclin 1-independent cytoprotective autophagy (Gao et al, 2010). In contrast, in malignant glioma cells gossypol potentiated the cell death induced by temozolomide, and autophagy contributed to this type of cell death (Voss et al, 2010). Also, analysis of androgen-independent prostate cancer cells showed that gossypol interrupted Beclin 1 and Bcl-2/Bcl-xL interactions and that gossypol-induced autophagy was dependent on Beclin 1 and Atg5 (Lian et al, 2011). In acute lymphoblastic and acute myeloid leukaemia cells, combination treatment with GX15-070 activated both apoptosis and autophagy (Heidari et al, 2010; Wei et al, 2010). In non-small cell lung cancer cell line models, GX15-070 induced Atg7-dependent autophagy independently of Beclin 1 and Bax/Bak (McCoy et al, 2010). GX15-070 has been used in combination with chemotherapeutics in in vitro studies of oesophageal cancer. It has been reported that GX15-070 induced autophagy and inhibited the growth of oesophageal cancer cells.
It was also found to synergise with either carboplatin or 5-FU through enhanced apoptosis. In that model, inhibiting autophagy increased the levels of apoptosis, suggesting that autophagy induced by GX15-070 treatment may have a cytoprotective effect (Pan et al, 2010). This is in contrast to the present study, in which we found that combining HA14-1 with 5-FU increased the levels of autophagy and subsequent type II cell death, suggesting that the observed reduction in clonogenic survival in KYSE450 cells is due to the enhancement of this form of cell death. Collectively, these studies indicate that activation of the apoptotic signalling cascade may not be the sole activity of BH3 mimetics. In our study, combining HA14-1 with 5-FU in KYSE450 cells resulted in enhanced autophagy and type II cell death. However, not all chemo-resistant cells (KYSE70) benefited from this type of treatment, and a deficiency of the DNA damage-responsive BH3-only protein Noxa was not predictive of response. Clearly, unless we understand the molecular determinants of response, these compounds will be used clinically without adequate predictive markers. The consequences of autophagy induction also seem to vary, with some studies, like ours, finding enhanced type II cell death, whereas others report a protective effect. In this study, HA14-1 was found to be of benefit in 2 out of 3 oesophageal cancer cell lines when used in conjunction with 5-FU. Induction of apoptosis was not required for chemo-sensitisation. It could be argued that apoptosis-competent tumours will be more treatable with chemotherapeutics alone and that the real benefit of these mimetics may be in enhancing type II cell death in apoptosis-incompetent cells. The combination of HA14-1 and 5-FU could be a potential treatment for oesophageal cancer if the problems pertaining to HA14-1's poor solubility and stability can be overcome. Newer analogues are already showing promise in this regard (Tian et al, 2008). In addition, molecular markers truly predictive of response need to be established. Alternatively, combining 5-FU with a more potent inducer of type II cell death may be an alternative treatment strategy for oesophageal cancer.

This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution-NonCommercial-Share Alike 3.0 Unported License.
Bacterial Antagonistic Species of the Pathogenic Genus Legionella Isolated from Cooling Towers

Legionella pneumophila is the causative agent of Legionnaires' disease, a severe pneumonia. Cooling towers are a major source of large outbreaks of the disease. The growth of L. pneumophila in these habitats is influenced by the resident microbiota. Consequently, the aim of this study was to isolate and characterize bacterial species from cooling towers capable of inhibiting several strains of L. pneumophila and one strain of L. quinlivanii. Two cooling towers were sampled to isolate inhibiting bacterial species. Seven inhibitory isolates, belonging to seven distinct species, were obtained through serial dilution plating and streaking on agar plates. The genomes of these isolates were sequenced to identify potential genetic elements that could explain the inhibitory effect. The results showed that the bacterial isolates were taxonomically diverse and that one of the isolates may be a novel species. Genome analysis showed a high diversity of antimicrobial gene products in the genomes of the bacterial isolates. Finally, testing different strains of Legionella demonstrated varying degrees of susceptibility to the antimicrobial activity of the antagonistic species, which may be due to genetic variability between the Legionella strains. The results demonstrate that though cooling towers are breeding grounds for L. pneumophila, the bacteria must contend with various antagonistic species. Potentially, these species could be used to create an inhospitable environment for L. pneumophila and thus decrease the probability of outbreaks occurring.

Introduction

Legionella pneumophila is the causative agent of Legionnaires' disease, a severe and potentially fatal pneumonia. This organism is an aquatic bacterium ubiquitously found in engineered water systems (EWS), where it can colonize, survive, and grow. Examples of EWS that are known sources of L. pneumophila include water distribution systems, cooling towers, water reservoirs, misters, shower heads, and water faucets [1]. These systems usually produce aerosols that can be inhaled by people in their vicinity. If contaminated with L. pneumophila, inhaled aerosols can lead to the dissemination and colonization of the bacteria in the lungs, resulting in an atypical pneumonia known as Legionnaires' disease. L. pneumophila is one of the more significant causes of waterborne diseases in developed countries [2,3]. The number of cases of Legionnaires' disease has been on the rise in recent years. For instance, the United States reported an increase of more than fivefold in incidence from 2000 to 2017, and a 1.5-fold increase from 2013 to 2017 was observed in the European Union [4,5]. The number of cases of Legionnaires' disease is believed to be underreported due to the lack of a common definition of the disease and efficient diagnostic

ponds, rivers, wells, and drinking water, and demonstrated that 178 of them had anti-Legionella activity [30]. Their findings also demonstrated that a high diversity of waterborne bacteria, mainly from the Gammaproteobacteria and Firmicutes groups, can have antimicrobial activity against L. pneumophila. The latter two phyla are known to produce many secondary metabolites and antimicrobial peptides. This suggests that there may be a large pool of different antimicrobials that have yet to be characterized in these environments. So far, the presence of antagonistic bacterial species towards L. pneumophila in cooling towers has not been studied.
We have previously analyzed the bacterial community of 18 cooling towers and demonstrated that several bacterial taxa were negatively correlated with Legionella [20]. Higher levels of these bacterial taxa were associated with lower levels of Legionella spp. Therefore, our goals were to isolate bacterial species from these cooling towers and test their antimicrobial activity towards L. pneumophila and Legionella quinlivanii. The second goal was to determine, through whole genome sequencing, the potential genetic elements that could cause the inhibition.

Table 1 shows the characteristics of the L. pneumophila and L. quinlivanii strains used in this study. The strains are characterized by a strain number, a strain name, the species, the sequence-based type (SBT), and the environment from which they were isolated. Moreover, ATCC33152 was isolated during the first outbreak of Legionnaires' disease in Philadelphia in 1976, and ID120292 caused the 2012 outbreak in Quebec City, Canada [31]. The other strains were either isolated from the environment (E) or from patients (P) and were obtained from the Laboratoire de santé publique du Québec (LSPQ), Canada. Patient strains were either outbreak strains or sporadic human cases, and environmental strains were obtained from contaminated cooling towers typed in a previous study [32]. The L. quinlivanii strain was obtained from a bronchoalveolar lavage specimen from a patient [15]. The strains were grown on ACES-buffered CYE (charcoal yeast extract) plates (yeast extract 10 g, ACES buffer 10 g, activated charcoal 2.0 g, L-cysteine 0.4 g, ferric pyrophosphate 0.25 g in 1 L of water, pH 6.90) at 30 °C for 4 days.

Isolation of Inhibitory Bacterial Strains of Legionella pneumophila

Cooling tower water samples were examined for their potential to contain bacterial species that could inhibit L. pneumophila on plate. Briefly, water was collected in sterile 1 L bottles from the basins of two cooling towers in Montreal, Canada, and one model cooling tower built in our lab [33]. The water samples were vigorously shaken and serially diluted in filter-sterilized sample water, to a dilution of 10⁻⁶. CYE agar plates were layered with 5 mL of soft agar (0.5% agar in distilled water) inoculated with 100 µL of an OD600nm 0.2 suspension of L. pneumophila in AYE (yeast extract 10 g, ACES buffer 10 g, L-cysteine 0.4 g, ferric pyrophosphate 0.25 g in 1 L of distilled water, pH 6.90). CYE medium was chosen as it is the only agar medium known to support growth of Legionella species. The soft agar was left to solidify for 15 to 30 min in a biological safety cabinet. The dilutions were spread on the CYE agar by gently flooding 1 mL of solution onto the soft agar layer. The dilution was spread by gently shaking and tilting the agar plate, after which the excess liquid was aspirated with a pipette. The plates were left to dry for 30 min in a biological safety cabinet and then incubated at 30 °C for 4 days. Inhibiting colonies could be identified by the formation of an inhibition zone on the L. pneumophila lawn. These colonies were re-streaked three times on CYE plates to obtain pure cultures. Stock cultures of these isolates were made in 15% glycerol in AYE medium.

Identification of Bacterial Isolates

Bacterial isolates were first identified by sequencing their 16S rRNA gene. Briefly, bacterial DNA was extracted by lysing a single colony in 25 µL of 0.5 M NaOH.
The suspension was incubated at room temperature for 10 min, then neutralized with 25 µL of 1 M Tris-HCl, pH 7.5, and diluted with 450 µL of sterile distilled water. The 16S rRNA gene was amplified by PCR using the bacterial primers 27F (5′-AGAGTTTGATCMTGGCTCAG-3′) and 1492R (5′-TACGGYTACCTTGTTACGACTT-3′). The PCR product was then sent for Sanger sequencing at the Plateforme Génomique de l'Université Laval, Canada. The sequences were then compared to the NCBI nucleotide database using BLAST [34].

Testing Inhibition of Isolates with Different Legionella Strains

We further tested the isolates' antimicrobial activity towards eight different strains of L. pneumophila and one strain of L. quinlivanii. The Legionella strains were inoculated on CYE agar using the soft-agar approach described above. Pure cultures of the inhibiting isolates were suspended in AYE at an OD600nm of 0.2, and 10 µL was spotted in the centre of the agar plates. The spots were left to dry for 15 to 30 min and the plates were then incubated at 30 °C for 4 days. After incubation, the inhibition zone diameters were measured to compare antimicrobial activity and susceptibility levels between Legionella strains.

Whole Genome Sequencing of Anti-Legionella Isolates

Genomic DNA was extracted from the isolates using the Wizard genomic DNA purification kit (Promega, Madison, WI, USA). The genomic DNA quality was verified on a 0.8% agarose gel and the quantity was measured using the Quant-iT PicoGreen dsDNA assay kit (Thermofisher, Waltham, MA, USA). The DNA library for whole genome sequencing was prepared using the Nextera XT DNA library prep kit (Illumina, San Diego, CA, USA), following the manufacturer's instructions. The library was run on an Agilent Technologies 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA) to evaluate proper DNA fragment size. After evaluation of proper fragmentation, the library was manually normalized to 2 nM and then pooled. The pooled library was denatured with 0.2 N (normality) NaOH and incubated for 5 min at room temperature. The solution was neutralized with 200 mM Tris-HCl (pH 7.0). The denatured library was diluted to 20 pM with HT1 buffer and diluted again to a loading concentration of 12 pM. PhiX was diluted to 4 nM in HT1 buffer and denatured with 0.2 N NaOH at room temperature for 5 min. The denatured PhiX was then diluted to 20 pM with HT1 buffer (Illumina, San Diego, CA, USA). The denatured library was spiked with 1% PhiX control. The solution (600 µL) was loaded into a MiSeq Reagent kit V3 (600 cycles) and sequenced on a MiSeq platform (Illumina, San Diego, CA, USA). The raw reads have been uploaded to NCBI's Sequence Read Archive (SRA) under the bioproject accession number PRJNA787617. Read quality was evaluated using FastQC [35]. The forward and reverse reads were trimmed using Trimmomatic (v0.39) with the following parameters: LEADING:10 TRAILING:10 SLIDINGWINDOW:5:20 MINLEN:36 [36]. The forward and reverse reads were assembled de novo using SPAdes (v3.13) for each isolate [37]. The reads were first corrected using the "--only-error-correction" option and then assembled using the "--only-assembler" option. When assembling the reads, the k-mer length was set to 21, 33, 55, 77, 99, and 127. The assembled genomes were uploaded to the MiGA (Microbial Genome Atlas, v0.3.12) server, and the NCBI Prok module was used to identify the taxonomy and novelty of the isolates [38].
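The read-processing steps above, together with the contamination clean-up applied to SPF474 in the next paragraph, amount to a small command-line pipeline. The following Python sketch illustrates one way such a pipeline could be chained together; it is not the authors' script, and the file names, output paths, and Trimmomatic jar location are illustrative assumptions. Only the tool parameters stated in the text are taken from the source.

```python
# Minimal sketch (assumed layout) of the QC/trim/assembly pipeline described
# above, plus the reference-mapping clean-up used for SPF474.
import subprocess

def run(cmd, **kw):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

def qc_trim_assemble(sample, r1, r2, outdir):
    # 1. Read-quality reports with FastQC
    run(["fastqc", r1, r2, "-o", outdir])
    # 2. Trimming with the parameters stated in the text
    out = [f"{outdir}/{sample}_R1_paired.fq.gz", f"{outdir}/{sample}_R1_unpaired.fq.gz",
           f"{outdir}/{sample}_R2_paired.fq.gz", f"{outdir}/{sample}_R2_unpaired.fq.gz"]
    run(["java", "-jar", "trimmomatic-0.39.jar", "PE", r1, r2, *out,
         "LEADING:10", "TRAILING:10", "SLIDINGWINDOW:5:20", "MINLEN:36"])
    # 3. Two-pass SPAdes: error correction first, then assembly of the
    #    corrected reads, with the k-mer lengths stated in the text
    kmers = "21,33,55,77,99,127"
    run(["spades.py", "-1", out[0], "-2", out[2], "-k", kmers,
         "--only-error-correction", "-o", f"{outdir}/{sample}_ec"])
    # The corrected read paths below are placeholders: SPAdes writes its
    # corrected reads, with its own file-name suffixes, under corrected/
    run(["spades.py", "-1", f"{outdir}/{sample}_ec/corrected/{sample}_R1.fq.gz",
         "-2", f"{outdir}/{sample}_ec/corrected/{sample}_R2.fq.gz",
         "-k", kmers, "--only-assembler", "-o", f"{outdir}/{sample}_asm"])

def keep_reads_mapping_to_reference(r1, r2, ref_fasta, outdir):
    # For SPF474: map reads to the B. amyloliquefaciens IT-45 reference
    # (NC_020272.1) and keep pairs where both mates map; -F 12 excludes the
    # read-unmapped and mate-unmapped flag bits. The retained pairs are
    # exported back to FASTQ for re-assembly.
    run(["bwa", "index", ref_fasta])
    bwa = subprocess.Popen(["bwa", "mem", ref_fasta, r1, r2],
                           stdout=subprocess.PIPE)
    run(["samtools", "view", "-b", "-F", "12",
         "-o", f"{outdir}/mapped.bam", "-"], stdin=bwa.stdout)
    bwa.stdout.close()
    run(["samtools", "sort", "-n", f"{outdir}/mapped.bam",
         "-o", f"{outdir}/mapped.nsort.bam"])
    run(["samtools", "fastq", "-1", f"{outdir}/clean_R1.fq",
         "-2", f"{outdir}/clean_R2.fq", f"{outdir}/mapped.nsort.bam"])
```

In this sketch the filtering keeps only read pairs that map to the trusted reference, which mirrors the text's statement that unmapped (presumed contaminant) reads were removed before re-assembly.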
Of note, when analyzing with MiGA, one of the bacterial isolates, SPF474, was observed to have a high percentage of contamination in its genome, but the phylogenetic analysis was still able to identify it at a high percentage level and with high confidence. Thus, contaminating reads were removed from SPF474. This was done by mapping all reads to the genome of Bacillus amyloliquefaciens IT-45 (RefSeq NC_020272.1 from NCBI) using BWA-mem and Samtools [39,40]. The unmapped reads were removed, and the mapped reads were de novo assembled using SPAdes (v3.13). This new assembly was uploaded to MiGA for phylogenetic analysis. The B. amyloliquefaciens IT-45 strain was used as it is considered a representative genome according to RefSeq, and Sanger sequencing together with the MiGA results indicated that SPF474 was highly related to Bacillus amyloliquefaciens. The assembled genomes were also uploaded to antiSMASH (v5.0) in order to identify, annotate, and analyze secondary metabolite biosynthesis gene clusters, using the relaxed detection strictness parameter [41]. The ClusterBlast, KnownClusterBlast, and SubClusterBlast options were used to evaluate homology with known antimicrobial sequences.

Inhibition Assay

Seven bacterial colonies inhibiting L. pneumophila growth on plates were isolated from cooling towers and a model cooling tower. Their ability to inhibit different SBTs of L. pneumophila and L. quinlivanii was tested. Figure 1 shows photographic examples of the inhibition assay when testing with L. pneumophila strain LpS256P. Figure 2 shows the diameter of the inhibition zone created by each bacterial isolate for each strain of Legionella tested. As expected, the inhibition of L. pneumophila varied according to the bacterial isolate tested. For instance, B. amyloliquefaciens and B. subtilis created large inhibition zones ranging from 7 cm to 9 cm (total inhibition). On the other hand, Chryseobacterium sp. and B. paralicheniformis both created intermediate inhibition zones, between 2.5 cm and 4 cm in diameter. The other isolates (Cupriavidus sp., S. epidermidis, and Stenotrophomonas sp.) created small inhibition zones of 2 cm or smaller. The results also suggest that susceptibility to the anti-Legionella bacterial isolates varied according to the SBT and the source from which the L. pneumophila strain was isolated, i.e., patient or environment. The variation with SBT was most observable with the Cupriavidus sp. isolate, which inhibited around half of the L. pneumophila strains tested. SBT37 (LpPhili), SBT1 (LpS1E and P), and SBT213 (LpS213P) had no susceptibility to Cupriavidus sp., whereas SBT256 (LpS256E and LpS256P) and SBT62 (LpS62E and LpS62P) were susceptible, giving inhibition zones of around 2 cm. In another example, B. paralicheniformis created an inhibition zone of between 2.2 and 2.5 cm for SBT1 (LpS1E and P), but the inhibition was between 3.1 and 3.3 cm for the other L. pneumophila strains. Interestingly, Chryseobacterium sp. created different-sized inhibition zones for LpS1E and LpS1P. In this case, the environmental strain was less susceptible, creating a 2.6 cm inhibition zone, than the patient-isolated strain, which created a 4 cm zone. This suggests that different strains within the same SBT may have variability in susceptibility to certain antimicrobials. The L. quinlivanii strain had larger inhibition diameters than the L. pneumophila strains for most of the bacterial isolates tested. Though the strains of L. pneumophila and L. quinlivanii tested did not come from the cooling towers sampled, an interesting experiment would be to evaluate whether these same cooling towers had a Legionella population and assess their susceptibility to the anti-Legionella bacterial isolates, to determine whether the laboratory findings have a role in the real world.
Taxonomic Classification of Bacterial Isolates

After sequencing and assembly of the anti-Legionella bacterial isolates, taxonomy was inferred using the Microbial Genome Atlas [38]. Table 2 shows the results from this analysis. The taxonomic classification of three of the isolates could be identified with a high level of confidence. Indeed, SPF437, SPF476, and SPF497 had average nucleotide identities (ANI) above 99.5% at p-values below 0.01. The p-value indicates the probability of our query genome being wrongly classified with the reference genome from NCBI's RefSeq database [38]. Consequently, SPF437, SPF476, and SPF497 were classified as Bacillus subtilis, Staphylococcus epidermidis, and Bacillus paralicheniformis, respectively. As mentioned previously, SPF474 had some level of contamination in the whole genome sequencing results according to MiGA; however, the ANI was 99.91% with B. amyloliquefaciens LL3 (NC_017190) with a p-value of 8.02 × 10⁻⁵. After clean-up of the genome, the MiGA results showed that SPF474 had a 99.98% ANI with B. amyloliquefaciens LL3 at a p-value of 8.02 × 10⁻⁵, and very low contamination levels. Furthermore, the additional sequencing of its 16S rRNA gene also indicated that SPF474 was related to Bacillus amyloliquefaciens. Indeed, the reverse and forward sequences aligned to B. amyloliquefaciens strain RESI-50 (accession: MT542326.1; e-value: 0.0; percent identity: 99.88%; query cover: 100%) and B. amyloliquefaciens strain 3820 (accession: MT538668.1; e-value: 0.0; percent identity: 99.62%; query cover: 100%), respectively. Consequently, the results suggest that the contamination had little to no effect on identifying the isolate. Though not shown here, when inputting the genome into antiSMASH before clean-up, several sequences detected in its genome were similar to gene sequences found in Legionella species. This may indicate that the foreign sequences came from a cross-contamination event during one of the preparation steps for whole genome sequencing. The taxonomic classification of SPF498, SPF499, and SPF475 was less confident, as their ANI percentages varied between 86% and 95% at p-values sometimes above 0.05. There is a high probability that SPF475 and SPF498 belong to the genera ascribed, as their p-values were below 0.01 when comparing at the genus level (SPF498 p-value = 0.0063 for Stenotrophomonas; SPF475 p-value = 0.0085 for Chryseobacterium), but they probably belong to species not represented in the database. On the other hand, SPF499 had an ANI of 86.84% with Cupriavidus pauculus (NZ_CP033969) at p-values of 0.468 and 0.0338 at the species level and the genus level, respectively.
Though still debatable, the species boundary using ANI is usually set at a cut-off of 96% or higher [42]. The ANI being smaller than this cut-off suggests that SPF499 might be a new species, and even a new genus, depending on the cut-off used for the ANI result and the p-value.

Identification of Putative Secondary Metabolites

In order to identify putative antimicrobial compounds produced by the different bacterial isolates, the assembled genomes were analyzed using the antiSMASH server (Antibiotics and Secondary Metabolite Analysis Shell) [41]. This tool allows the identification and analysis of biosynthetic gene clusters (BGCs) within bacterial genomes. Some of these BGCs may allow the production of antimicrobial compounds, such as antibiotics or bacteriocins [41]. antiSMASH also BLASTs the identified clusters against databases of known antimicrobial sequences. Table 3 summarises the results obtained from antiSMASH for the different bacterial isolates. The BGCs identified were categorized by their similarity percentage to known BGCs in the database: BGCs with more than 70% similarity were placed in the high similarity group, BGCs with less than 70% similarity in the low similarity group, and BGCs with 0% similarity were categorized as unassigned. Overall, the genomes were found to contain several BGCs, with numbers varying greatly between isolates, from 3 to 16 BGCs. The B. paralicheniformis and B. subtilis isolates had the most BGCs, and some were highly homologous to known antimicrobials, such as bacitracin or fengycin. On the other hand, Cupriavidus sp. (SPF499) and Chryseobacterium sp. (SPF475) also possessed high numbers of BGCs, but these had low similarity levels to any of the antimicrobial gene products in the database. antiSMASH detected only three BGCs each for the Stenotrophomonas (SPF498) and Staphylococcus (SPF476) isolates. We examined the diversity of the BGCs present throughout the different genomes by counting the total number of each type of BGC identified. The results showed that a total of 17 different types of BGCs could be identified; these can be visualized in Figure 3. Unsurprisingly, non-ribosomal peptide synthetases (NRPSs) were the most abundant antimicrobial clusters found in the different genomes. NRPSs have a wide range of biological activity and are known to produce several antibiotics, such as penicillins or cephalosporins [43]. Polyketide synthases (PKSs), terpenes, and bacteriocins were the next most abundant BGCs. Finally, the rest of the BGCs were found at abundance levels of fewer than 5 counts, and 7 BGCs (lassopeptide, CDPS, ladderane, phenazine, phosphonate, microviridin, and resorcinol) were counted only once.

Discussion

In this study, we isolated and characterized seven bacterial species from cooling tower water samples capable of inhibiting L. pneumophila and L. quinlivanii on CYE plates. Their genomes were sequenced to get a better understanding of the potential antimicrobials that could be produced, and the genes associated with these antimicrobials. So far, research has shown that a wide variety of organisms from EWS can inhibit L. pneumophila; research has not looked into antagonistic species of L. quinlivanii. Our study shows that cooling towers can harbour anti-L. pneumophila species and that these species can also inhibit other Legionella species (such as L. quinlivanii), indicating that these inhibitory strains could potentially be used against several Legionella pathogens.
The findings confirm that a wide variety of bacterial species can inhibit L. pneumophila in water systems. Indeed, Firmicutes, Bacteroidetes, and Proteobacteria were all identified in this study, with most species belonging to the Firmicutes. This is in agreement with Corre et al. [30]. It is important to note that since we only tested inhibition on CYE growth medium, the actual diversity of inhibitory organisms may be underrepresented. Indeed, the number of unculturable microorganisms is far higher than the number of culturable organisms [44,45]. In our case, the methodology created a bias for the selection of non-fastidious mesophiles due to the incubation at 30 °C on nutrient-rich media. An alternative strategy would be to isolate microbes on different media and slowly acclimate them to CYE agar before doing the Legionella-inhibition assay. Alternatively, it may be more representative, but challenging, to evaluate Legionella survivability in co-infection models (host/Legionella/isolate) or in lab-grown biofilm experiments, or to create new growth media other than CYE and test for inhibition at different growth temperatures. However, the isolation and identification of SPF499 suggests that novel and uncharacterized non-fastidious bacterial species can still be discovered. The antiSMASH results revealed that the bacterial isolates contained an array of BGCs potentially coding for a wide variety of antimicrobials. Seventeen different BGC types were uncovered, suggesting that Legionella spp. could be inhibited through diverse mechanisms. More specifically, the data suggest that the inhibition could occur through direct mechanisms. For instance, NRPSs, PKSs, terpenes, and bacteriocins were the most abundant antibacterial clusters identified in the isolates. These compounds usually act directly on the bacterial cells, targeting specific elements and causing bactericidal or bacteriostatic effects [46-49]. On the other hand, the identification of several siderophore clusters could indicate that L. pneumophila may be inhibited indirectly, such as through competition for nutrients. For instance, staphyloferrin genes were identified in the S. epidermidis (SPF476) isolate. Staphyloferrin is a powerful siderophore used by Staphylococcus species and thus could prevent L. pneumophila from acquiring iron for growth [50].
The variety of BGCs identified could also indicate that a combination of different mechanisms explains the inhibition of L. pneumophila. For instance, certain species may be able to out-compete L. pneumophila through production of bactericidal agents, faster acquisition of nutrients, and faster growth rates. A potential follow-up experiment would be to knock out the suspected bactericidal BGCs from the different isolates to assess whether the isolates use a variety of mechanisms to inhibit L. pneumophila. Some of the BGC sequences identified were previously shown to inhibit L. pneumophila on plate. For instance, Loiseau et al. showed that surfactin produced by B. subtilis can create an inhibition zone on a lawn of L. pneumophila [22]. However, our isolate of B. subtilis created a much larger inhibition zone. This could be due to methodological differences, such as the use of a soft agar layer in the present study allowing for better diffusion of the surfactin on the growth plate. It is also possible that our strain may produce other antimicrobials that work in synergy to inhibit L. pneumophila growth. As shown in Table 3, our B. subtilis isolate contains BGCs related to surfactin, bacilysin, and bacillibactin; potentially, these compounds could work in combination to inhibit L. pneumophila. Notably, bacilysin is an antibiotic that works against a wide range of bacteria, and bacillibactin is a siderophore capable of chelating iron [51,52]. Of note, the B. paralicheniformis isolate was shown to contain a BGC associated with the production of lichenysin, a lipopeptide surfactant almost identical to surfactin [22,53]. Pure lichenysin has antibacterial activity against several species, such as Acinetobacter sp., Bacillus sp., and Pseudomonas sp., but this has not been tested in Legionella species [53]. However, due to the high similarity of surfactin and lichenysin, it is our opinion that lichenysin is most likely the compound causing the inhibition of L. pneumophila on plate; further research is required to confirm this phenotype. Several of the BGCs identified had very low similarity, or no relation, to any known antimicrobial biosynthetic clusters from the MIBiG database used in antiSMASH. For instance, Chryseobacterium sp. (SPF475) contained several BGCs that were similar to desferrioxamine, flexirubin, and carotenoid clusters, but at very low similarity levels (below 50%). Similarly, only low-similarity BGCs were identified in Stenotrophomonas sp. Therefore, the Legionella-inhibition phenotype of these isolates could be due to novel compounds, suggesting that cooling towers are a rich source of novel and uncharacterized antimicrobial compounds that could potentially be used for clinical or industrial purposes. In conclusion, several bacterial isolates showing anti-Legionella activity were isolated from different cooling tower water samples. This study confirms that cooling towers can harbour a diversity of anti-Legionella bacterial species from different phyla. Whole genome sequencing revealed that several known antimicrobials were potentially causing the inhibition of L. pneumophila on plate; however, the number of uncharacterized antimicrobials was much greater, suggesting a potential pool of novel antimicrobials. As these inhibitory species of L. pneumophila could be used for industrial purposes, several interesting follow-up studies could be pursued.
First, it would be of value to evaluate the specificity of these antimicrobials within the Legionella genus, either for different strains of L. pneumophila or for different Legionella species, and to examine whether these same antimicrobials can inhibit other pathogens. Some of our results indicate that there was variability among the different strains tested, as well as with L. quinlivanii, suggesting some specificity; additional research is required to get a better overview. Furthermore, consideration should be given to evaluating the pathogenicity of the isolates towards humans, as pathogenic species would probably cause unwanted consequences for downstream applications. Finally, more practical knowledge concerning how to directly use inhibitory species would be beneficial for industrial purposes. Thus, several questions would be interesting to pursue: evaluating the efficiency of these inhibitory isolates at reducing the L. pneumophila load in different water systems, gauging whether seeding is better than tweaking the physicochemical parameters of the water system so that these species can colonize it naturally, analyzing whether these species are harmful to humans, and assessing whether using the antimicrobial directly, alongside routine biocides, is cost-effective. Author Contributions: Conceptualization, K.P. and S.P.F.; methodology, K.P. and S.P.F.; analysis, K.P., S.P.F. and S.L.; writing-original draft preparation, K.P.; writing-review and editing, K.P., S.P.F. and S.L.; funding acquisition, S.P.F. All authors have read and agreed to the published version of the manuscript.
2022-02-11T16:05:18.720Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "4a8a4a2cac16b303d8ce8bf2451b9d4976f0a0be", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/10/2/392/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b81dde9fc12545dec53b62dc98e02699cb03ead9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
48354536
pes2o/s2orc
v3-fos-license
Carbon Emissions in China’s Construction Industry: Calculations, Factors and Regions The production of construction projects is carbon-intensive and interrelated with multiple other industries that provide related materials and services. Thus, the calculation of carbon emissions is relatively complex, and the consideration of other factors becomes necessary, especially in China, which has a massive land area and regions with greatly uneven development. To improve the accuracy of the calculations and illustrate the impacts of the various factors at the provincial level in the construction industry, this study separated carbon emissions into two categories, the direct category and the indirect category. The features of carbon emissions in this industry across 30 provinces in China were analysed, and the logarithmic mean Divisia index (LMDI) model was employed to decompose the major factors, including direct energy proportion, unit value energy consumption, value creation effect, indirect carbon intensity, and scale effect of output. It was concluded that carbon emissions increased, whereas carbon intensity decreased dramatically, and indirect emissions accounted for 90% to 95% of the total emissions from the majority of the provinces between 2005 and 2014. The carbon intensities were high in the underdeveloped western and central regions, especially in Shanxi, Inner-Mongolia and Qinghai, whereas they were low in the well-developed eastern and southern regions, represented by Beijing, Shanghai, Zhejiang and Guangdong. The value creation effect and indirect carbon intensity had significant negative effects on carbon emissions, whereas the scale effect of output was the primary factor creating emissions. The factors of direct energy proportion and unit value energy consumption had relatively limited, albeit varying, effects. Accordingly, this study reveals that the evolving trends of these factors vary in different provinces; overall, our research results and insights can support government policy makers and decision makers in minimizing the carbon emissions of the construction industry. Introduction The issue of global climate change has attracted increasing attention in recent years, largely because of its serious consequences with respect to the natural and human environment [1]. Moreover, China, which became the world's largest carbon emitter in 2006, was responsible for 64.8% of the increase in global carbon emissions between 2007 and 2012 [2]. In 2013, China emitted approximately 10.2 million kilotons (kt) of carbon emissions, which made up 28.6% of the world's total carbon emissions and was nearly double that of the United States (5.2 million kt) [3]. As China is facing great pressure from the international community on the issue of carbon emissions, the government is making efforts to control them. In 2014, China pledged, in the US-China Joint Announcement on Climate Change, that it would reach its peak of carbon emissions around 2030 [4]. The construction industry is one of the key industries contributing to carbon emissions in China [5]. The important role of the construction industry in the Chinese economy and the large amount of resources consumed by the industry have resulted in a substantial amount of carbon emissions. Chuai et al. [6] estimated that the carbon emissions from the construction industry made up 27.9% to 34.3% of the overall carbon emissions in China between 1995 and 2010.
Reducing the construction industry carbon emissions can significantly help China reach its emissions reduction target. Moreover, the construction industry is a carbon-intensive industry that consumes a considerable amount of energy from other industries. Therefore, to provide a basis for Chinese policymakers to formulate appropriate policies to reduce the industry-wide carbon emissions, it is important to accurately calculate the carbon emissions and identify the driving factors of the carbon emissions in the Chinese construction industry. Some of the earlier literature has discussed the driving forces of China's construction industry carbon emissions on a national scale [7][8][9][10]. However, it should be noted that the increase in China's national carbon emissions is collectively shaped by the dynamics of the emissions from all provinces, municipalities and autonomous regions that comprise the country. Whether the increase in carbon emissions in the various provinces can be effectively mitigated directly affects the achievement of the national emissions reduction targets. Moreover, China's provinces differ markedly in terms of economic development levels, industrial structures, energy consumption patterns, and many other factors. For example, while some provinces have entered the post-industrial age, others are still accelerating through the middle phase of industrialization [11]. Furthermore, there are some provinces whose industrial structures are dominated by high-tech and tertiary industries, whereas others remain heavily reliant on heavy industry [12,13]. Some provinces have gradually enhanced their levels of clean energy utilization, while others are still heavily dependent on coal consumption [14]. Hence, it is evident that a one-size-fits-all emissions reduction policy will not suffice to address the carbon emissions problems of all of China's provinces and regions. To address this gap, this paper separates construction industry carbon emissions into two categories, the direct category and the indirect category, to improve the accuracy of calculations based on the panel data of 30 provinces in China between 2005 and 2014. The factors that influence the construction industry carbon emissions are then analyzed and identified. The results promote a better understanding of the relationship between construction industry emissions and economic development, which can contribute to developing more effective policies for each province engaged in the construction industry. Literature Review As China's construction industry has played an important role in mitigating global climate change, increasingly more researchers have focused on the calculation of carbon emissions and their impact on the construction industry in China. The literature is divided into two categories, calculation of construction industry carbon emissions and carbon decomposition analysis. Calculation of Construction Industry Carbon Emissions The construction industry is projected to contribute more than 31% of the total carbon emissions by 2020 and 52% by 2050 [1]. In Europe, the construction industry accounts for over 40% of the total energy consumption [15] and contributes approximately 50% of the carbon emissions released into the atmosphere [16].
Meanwhile, the carbon emissions from the construction industry in Korea comprise 23% of the country's total carbon emissions [17], and in 2011, the UK's construction industry-related activities accounted for an estimated 47% of its total carbon emissions [18] and 42.6 megatons of carbon emissions [19]. To limit carbon emissions and save energy in the construction industry, a series of assessments have been established [20]. At the macro level, input-output modelling and life cycle assessments have been most commonly used [21,22]. For instance, Nassen and Holmberg [23] assessed energy use and carbon emissions in Sweden using input-output modelling, and Chen and Zhang [24] analysed the carbon emissions of China in 2007 based on a multi-scale, input-output approach [25]. However, existing studies do not focus on the characteristics of the construction industry, nor do they provide adequate insights for aggregated carbon emission policy making at the level of the construction industry. Furthermore, very limited research has been performed to estimate the construction industry carbon emissions at the provincial level in China. Considering the characteristics of the construction industry, which has close relationships with other industries, the calculation was divided into two parts, i.e., the direct carbon emissions and the indirect carbon emissions, the latter of which are from industries related to the construction industry. As our calculations are based on the data from 30 provinces in China, our results can guide future development and direct effective low-carbon policy formulation for managing carbon emissions in the construction industry at the provincial level. Carbon Decomposition Analysis Three main groups of researchers have examined the relationship among carbon emissions, energy consumption and economic growth in the literature [26]. In the first group, Grossman and Krueger [27] investigated the relationship between economic growth and environmental pollution using urban area data from 42 countries. A second group of the extant literature has investigated the relationship between energy use and economic growth and presents distinct hypotheses [28,29]. For example, the unidirectional causality running from energy use to economic growth is called the growth hypothesis, which asserts that energy performs a key role in promoting economic activity and that a reduction in the energy supply will reduce economic growth [30,31]. A third group of existing studies has inspected the causal relationships among carbon emissions, energy consumption and economic growth, including Ang [32], who studied Malaysia; Jalil and Mahmud [33] and Zhang and Cheng [34], who examined China; Soytas and Sari [35], who studied the USA; Ocal et al. [36], who reported on Turkey; Chindo et al. [37], who examined Nigeria; Kuo et al. [38], who studied Hong Kong; and Albiman et al. [39], who examined Tanzania. However, these studies focused on describing the relationship and did not reveal the combined effect of economic factors on carbon emissions. Some studies have discussed economy-wide assessments of the factors shaping national carbon emissions, whereas earlier studies have discussed several techniques to decompose carbon emissions. The popular decomposition methods can be divided into two groups, namely, methods linked to the Laspeyres index and methods linked to the Divisia index.
One advantage of the LMDI method, which is a Divisia index method, is its ability to satisfy the factor-reversal test and the lack of unexplainable residuals in the results [40]. Based on these advantages, the LMDI has been widely applied to the study of carbon emissions at national levels. For instance, Lv et al. [41] used the LMDI to decompose the volume of historical carbon emissions in China and to analyse the country's carbon intensity from 1980 to 2010. Li [42] used a distance function approach to decompose the change in carbon emissions in China. Gonzalez et al. [27] tracked the European Union carbon emissions through an LMDI decomposition analysis of changes in carbon emissions from 2001 to 2010. To date, there have been some attempts to use the LMDI in China's construction industry, such as the study by Zhao et al. [43], which decomposed China's urban residential energy consumption, Zha et al. [44], who used index decomposition analysis (IDA) to investigate the driving forces of residential carbon emissions in China, and Cai et al. [12], who decomposed China's construction industry energy consumption. However, all of these studies focused on similar factors and found that the main factors were GDP per capita, industrial structure, population and technology level. They did not analyse the factors based on the industry's characteristics. Therefore, this study selects the LMDI as a decomposition tool to investigate the construction industry carbon emissions based on the calculation method of carbon emissions and the characteristics of the construction industry. We decompose five major factors that directly or indirectly affect the carbon emissions of construction activities, including the direct energy proportion, unit value energy consumption, value creation effect, indirect carbon intensity, and scale effect of output. These five factors illustrate the economic effects, energy effects and carbon emissions effects, which, in turn, characterize the relationship between economic development and the carbon emissions of the construction industry. This paper enriches the existing provincial characteristics analyses by synthetically considering the features of the provincial construction industry carbon emissions and their underlying driving forces. Our results and insights can be used to better deploy provincial efforts in the construction industry to abate emissions at the national level. Calculation of Construction Industry Carbon Emissions In this paper, the construction industry carbon emissions are divided into two categories. The first category is for those emissions that are directly generated by the construction industry. The second category is for carbon emissions from industries related to the construction industry. These industries include the mining and washing of coal, extracting of petroleum and natural gas, mining and processing of metal ores, refining of petroleum, coking and nuclear fuel processing, manufacturing of raw chemical materials and chemical products, manufacturing of non-metallic mineral products, smelting and pressing of metals, manufacturing of metal products, as well as the transporting, storage and postal services of products.
The calculation process of the direct carbon emissions of the construction industry is as follows:

C_D = Σ_i E_i × NCV_i × A_i × O_i × (44/12) (1)

where C_D denotes the direct carbon emissions of the construction industry, i is the type of energy, E_i represents the consumption of energy i, NCV_i represents the average lower calorific value of energy i, A_i is the carbon content per unit heat of energy i, O_i represents the oxidation rate of energy i, and 44/12 is the molecular weight ratio of CO2 to carbon. The calculation of indirect carbon emissions from construction-related industries is divided into two steps. First, we select the nine industries related to the construction industry, namely, mining and washing of coal; extracting of petroleum and natural gas; mining and processing of metal ores; refining of petroleum, coking and nuclear fuel processing; manufacturing of raw chemical materials and chemical products; manufacturing of non-metallic mineral products; smelting and pressing of metals; manufacturing of metal products; and transporting, storage and postal services of products. The equation for the direct carbon emissions of industry j, which takes the same form as Formula (1), is written as follows:

C_{D,j} = Σ_i E_{i,j} × NCV_i × A_i × O_i × (44/12) (2)

where C_{D,j} denotes the direct carbon emissions of industry j and E_{i,j} represents the use of energy i by industry j. Next, based on the input-output analysis, the indirect carbon emissions of the construction industry are calculated using the following equation [45]:

C_I = Σ_j (C_{D,j} / IOV_j) × y_j × CIOV (3)

where C_{D,j} denotes the direct carbon emissions of industry j, j is the category of industries, IOV_j refers to the total output value of industry j, CIOV represents the construction industry output value, and y_j is the total consumption coefficient of industry j from the construction industry, which can be derived from the input-output tables. Finally, the total carbon emissions of the construction industry are as follows:

C_T = C_D + C_I (4)

This study then uses the construction industry carbon intensity as the dependent variable, based on panel data for China's 30 provinces and municipalities from 2005 to 2014 (Hong Kong, Macao, Taiwan, and Tibet are not included due to a lack of data). The construction industry carbon intensity is defined as follows in Equation (5):

CI = C_T / CIOV (5)

where CI represents the construction industry carbon intensity, CIOV refers to the construction industry output value, and C_T denotes the total carbon dioxide emissions of the construction industry. The provincial construction industry output values are obtained from the China Statistical Yearbook. The Decomposition of the Construction Industry Carbon Emissions: The LMDI Method An index decomposition analysis, which has been widely applied to investigate the driving forces of carbon emissions, decomposes an aggregate indicator into several related driving factors and quantifies their respective contributions to the change in the aggregate indicator. Carbon emission factors can be decomposed into many elements. In this paper, these elements were reorganized into six terms that directly or indirectly affect the carbon emissions of construction activities: the direct energy intensity effect (I), the direct energy proportion (H), the unit value energy consumption (F), the value creation effect (N), the indirect carbon intensity (G), and the scale effect of output (P); as shown below, I is constant, leaving five major factors.
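To make the accounting concrete, the following minimal Python sketch applies Formulas (1)-(5) and, anticipating the decomposition identity in Formula (6) below, the standard LMDI-I machinery. All numerical values, fuel coefficients and function names here are illustrative placeholders of ours, not data or code from this study; the actual coefficients come from national standards and the input-output tables.

import math

# Illustrative (NOT official) fuel coefficients: lower calorific value NCV_i,
# carbon content per unit heat A_i, oxidation rate O_i.
FUEL_COEFFS = {
    "raw_coal": (20.908, 26.37e-3, 0.94),
    "diesel":   (42.652, 20.20e-3, 0.98),
}

def direct_emissions(energy_use):
    # C_D = sum_i E_i * NCV_i * A_i * O_i * 44/12  (Formulas (1) and (2))
    return sum(E * NCV * A * O * 44.0 / 12.0
               for fuel, E in energy_use.items()
               for NCV, A, O in [FUEL_COEFFS[fuel]])

def indirect_emissions(related_industries, CIOV):
    # C_I = sum_j (C_Dj / IOV_j) * y_j * CIOV  (Formula (3)): each related
    # industry j contributes its direct-emission intensity, weighted by the
    # total consumption coefficient y_j and scaled by the construction
    # industry output value CIOV.
    return sum(C_Dj / IOV_j * y_j * CIOV
               for C_Dj, IOV_j, y_j in related_industries)

C_D = direct_emissions({"raw_coal": 120.0, "diesel": 45.0})
C_I = indirect_emissions([(5.0e4, 8.0e5, 0.12), (2.0e4, 3.0e5, 0.08)], CIOV=6.0e5)
C_T = C_D + C_I        # Formula (4)
CI = C_T / 6.0e5       # Formula (5): carbon intensity

The LMDI contributions themselves reduce to a few lines once the logarithmic mean is defined; note how the constant factor I automatically contributes nothing, as stated above:

def log_mean(a, b):
    # Logarithmic mean L(a, b) = (a - b)/(ln a - ln b); the limit L(a, a) = a
    # is what makes the LMDI free of unexplainable residuals.
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(terms_t, terms_t1):
    # Additive LMDI-I over the direct and indirect sub-terms of Formula (6).
    # Each sub-term maps factor name -> value; its emissions equal the product
    # of its factors. The returned contributions sum exactly to
    # C_T(t+1) - C_T(t).
    effects = {}
    for f0, f1 in zip(terms_t, terms_t1):
        w = log_mean(math.prod(f1.values()), math.prod(f0.values()))
        for name in f0:
            effects[name] = effects.get(name, 0.0) + w * math.log(f1[name] / f0[name])
    return effects

# Illustrative factor values for two years.
direct_t    = {"I": 2.7, "H": 0.9, "F": 0.05, "N": 1.5, "P": 100.0}
direct_t1   = {"I": 2.7, "H": 0.8, "F": 0.04, "N": 1.8, "P": 140.0}
indirect_t  = {"G": 0.6, "N": 1.5, "P": 100.0}
indirect_t1 = {"G": 0.5, "N": 1.8, "P": 140.0}
print(lmdi_additive([direct_t, indirect_t], [direct_t1, indirect_t1]))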
The carbon emissions are decomposed as in Formula (6), and the corresponding notations and meanings of each factor are presented in Table 1:

C_T = C_D + C_I = (C_D/E_dir) × (E_dir/E) × (E/IG) × (IG/S) × S + (C_I/IG) × (IG/S) × S = I × H × F × N × P + G × N × P (6)

where E_dir denotes the direct energy consumption of the construction industry, E is the total energy consumption of the construction industry, IG refers to the total output value of the construction industry, and S is the completed area of the construction industry, which can be derived from the China Statistical Yearbook. When the carbon emissions of the construction industry are decomposed, the carbon emission factor of the construction industry, I = C_D/E_dir, remains constant according to the calculation rules. Therefore, the direct energy intensity effect (I) has no effect on the changes in construction industry carbon emissions, and this study does not consider it after the calculation. In the LMDI, the difference in carbon emissions between a target year (t + 1) and the datum year (t) is decomposed as follows:

ΔC_T = C_T^{t+1} − C_T^{t} = ΔC_H + ΔC_F + ΔC_N + ΔC_G + ΔC_P, with ΔC_x = Σ_k L(C_k^{t+1}, C_k^{t}) × ln(x_k^{t+1}/x_k^{t}) (7)

where k runs over the direct and indirect terms of Formula (6) and L(a, b) = (a − b)/(ln a − ln b) is the logarithmic mean. This study uses the multiplicative form for the decomposition as denoted in Formulas (8) to (12); for each factor x ∈ {H, F, N, G, P}, the multiplicative form is as follows:

D_x = exp( Σ_k [ L(C_k^{t+1}, C_k^{t}) / L(C_T^{t+1}, C_T^{t}) ] × ln(x_k^{t+1}/x_k^{t}) ), so that D_H × D_F × D_N × D_G × D_P = C_T^{t+1}/C_T^{t} (8)-(12)

Construction Industry Carbon Emissions Characteristics in Different Provinces Based on the calculated direct and indirect construction industry carbon emissions of 30 provinces in China from 2005 to 2014, the detailed results and discussion are as follows (Table 2). There are significant differences in the carbon emissions of the construction industry at the provincial level, and the indirect carbon emissions of the construction industry accounted for 90% to 95% of all carbon emissions in every province (Figure 1). Table 3 indicates that the provinces with the highest construction industry carbon emissions between 2005 and 2014 were Jiangsu and Zhejiang, where the highest construction industry carbon emissions were approximately 500 million tons in 2014. The carbon emissions from the construction industry increase as the level of economic development rises; it is evident that rapid economic expansion is the dominant driving force behind the acceleration of the growth of the carbon emissions of the construction industry at this stage. In contrast, Sichuan, Liaoning, Shanxi and Shandong, which are located in central China, have a low efficiency level with respect to energy utilization; therefore, their construction industry carbon emissions range between 100 and 200 million tons, while their construction industry carbon intensity is relatively high. Specifically, the construction industry carbon emissions fluctuated between 2005 and 2014. For example, in Shanghai, the construction industry carbon emissions peaked in 2010, reaching 81.69 million tons, and then declined by 21.5% to 64.12 million tons in 2014. Additionally, Liaoning, Shandong, Hebei, Guangdong, Shaanxi, the Inner-Mongolia Autonomous Region, Gansu and Guangxi all exhibited a similar trend, as their construction industry carbon emissions were lower in 2014 than in 2013. The primary cause of the reductions is the technological progress, energy conservation and emissions reduction policies of these provinces. The above identified areas of construction industry carbon emissions accounted for more than half of the total construction industry carbon emissions, and their rate of increase accounted for more than 60% compared to the other provinces.
Because the northwest and central regions, including Qinghai, Hunan and Ningxia, are located inland, their technical levels and resource utilization, factors whose low values generally result in high construction industry carbon emissions and high construction industry carbon intensity, were relatively low. These results suggest that it is necessary to control environmental problems and enhance carbon emission efficiency in these regions. In contrast, as the eastern coastal areas, such as Shanghai, Jiangsu and Zhejiang, are characterized by high levels of economic development, foreign capital utilization, population quality, energy efficiency and environmental management standards, they exhibit high construction industry carbon emissions and low construction industry carbon intensity (Figure 2). Hence, these differences should be considered when developing policies for low-carbon development. The National Level Construction Industry Carbon Emissions Decomposition Results The output scale effect factor plays a significant role in increasing construction industry carbon emissions, with a relative contribution of 35.36.
In contrast, the indirect carbon intensity factor, which is the primary factor responsible for reducing construction industry carbon emissions, contributes to decreasing the growth of construction industry carbon emissions, with an integrated contribution of −27.46. The other factors, such as the direct energy proportion, unit value energy consumption and value creation effect, also contribute to offsetting the growth, even though their inhibitory effects are quite small, with integrated contributions of −0.11, −0.6 and −1.2, respectively. Furthermore, the factor contributions to the construction industry carbon emissions in China change dynamically over time. From 2005 to 2014, the growth in carbon emissions was 5.99, while that of the output scale effect was 35.36. It is clear that the rapid expansion of the construction industry was the dominant force behind the acceleration of the growth of carbon emissions during this stage. In addition, the annual combined effects of these factors reveal an increasing trend during the 2005 to 2014 period. Specifically, the output scale effect demonstrates an obvious upward trend in 2008, with this factor's contribution increasing from 9.71 in 2007 to 24.30 in 2008. However, the construction industry carbon emissions then fell due to the economic crisis, which resulted in a decline in the contribution of the output scale effect. As illustrated in Figure 3, the mitigating effects of indirect carbon intensity occurred during most years, and thus contributed −27.46 to the total growth. The greater inhibiting effects of the carbon emissions composition of the construction industry also led to a greater reduction in carbon emissions for the period 2005 to 2014, which is the most important cause of the slowdown in the growth of carbon emissions for this factor. This suggests that an increase in indirect emissions will promote a reduction in carbon emissions under the condition of a certain construction industry output value. Additionally, during the 2005 to 2014 period, the contributions of the factors resulted in a growth trend that peaked in 2008. This peak is attributed to the global economic crisis of 2008 and the subsequent slowdown in economic development in China. Thus, the industry output value and carbon emissions of the construction industry, which is a main industry of China's national economy, were also significantly lower. Accordingly, the contribution of indirect carbon intensity increased in 2008. Output Scale Effect The growth of the output scale effect, which is the most important factor, is closely related to the output of the construction industry. The positive effects of this factor demonstrate an obvious upward trend and relatively stable development in the provinces characterized by high levels of economic development, energy efficiency and environmental management standards, such as Beijing, Tianjin, Zhejiang and Jiangsu, as presented in Figure 4. In contrast, this factor showed a fluctuating trend in other provinces, such as Jiangxi, Liaoning and Yunnan. This phenomenon may be affected by the national economy and by provincial policies in provinces where the development of the construction industry is still immature. In particular, the output scale effect was prominent in 2010: to withstand the impact of the global economic crisis, the government strongly supported the development of the construction industry in 2010.
Therefore, the development scale of the construction industry in 2010 was relatively prominent, that is, the output scale effect peaked. Furthermore, the annual average contribution of the output scale effect increased between 2005 and 2014, and the contributions of this factor exhibit a growth trend that peaks in 2008, as do the national trends. On the other hand, this factor's contributions fluctuate around an average of 6 to 8 in the low-carbon provinces established by the state. These provinces, which include Hubei and Shaanxi, are located inland, and their technical levels and resource utilization are relatively low, factors that result in low carbon emissions efficiency. Therefore, the output scale effect of the construction industry has a great influence in these provinces. Indirect Carbon Intensity The indirect carbon intensity presents the largest rate and range of decline in the growth of construction industry carbon emissions. As presented in Figure 5, the significant fluctuations in the contributions of indirect carbon intensity are primarily concentrated in the central, northeast and western regions, including the provinces of Shanxi, Inner-Mongolia and Gansu. These provinces, along with the low-carbon provinces, exhibit low levels of technology and resource utilization, which are factors that result in low carbon emissions efficiency. In 2010, the construction industry in China was undergoing a qualitative change from traditional construction to industrialization. Therefore, most provinces adjusted their industrial structure and energy structure as required. These modifications brought about the turning point in the contributions of indirect carbon intensity.
Especially in Shanxi, Inner-Mongolia and Gansu, which are located in the central and northern regions, adjustments in construction methods have brought about drastic changes in indirect carbon intensity. We expect that indirect carbon intensity will provide strong support for carbon reduction in the provincial construction industry in the future. In contrast, as the regions located in the eastern coastal areas, such as Guangdong and Shanghai, exhibit high levels of economic development, this factor makes only a moderate contribution in these provinces. Between 2005 and 2014, the indirect carbon intensity contributed greatly to decreasing emissions, and its contribution to the decrease in the fluctuations of the total carbon emissions of the construction industry was considerable during this period. Additionally, in 2010, the rate of increase improved dramatically, but this trend was reversed in 2011. Finally, the contributions generated an annual average contribution of −20% to the total growth. Value Creation Effect The value creation effect is second only to the indirect carbon intensity as a contributing factor to the reduction of construction industry carbon emissions. As presented in Figure 6, this contribution in the eastern coastal areas, such as Shanghai, Jiangsu, Zhejiang and Guangdong, is quite different from that of the Beijing and Tianjin provinces. Furthermore, the value creation effect that contributes to the decrease in the construction industry carbon emissions assumes a greater value in the Beijing and Tianjin provinces.
The reason is that these provinces reacted quickly to policies regarding economic development and environmental resources following the global economic crisis in 2010. The contributions of the value creation effect have experienced similar trends in the other provinces, with the annual average integrated contributions presenting an obvious upward trend between 2005 and 2014. As indicated in Figure 6, the contribution of the value creation effect plays a remarkable role in decreasing the carbon emissions in the northeast and western regions, such as Gansu and Inner-Mongolia, where the increments account for −4.72 of the total increment. These provinces are located inland, and their technical levels and resource utilization are relatively low, which results in a single industrial structure. In the other regions, the annual average integrated contributions are approximately 1.0 out of the total increment, although a fluctuating upward trend occurred between 2005 and 2014. Unit Value Energy Consumption Carbon emissions from the unit value energy consumption factor increased by 172.94% (the ratio of the peak to the valley in Figure 7) over the 2005 to 2014 period, which reflects the huge impact of changes in the energy structure on the construction industry carbon emissions, as illustrated in Figure 7. The contributions of the unit value energy factor with respect to the declining carbon emissions were primarily concentrated in the regions of Shanxi, Henan, and Heilongjiang, i.e., regions whose technical levels and resource utilization are relatively low. Because these provinces were experiencing rapid development, as evidenced by large construction completion areas, the contributions of the unit value energy factor were substantial. In other regions, i.e., Chongqing, Liaoning and Xinjiang, the contributions of this factor fluctuate around −0.1. As indicated in Figure 7, the annual combined effects of the unit value energy factor indicate an increasing trend during the 2005 to 2014 period.
However, in 2010, the positive effects of changes in the unit value energy factor resulted in a dramatic reduction of emissions, which is likely because the provinces began to adjust the structure of industrial construction in 2010. In general, the unit value energy factor contributes significantly to the reduction in carbon emissions and to the subsequent growth trend. Direct Energy Proportion As indicated in Figure 8, compared with the other identified factors, the direct energy proportion factor exhibits the smallest growth range regarding the reduction of carbon emissions. Regions where the contribution value was positive are located in the northeast and western areas, which include the provinces of Heilongjiang, Gansu, Fujian, Hunan, Guangxi and Shanxi. These provinces are characterized by relatively low technical levels and low levels of resource utilization, which, in turn, result in low carbon emissions efficiency. Accordingly, as development leads to a reduction in carbon emissions, the contribution becomes negative. The contributions that reduce the carbon emissions were primarily concentrated in the eastern coastal areas, where the provinces are characterized by high levels of economic development, foreign capital utilization, population quality, energy efficiency and environmental management standards. Hence, these areas served as examples to the surrounding areas by demonstrating how to control environmental problems and enhance carbon emissions efficiency. As indicated in Figure 8, this factor has a fluctuating contribution of −0.1, which indicates an increasing trend during the 2005 to 2014 period. This increasing trend is due to the effects of changes in energy intensity and structure, to the development of new energy and to the reduction in on-site construction carbon emissions. Accordingly, it is concluded that the direct energy proportion is a factor that contributes to the reduction of carbon emissions.
Policy Implications Considering the significant discrepancies across provinces, improving the accuracy of the calculations of the construction industry carbon emissions is critical for achieving China's national action targets with respect to climate change. It is also critical to identify the contributing factors of these construction industry carbon emissions at the provincial level. Thus, the major policy proposals are summarized herein. Carbon emissions in the construction industry in China increased by 55.6% from 2005 to 2014, with an absolute increment of 140,776 tons. Provinces such as Zhejiang, Jiangsu, and Guangdong have high construction industry carbon emissions and low carbon intensity. Other provinces, such as Shanxi, Guizhou, and Inner-Mongolia, are located inland; they exhibit high construction industry carbon emissions and low carbon emissions efficiency. These results suggest that policy makers should improve technical knowledge and promote the coordination of economic development and environmental resources. Hence, the existing policies and measures for energy savings and emissions reduction must be steadfastly promoted to achieve a further reduction in China's carbon intensity level, including financial incentives for energy-saving technological transformation, energy savings assessments for enterprises, and obligatory targets for provincial-level energy consumption intensity. Meanwhile, greater effort should be made to exploit the emissions reduction potential through the optimization of current energy systems. In particular, the implementation plan and technology roadmap for realizing China's 2050 renewable energy development goals must be detailed and executed in the near future. The driving forces of the various provinces regarding the increase in China's national carbon emissions changed dynamically over time. The direct carbon emissions ratio and the unit value energy consumption have fluctuating effects on carbon emissions at the national level, while the value creation effect has a negative effect on carbon emissions. Furthermore, the enhancement of indirect carbon intensity offsets the carbon emissions in all factors.
Meanwhile, the output scale is the primary influencing factor with respect to the increasing emissions in China and its provinces, because the rapid expansion of construction production is the leading force behind the increase in carbon emissions. Several reasons contributed to this variation, including the level of economic development, the characteristics of the regional industrial structures, and the special policies of some provinces and cities, such as the low-carbon provinces of Shaanxi, Guangxi, Hubei, Liaoning, and Yunnan. Furthermore, the fluctuations of the various factors were more consistent between 2005 and 2014, with the peaks concentrated primarily in 2008 and 2010. In 2008, influenced by the global economic impact, the direct carbon emissions, indirect carbon emissions, completed area and total output value of the construction industry were greatly affected, which explains the fluctuations in the various indicators. From 2010 to 2012, the change in the energy structure and the development of prefabricated construction greatly influenced the output value of the construction industry and the associated carbon proportion, thus impacting the contribution level of every index.
Accordingly, an important problem regarding the future development of China is to coordinate the relationship between economic growth and carbon emissions. In addition, to reduce carbon emissions, the promotion of low-carbon building technology and the reduction of high-carbon-consumption building materials are essential. Conclusions This study separated construction industry carbon emissions into two categories, the direct category and the indirect category, and adopted the LMDI decomposition method to systematically examine the contributing factors in each province. Furthermore, corresponding strategies and emissions reduction advice are proposed for the different provinces based on their specific characteristics and the underlying driving forces of their construction industry carbon emissions. It was concluded that carbon emissions increased, whereas carbon intensity decreased dramatically, and indirect emissions accounted for 90% to 95% of the total emissions in the majority of the provinces between 2005 and 2014. The carbon intensities were high in the underdeveloped western and central regions, especially in Shanxi, Inner-Mongolia and Qinghai, whereas they were low in the well-developed eastern and southern regions, represented by Beijing, Shanghai, Zhejiang and Guangdong. Overall, the driving forces behind the carbon emissions of the construction industry in the various provinces, and the contributions of these forces to the increase in China's national construction industry carbon emissions, differ greatly from each other and change dynamically over time. Moreover, it should be noted that when considering provincial differences in the driving forces of the emissions, the classification of China's provinces proposed in this study varies significantly from previous studies that rely primarily on provincial geographical positions, economic development and superficial features of carbon emissions. Therefore, the formulation of China's emissions reduction strategies should consider both the features of provincial carbon emissions and the underlying forces shaping those features. Accordingly, these strategies must be refined in a timely manner to respond appropriately to the different developmental stages. Furthermore, the calculation and analysis methods used in this paper can also be applied to carbon emission studies in other countries. Our conclusions can also provide guidance for policies and development strategies for countries that have a massive land area and regions with greatly uneven development.
2018-06-30T00:51:45.456Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "3e2bf8838acbaff0ead9b27ed8b732d0cde1b6aa", "oa_license": "CCBY", "oa_url": "https://res.mdpi.com/d_attachment/ijerph/ijerph-15-01220/article_deploy/ijerph-15-01220.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e2bf8838acbaff0ead9b27ed8b732d0cde1b6aa", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Economics" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
125659025
pes2o/s2orc
v3-fos-license
Dichotomy in ultrafast atomic dynamics as direct evidence of polaron formation in manganites Polaron transport, in which electron motion is strongly coupled to the underlying lattice deformation or phonons, is crucial for understanding electrical and optical conductivities in many solids. However, little is known experimentally about the dynamics of individual phonon modes during polaron motion. It remains elusive whether polarons have a key role in materials with strong electronic correlations. Here we report the use of a new experimental technique, ultrafast MeV-electron diffraction, to quantify the dynamics of both electronic and atomic motions in the correlated LaSr2Mn2O7. Using photoexcitation to set the electronic system in motion, we find that Jahn-Teller-like O, Mn4+ and La/Sr displacements dominate the lattice response and exhibit a dichotomy in behaviour: overshoot-and-recovery for one sublattice versus normal behaviour for the other. This dichotomy, attributed to slow electronic relaxation, proves that polaron transport is a key process in doped manganites. Our technique promises to be applicable for specifying the nature of electron–phonon coupling in complex materials. Ultrafast interactions between electrons and the lattice of correlated materials can now be accessed with a new electron diffraction method. Knowing how electrons interact with the lattice as they travel through a material is crucial for understanding properties such as electrical conductivity. A team led by Yimei Zhu at Brookhaven National Laboratory showed that ultrahigh-energy electron diffraction can be used to quantitatively follow changes in the crystal structure and electronic system of a layered manganite on picosecond timescales, unveiling strong evidence for the formation of polarons, a quasiparticle that arises from strong electron-lattice interactions. This technique reveals the important role of electron-lattice interactions in doped manganites, and could be used to study such interactions in a range of correlated materials. INTRODUCTION The temporal correlation between electron motion and the atomic lattice distortion [1][2][3][4][5][6][7] is considered essential to electronic transport, in behaviour ranging from resistance-free flow 2 to self-trapping. 3 However, obtaining direct experimental information about the dynamics of electron-lattice coupling on the picosecond timescale is a daunting problem. As atomic and electronic masses differ by 3-5 orders of magnitude, it has been generally assumed (the Born-Oppenheimer approximation) that the motions of atomic nuclei and electrons can be considered as separate, with the electrons in their eigenstates for any given positions of the underlying atomic lattice. In condensed matter systems with strong electron-electron or electron-lattice interactions, 7,8 however, the effective masses of the conduction electrons can be significantly increased. In other words, the electrons are slowed down and the atoms in the lattice can adjust their positions in a timely manner in response to changes in the electronic states, resulting in a strongly coupled system or 'polarons'. 1 Recently, the use of ultrafast pump-probe techniques to drive electronic systems out of equilibrium has emerged as a powerful means for altering electronic states, e.g., to yield photoinduced transient superconductivity in cuprates 4,9,10 and an antiferromagnetic to ferromagnetic transition in manganites.
11 This opens the possibility of observing the correlation of the electronic and atomic motions that occurs on entry into a nonequilibrium state. 10 Here we pursue this idea through the quantitative ultrafast and ultrahigh-energy electron diffraction (MeV-UED) characterisation of the prototypical half-doped bi-layer manganite, LaSr2Mn2O7. We are able to quantitatively follow, on the picosecond timescale, the changes in the crystal structure and electronic system of this material in response to photoexcitation, and from those observations find an unanticipated correlation of the two systems that illustrates the power of this new technique in understanding the electronic properties of condensed matter systems. LaSr2Mn2O7 belongs to the family of A-site-doped layered manganites A_{n+1}Mn_{n}O_{3n+1} (where A is a rare-earth element and n = 2). Below its charge and orbital ordering (OO) temperature T_CO-OO = 210 K, the two types of Mn ions order in the double MnO2 layers in a checkerboard pattern of nominal Mn3+ and Mn4+ (Figure 1a), yielding a 'charge ordered' (CO) state. 12 At the same temperature, the excess electron present on the Mn3+ undergoes an ordering transition such that it occupies, alternatingly, the 3x^2−r^2 and 3y^2−r^2 orbitals on adjacent atoms along the crystallographic b direction, 13,14 a phenomenon known as OO. The coexisting CO-OO state in the half-doped manganites has been extensively studied because it can yield insight into the colossal magnetoresistance effect observed in perovskite-related manganites. As the CO is coupled to breathing-mode lattice distortions and the OO is coupled to Jahn-Teller-mode lattice distortions, [15][16][17] it has been suggested that the electrons in doped manganites are heavily dressed with these lattice distortions, forming polarons, and that the formation, melting and ordering of the polarons are the key to the colossal magnetoresistance effect. 3,[18][19][20][21][22] Thus, this system is ideal for studying polarons, with the accompanying dynamics of the interplay of electronic and lattice degrees of freedom in a complex solid. The OO and CO in the manganites are directly measurable as superlattice peaks in X-ray and electron diffraction. To discover the dynamics of atom-specific lattice distortions during polaron formation through time-dependent structural refinement, simultaneous retrieval of all relevant reflections in a single diffraction pattern is highly desirable, but challenging. Recent ultrafast X-ray diffraction measurements revealed that photoexcitation suppressed the CO and OO in the half-doped manganites La0.5Sr1.5MnO4 (n = 1; refs 23, 24) and Pr0.5Ca0.5MnO3 (n = ∞; ref. 25). However, these experiments involved rotation of the samples and changes in the incident X-ray energy and flux in order for different classes of peaks to be measured, making it not only hard to precisely clock the 'time-zero', but, more importantly, also difficult to properly renormalise the intensities of those separately measured reflections. In contrast, the UED technique, typically using tens-of-keV electrons, can simultaneously measure CO and OO superlattice peaks, but the number of reflections is usually too limited for structural refinement. [26][27][28][29] It was recently demonstrated 30,31 that 2.8 MeV electrons can double the number of accessible reflections relative to 50 keV electrons. 26 Such an expansion of high-order reflections, which are very sensitive to atomic displacement, is crucial to the accurate detection of phonons.
Here, we use the recently commissioned accelerator-based relativistic MeV-UED system, capable of 130 fs temporal resolution, 30,31 to directly measure the dynamic paths of atoms during the suppression of the OO and CO states in a LaSr2Mn2O7 single crystal under 800 nm (1.55 eV) laser excitation. The pump-probe set-up is illustrated in Figure 1a and described in Methods, along with additional advantages of MeV-UED. Eighty-six reflections, many of them associated with the OO and CO, are recorded simultaneously in the (001) zone, enabling us to extract previously undisclosed atomic dynamics that strongly complement the earlier studies of photoinduced phase transitions in manganites. RESULTS A typical (001) diffraction pattern of LaSr2Mn2O7 at 77 K is shown in Figure 1b. The appearance of a series of sharp satellite spots indicates the existence of the superstructure modulation (Figure 1c). The superlattice spots (h k 0) with k = 4n ± 1 and k = 4n ± 2 represent the OO and the CO, respectively. The notation is based on space group Bbmm with the lattice constants a = 0.5443 nm, b = 1.0194 nm and c = 1.9816 nm. The reflections (210) and (220) correspond, respectively, to (1/4,1/4,0) and (1/2,1/2,0) defined in the one-Mn Brillouin zone convention. 13 Photoinduced suppression of the CO and OO Figure 2a and b shows the evolution of the peak intensity I(t) of the CO and OO superlattice reflections as a function of time at 77 K, normalised by the intensities before time zero (I_0), at a pump fluence of 4 mJ/cm^2. Both the CO and OO intensities drop quickly upon photoexcitation, indicating a reduction of both charge and orbital order parameters. After 5 ps, the OO intensities settle at a reduction of ~28%, which persists up to the maximum time delay studied of 200 ps. Similar behaviour was observed for the CO peaks. By fitting the curves to an exponential decay, the time constants associated with the CO and OO phase transition dynamics were derived: τ_CO = 1.97 ± 0.30 ps for CO and τ_OO = 1.86 ± 0.12 ps for OO, identical within the measurement errors. In contrast, the decay of the Bragg peak intensity (inset of Figure 2a), representing the average structure, exhibits a time constant of τ_Bragg = 2.59 ± 0.45 ps with a much weaker, ~2.5%, reduction. Figure 2c presents the relative reduction of the OO superlattice peak as a function of pump fluence up to 10 mJ/cm^2. It clearly displays a saturation point at 5 mJ/cm^2: the OO intensity decreases linearly as the pump fluence increases from zero to 5 mJ/cm^2 and then remains unchanged at -37% for higher fluences. Note that qualitatively similar results were observed in ultrafast X-ray diffraction experiments on other half-doped An+1MnnO3n+1 manganites. 23,25 Yet, the dynamics appears to be much faster in Pr0.5Ca0.5MnO3 (n = ∞; τ_OO ~ 0.5 ps in ref. 25), with complete suppression of the OO at a fluence of 5 mJ/cm^2. The slower dynamics in LaSr2Mn2O7 provides a clearer indication that the atomic motion may be cooperative, driven non-thermally through the rearrangement of charges. The much weaker photoinduced suppression of the Bragg peak intensity suggests that an increased Debye-Waller (DW) factor, i.e., increased disorder in the superlattice due to laser heating, is unlikely to account for the observed large suppression of the CO/OO peak intensities. This is because an increase of the DW factor would cause essentially the same percentage drop in intensity for both a Bragg peak and its satellite CO/OO peaks.
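The time constants above follow from single-exponential fits to the normalised peak intensities. A minimal sketch of such a fit is given below; the trace it fits is synthetic (generated from the reported OO values of ~28% suppression and τ_OO ≈ 1.86 ps plus invented noise), so the function name, noise level and time grid are illustrative assumptions, not the experiment's actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def suppression(t, amp, tau):
    """Normalised peak intensity I(t)/I0: unity before time zero,
    a single-exponential drop of depth `amp` afterwards."""
    return np.where(t < 0.0, 1.0, 1.0 - amp * (1.0 - np.exp(-t / tau)))

# Synthetic stand-in for a measured OO trace (noise level is invented).
rng = np.random.default_rng(0)
t = np.linspace(-2.0, 20.0, 120)                  # pump-probe delay in ps
i_obs = suppression(t, 0.28, 1.86) + rng.normal(0.0, 0.01, t.size)

popt, pcov = curve_fit(suppression, t, i_obs, p0=(0.2, 1.0))
amp_fit, tau_fit = popt
amp_err, tau_err = np.sqrt(np.diag(pcov))
print(f"suppression depth = {amp_fit:.3f} +/- {amp_err:.3f}")
print(f"time constant tau = {tau_fit:.2f} +/- {tau_err:.2f} ps")
```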
To properly understand the impact of laser heating, we performed the MeV-UED experiments at different base sample temperatures at a fluence of 3.4 mJ/cm^2. As shown in Figure 2d, the OO remains substantial at 200 K; it would have disappeared, however, if the sample had been heated to T_CO-OO = 210 K. Hence, the overall heating of the sample must be less than 10 K. If all the absorbed laser energy converted to thermal energy, the temperature increase from a single pulse is estimated to be less than 90 K (Supplementary Note 2). However, the absorbed energy also dissipates, resulting in a stable temperature distribution across the sample and a much smaller temperature increase at negative time delay with respect to the next pump-probe measurement. The residual heating in our set-up was measured to be ~5 K (Figure 1d), which agrees with the calculated upper-bound temperature rise of 12 K (Supplementary Note 2 and Supplementary Figure S4). Therefore, the sample temperature is estimated to be lower than 87 K during the measurements. In comparison, to thermally induce the observed large suppression of the OO peak intensity, the sample would need to be heated to above 160 K. 32 Hence, we conclude that the observed OO dynamics is due to photoinduced electronic excitation. Structural evolution and the associated phonon modes We proceed to present the most significant findings of this work, achieved by quantitatively analysing the simultaneously observed Bragg, CO and OO reflections in the (001) pattern at a series of time delays. The key features of the crystal structure are illustrated in Figure 3a. Considering the individual lattice distortion modes (Jahn-Teller, breathing, rotation and La/Sr), we calculated the diffraction patterns and compared them with the observations (see Methods). We found that, despite the clear changes in the OO and CO reflection intensities, the crystal symmetry of the system remains unchanged (space group Bbmm). Furthermore, we identified that the 32 OO superlattice peaks dominate the fitting because of the generally much weaker CO peak intensities (Figure 1b) and the small change in the Bragg peak intensities (Figure 2a). Below we first show, in Figure 3c, our analysis of the atom-specific lattice distortions at one specific time, t = 14 ps after the photoexcitation, at which the system is in quasiequilibrium. Then, we present the results for the series of time delays in Figure 3d. [Figure 3 caption, in part: (d) Distinctive time dependences of the Jahn-Teller-like (O, Mn4+) and La/Sr displacements. The former (black circles) exhibits a single exponential decay with a time constant of 2.72 ps, while the latter (blue squares) shows a two-step behaviour, namely a 3.62 ps decay followed by a 4.32 ps recovery. The displacements are normalised to their respective values before time zero. The error bars correspond to ±1 standard deviation of the OO peak intensities shown in Figure 2b; see also Supplementary Table S2 and Figure 3a.] An excellent fit (with χ^2 almost one order of magnitude smaller than for any individual mode) is found to result from cooperative Jahn-Teller-like (O and Mn4+) and La/Sr lattice distortions, with similar suppressions of 23% from their original values, indicating the intimate relationship between the Jahn-Teller lattice distortion and the OO. The refined atomic positions before and after the photoexcitation are listed in Supplementary Table S1.
The result that the best fit comes from cooperative Jahn-Teller-like (O, Mn4+) and La/Sr displacements holds for all time delays (see Supplementary Figure S3). Any other combination of two individual modes yields worse results, whereas combinations of three and four modes show little improvement in the goodness of fit (see caption of Supplementary Figure S3b) for the 10-14 ps time delays we tested. Therefore, we conclude that the Jahn-Teller-like O, Mn4+ and La/Sr displacements dominate the lattice response to the photoexcitation. Figure 3d shows the time dependence of these lattice distortions. It is remarkable that these Jahn-Teller-mode-related lattice distortions have different dynamic behaviours upon photoexcitation, as can be seen from the evolution of the atomic displacements in time. The O and Mn4+ lattice distortions show a single exponential drop with a time constant of 2.72 ps. By contrast, the La/Sr lattice distortion decay is completely different, described by a two-step behaviour, namely first a 3.62 ps exponential decay followed by a 4.32 ps recovery. In other words, the La/Sr lattice distortion overshoots during the first 5 ps before changing course, with all the distortions converging to reach a quasiequilibrium state at 14 ps. This dichotomy of overshoot-and-recovery versus normal dynamics is robust, but hidden in the averaged OO peak intensity (Figure 2b). It becomes visible only after the more rigorous analysis of the large data set accessible with the MeV-UED method. Our dynamical structural refinement can also help to separate the Debye-Waller effect. The atomic displacement corresponding to the observed 28% decrease of the OO peak intensity yields a 1% increase in the Bragg peak intensity non-thermally (unrelated to the DW factor). As we experimentally observed a 2.5% drop of the Bragg peaks at 14 ps, the change of the DW factor is likely to contribute a 3.5% intensity drop of the Bragg peaks. Thus, we attribute the decay of the Bragg intensities mostly to lattice heating, whereas the suppression of the OO reflections is driven non-thermally through the rearrangement of charges. DISCUSSION A likely scenario for the observed overshoot-recovery versus normal dichotomy is the unequal involvement of the electron dynamics in determining the behaviour of different parts of the atomic system during the first 5 ps. To consider this in more detail, it is first necessary to determine what microscopic electronic process is induced by the 1.55 eV photons. The Jahn-Teller distortion is known to split the twofold-degenerate e_g levels on the Mn3+ site (marked A in Figure 4a) into a lower, locally occupied 3z^2-r^2 (or z^2 for shorthand notation) level and a higher, unoccupied x^2-y^2 level 33 (see Figure 4b). A recent first-principles electronic structure study of a half-doped manganite 15 showed that the Jahn-Teller energy gain on the Mn3+ site is E_JT = 0.226 eV, the breathing-mode energy gain on a pair of Mn3+ and Mn4+ sites is E_BM = 0.084 eV and the effective intersite Coulomb repulsion is V_eff = 0.44 eV. On the basis of these theoretical results, as shown in Figure 4b, an on-site d-d transition on the Mn3+ site costs an energy of ~2E_JT = 0.46 eV. On the other hand, as shown in Figure 4a and c, for an intersite d-d transition the energy cost is ~3V_eff + E_JT + E_BM = 1.63 eV. The band dispersion will modify the thresholds for the photoinduced intrasite and intersite d-d transitions by about ±0.25 and ±0.45 eV, respectively.
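These thresholds can be checked with one-line arithmetic. The sketch below reproduces the two excitation windows from the quoted energy scales and tests whether the 1.55 eV pump falls inside them; all numbers are taken directly from the text, and only the formatting choices are ours.

```python
# Energy scales from the first-principles study cited in the text (eV).
E_JT, E_BM, V_eff = 0.226, 0.084, 0.44

intrasite_centre = 2.0 * E_JT                 # on-site d-d transition, ~0.45 eV
intersite_centre = 3.0 * V_eff + E_JT + E_BM  # intersite d-d transition, ~1.63 eV

# Band dispersion broadens the thresholds (values from the text).
intrasite = (intrasite_centre - 0.25, intrasite_centre + 0.25)
intersite = (intersite_centre - 0.45, intersite_centre + 0.45)

pump = 1.55  # eV (800 nm)
for name, (lo, hi) in [("intrasite", intrasite), ("intersite", intersite)]:
    print(f"{name}: {lo:.2f}-{hi:.2f} eV -> pump inside: {lo <= pump <= hi}")
```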
A value of 1.55 eV falls into the excitation energy range of the intersite d-d transition only (1.63 ± 0.45 eV), and it is close to the 1.63 eV expected from the band-centre-of-mass consideration, which implies high transition rates. Hence, an intersite d-d transition is much more likely to be induced by the 1.55 eV optical photons than the other transitions. To be more specific, the photoexcitation causes an intersite z^2-z^2 transition, because the matrix element between neighbouring z^2 and x^2-y^2 orbitals vanishes by symmetry (Figure 4d). The initial z^2 state of the electron pumped onto the Mn4+ site will relax into a hybrid z^2 plus x^2-y^2 orbital to fit the new, non-Jahn-Teller-distorted oxygen surroundings. Depending on the speed of this electronic relaxation, as illustrated in Figure 4e and f, the La/Sr overshoot behaviour may (for slow relaxation) or may not (for fast relaxation) be exhibited: when the electron relaxation is slow (or, relatively speaking, the lattice response is fast), the lattice can display two distinct behaviours corresponding to the two different electronic states before and after the relaxation. In strong contrast, the Mn4+ displacement is dragged along by the on-site electronic relaxation, leading to its normal behaviour. After the electronic relaxation is complete, the Jahn-Teller O and Mn4+, and La/Sr, lattice distortions can converge. Summary Our experimental results and theoretical analyses consistently reveal that the relaxation of the electron pumped away from the Mn3+ site in LaSr2Mn2O7 is slow enough to be followed even by La and Sr, the heaviest elements in the system. This is direct evidence for polaron formation, and it shows that the motion of electrons within the cloud of atomic lattice distortions dominates the behaviour of this system and thus is likely to be the key to understanding the doped manganites. In addition to the intriguing coupled phenomenologies of OO, CO and colossal magnetoresistance in the manganites, many exotic physical properties emerge in materials based on transition-metal elements. 8 Strong electron-electron interactions on the transition-metal ions themselves are known to be a key driving force in determining those properties; however, the critical relevance of the lattice degrees of freedom is often a highly debated issue. Intrinsically strong electron-electron and electron-lattice interactions can in principle slow down the electrons and favour the formation of polarons. By using the manganite LaSr2Mn2O7 as a test bed, we have demonstrated here the capability of MeV-UED to quantify very short timescale correlations of the atomic and electronic systems in a complex material. Its strength is that it can simultaneously measure a large number of reflections and thus provide time-dependent quantitative analysis of the atom-specific lattice dynamics on the picosecond and subpicosecond timescale. We anticipate wide application of the technique to correlated materials in general and the cuprate superconductors in particular, where, for example, transient superconductivity approaching room temperature was reported to be induced by pump photons close in energy to a specific lattice oscillation mode. 4,9,10 METHODS Sample and experimental set-up LaSr2Mn2O7 single crystals were grown in a floating-zone furnace.
The crystal sample was cut along the layer-stacking direction and thinned to 80 nm thickness by mechanical polishing and low-energy Ar+ ion-milling to allow electron transmission and to ensure that the whole probed volume is properly pumped by optical pulses, whose penetration depth is 120 nm. The flake was then transferred to a Cu grid and characterised at room temperature and 77 K using a 300 keV field-emission transmission electron microscope. In the MeV-UED experiments, optical pulses with a duration of 100 fs and a centre wavelength of 800 nm (1.55 eV) were focused down to 1.5 mm on the sample to trigger electronic excitations and crystal structure evolution. At a specific time delay, well-synchronised 2.8 MeV electron pulses with a time resolution of 130 fs were collimated to 200 μm within the pumped area. Nearly 10,000 electron diffraction patterns were recorded for various pump-probe time delays, sample temperatures and pump fluences. The sample was not damaged by the high-energy electron pulses, as the experimental results were highly reproducible. The high-quality electron beams (10^6 electrons per bunch, of length 100 fs, with longitudinal and transverse coherence lengths of ~2 and 10 nm, respectively) were produced using a unique Brookhaven National Laboratory-type photocathode radio-frequency gun with a deflecting cavity. 30,31 The ultrahigh electron energy significantly minimises space-charge effects, allowing for a high flux of electrons in extremely short pulses. MeV electrons can also penetrate thicker samples and significantly reduce multiple scattering effects, in favour of quantitative analyses, owing to their longer mean free path compared with the ~50 keV electrons typically used in DC-UED. Moreover, as electrons interact with matter more strongly than X-rays, 34,35 the pump-probe approach with electrons yields a large number of elastic scattering events and enables observations that are sensitive to both electronic and atomic motions. Diffraction analysis For time-resolved crystal structure refinement, the Bloch-wave method, a well-established quantitative dynamical diffraction approach in which the multiple scattering effects are taken into account, was used to calculate the electron diffraction pattern of the crystal for the various lattice distortion modes considered in this paper. 36 The results were compared with the experimental OO, CO and Bragg diffraction intensities to determine the roles of the various lattice distortion modes at each pump-probe time delay. In addition, the sample geometry (80 nm in thickness with a 0.6° bending angle along the [040] direction) was determined by matching the intensities of ~40 Bragg and OO spots in the (001) zone before time zero, using the atomic positions based on neutron diffraction 14 and refined by electron diffraction experiments.
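The refinement above relies on full Bloch-wave dynamical calculations. As a much simpler intuition aid, the kinematical toy model below (our own construction, not part of the paper's analysis) shows why superlattice intensities are far more sensitive to a small alternating displacement than Bragg peaks are: it treats a 1-D chain with a period-doubling distortion of amplitude delta and evaluates kinematic structure factors only, ignoring multiple scattering entirely.

```python
import numpy as np

def kinematic_intensities(delta, n_cells=4, f=1.0):
    """Toy 1-D chain: one atom per cell at x = m + delta*(-1)**m.
    The alternating displacement `delta` doubles the period, so a
    half-order (superlattice) reflection appears whose intensity scales
    as delta**2, while the integer-order (Bragg) peak barely changes."""
    m = np.arange(2 * n_cells)
    x = m + delta * (-1.0) ** m          # positions in units of the cell
    out = {}
    for h in (0.5, 1.0):                 # superlattice vs Bragg order
        F = np.sum(f * np.exp(2j * np.pi * h * x))
        out[h] = abs(F) ** 2
    return out

for d in (0.02, 0.015, 0.01):            # shrinking distortion amplitude
    I = kinematic_intensities(d)
    print(f"delta={d:.3f}: I_super={I[0.5]:.3f}, I_Bragg={I[1.0]:.1f}")
```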
Determination of the Probabilistic Properties of the Critical Fracture Energy of Concrete Integrating Scale Effect Aspects This paper presents an extension of the validation domain of a previously validated three-dimensional probabilistic semi-explicit cracking numerical model, which was initially validated for a specific concrete mix design. This model is implemented in a finite element code. The primary objective of this study is to propose a function that enables the estimation of the critical fracture energy parameter utilized in the model and to validate its effectiveness for various concrete mix designs. The model focuses on macrocrack propagation and introduces significant aspects such as employing volume elements for simulating macrocrack propagation and incorporating two key factors governing its behavior. Firstly, macrocrack initiation is linked to the uniaxial tensile strength (f_t). Secondly, macrocrack propagation is influenced by a post-cracking dissipation energy in tension. This energy is taken equal to the mode I critical fracture energy (G_IC) based on the linear elastic fracture mechanics theory. Importantly, both f_t and G_IC are probabilistic properties influenced by the volume of concrete under consideration. Consequently, in the numerical model, they depend on the volume of the finite elements employed. To achieve this objective, numerical simulations of fracture mechanics tests are conducted on a large double cantilever beam specimen. Through these simulations, we validate the proposed function, which is a crucial step towards expanding the model's applicability to all concrete mix designs. Introduction Concrete, commonly treated as homogeneous in macroscopic numerical models for simplicity, is inherently heterogeneous. Inner defects arise from cement paste hydration and restrained shrinkage, causing cracks even before external loads are applied. Given the inherent formation of cracks, modeling their initiation and propagation presents a critical challenge in predicting concrete behavior. The importance of crack formation has instigated a variety of studies, resulting in diverse constitutive models. Techniques for simulating the cracking process in concrete structures fall into two broad approaches, implicitly or explicitly addressing the kinematic discontinuity, resulting in continuum or discrete models. In continuum models, cracks are implicitly represented, and the failure process is considered through the degradation of material stiffness, altering its constitutive equation. Some models in this field are damage models [1,2], the smeared crack model [3,4], and the plasticity model [5]. Conversely, in discrete cracking models, cracks are explicitly treated as geometrical entities, manifesting as discontinuities of displacement at interfaces between finite elements or integrated into the finite element formulation. Some discrete models are the cohesive crack model or fictitious crack model [6,7], the extended finite element method (XFEM) [8], the embedded finite element method (EFEM) [9] and lattice models [10].
Additionally, probabilistic models address the significant scale effect in concrete structure cracking by employing random distribution functions of material properties to explicitly consider concrete heterogeneity. In this work, a semi-explicit probabilistic cracking numerical model, based on the finite element approach and developed and validated in previous studies [11,12], is employed. This model specifically focuses on the macrocrack propagation problem, incorporating several crucial characteristics. One significant feature of the model is its representation of macrocrack propagation using volume elements, which offers a realistic portrayal of the phenomenon. The model employs two criteria governing macrocrack propagation, as follows: (1) macrocrack initiation is linked to the uniaxial tensile strength, f_t, and (2) macrocrack propagation is influenced by a post-cracking dissipation energy in tension. The complete propagation of the macrocrack occurs when all the post-cracking dissipation energy has been consumed. The evolution of the post-cracking dissipation is governed by a simple damage approach. A distinctive aspect of the model's damage approach is that the post-cracking dissipation energy is derived from linear elastic fracture mechanics (LEFM), specifically utilizing the mode I critical fracture energy, referred to as G_IC. Both f_t and G_IC are probabilistic mechanical characteristics that depend on the size of the mesh elements. While the mean value of G_IC is considered an intrinsic characteristic of concrete, independent of the mesh element size, its standard deviation is influenced by size effects. Previous experimental and numerical studies [21][22][23][24] have successfully determined and validated the probabilistic properties of f_t (mean and standard deviation values) for concretes with a compressive strength of up to 130 MPa, as a function of the size of the mesh elements. However, acquiring equivalent information regarding G_IC has proven to be a challenge. This critical parameter, which is directly linked to concrete's crack resistance [25], is complex to estimate accurately in brittle heterogeneous materials due to their nonlinear behavior. This complexity arises from the substantial fracture process zone, whose size is considerably large compared with the specimen's dimensions, resulting in the manifestation of the size effect [26]. Consequently, deriving an accurate value of G_IC for concrete, unaffected by these factors, is demanding. Since G_IC is typically defined as the energy consumed during crack propagation in an infinite specimen, obtaining a size-independent assessment necessitates tests on specimens substantially larger than the fracture process zone [27,28]. Therefore, the primary objective of this paper is to determine the probabilistic properties of G_IC based on the size of the mesh elements. By addressing this knowledge gap, this research aims to contribute to a comprehensive understanding of the probabilistic properties of G_IC in relation to macrocrack propagation in concrete. The findings obtained from this study provide a substantial contribution to the field of concrete structure modeling.
Determination of the Probabilistic Properties of G_IC In previous research [12], the standard deviation of G_IC was determined using an inverse analysis for a specific concrete mix design, where the mean value of G_IC was known. To further investigate the probabilistic properties of G_IC, an analytical relation was proposed to establish a connection between the standard deviation, σ, of G_IC and the degree of heterogeneity of the mesh element. In this work, the degree of heterogeneity, r_e, is defined as

r_e = V_e / V_a, (1)

where V_a is the volume of the largest aggregate size present in the concrete, and V_e denotes the volume of the mesh element. The analytical relation proposed to establish this connection between σ and r_e is Equation (2), in which A = -8.538, B = 70.88 and µ(G_IC) is the mean value of G_IC. The relation described in Equation (2) was proposed specifically for G_IC = 1.25 × 10^-4 MN/m and for a largest aggregate size equal to 12 mm (in terms of diameter). These concrete parameters were derived from experimental research, which enabled the determination of the intrinsic value of G_IC [29,30]. It is crucial to note that, in the model, the volume of the finite elements must exceed the largest aggregate volume. Based on Equation (2), it is natural to propose the corresponding relation, Equation (3), for the coefficient of variation of G_IC. It is crucial to note that the use of Equation (2) does not lead to the determination of intrinsic values of σ(G_IC). These values are inherently linked to the specific mechanical model proposed and to the chosen type of finite elements, such as the linear elements of the present case. Consequently, Equation (2) cannot be applied indiscriminately within the framework of other mechanical models. From Equation (3), it can be observed that the coefficient of variation of G_IC becomes negligible when the degree of heterogeneity (r_e) reaches a value of 4000. Notably, it is important to reiterate that, as G_IC is an intrinsic, size-effect-independent material property, its mean value does not depend on r_e. In previous research [29], focusing on the probabilistic properties of f_t, which depend on the material heterogeneity degree and can therefore be expressed as f_t(r_e), Equations (4) and (6) were proposed to evaluate the mean value, µ(f_t(r_e)), and the coefficient of variation, σ/µ(f_t(r_e)), respectively. In Equation (4), a = 6.5 MPa and y is provided by Equation (5), where f_c represents the concrete compressive strength in MPa and α = 1 MPa. In Equation (6), c = 0.35 and d is provided by Equation (7). The validity of Equations (4)-(7) has been confirmed for concretes with a compressive strength f_c ≤ 130 MPa and a maximum aggregate size of 10 mm or larger. Consequently, by combining Equation (3) with Equations (4)-(7), it is feasible to estimate the coefficient of variation of G_IC as a function of r_e for concretes that satisfy the aforementioned criteria. This estimation can be accomplished by establishing an expression through algebraic manipulation of Equations (3)-(7). Based on this procedure, and considering the same heterogeneity degree in the coefficients of variation of the tensile strength and of the critical fracture energy, Equation (8) is proposed. The deduction of this equation is presented as Supplementary Material.
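As a worked illustration, the sketch below evaluates the heterogeneity degree and a resulting coefficient of variation of G_IC. The closed form of Equation (2) is not reproduced above, so the functional form used here is an assumption on our part: a log-linear law CoV(%) = A·ln(r_e) + B, chosen only because, with the quoted constants, it vanishes near r_e ≈ exp(-B/A) ≈ 4.0 × 10^3, as the text requires. The aggregate volume of 1.13 cm^3 comes from the reference DCB study described in the next section.

```python
import math

A, B = -8.538, 70.88   # constants quoted with Eq. (2)

def heterogeneity_degree(v_elem_cm3, v_agg_cm3):
    """Eq. (1): r_e = V_e / V_a; the model requires V_e > V_a."""
    if v_elem_cm3 <= v_agg_cm3:
        raise ValueError("finite elements must be larger than the coarsest aggregate")
    return v_elem_cm3 / v_agg_cm3

def cov_gic_percent(r_e):
    """ASSUMED log-linear form of Eqs (2)/(3): CoV(%) = A*ln(r_e) + B.
    This form is a guess; it merely reproduces the stated vanishing of the
    coefficient of variation near r_e ~ 4000 (exp(70.88/8.538) ~ 4.0e3)."""
    return max(0.0, A * math.log(r_e) + B)

v_a = 1.13  # cm^3, largest aggregate (12 mm) of the reference test
for v_e in (10.0, 100.0, 1000.0, 4500.0):
    r_e = heterogeneity_degree(v_e, v_a)
    print(f"V_e={v_e:7.1f} cm^3  r_e={r_e:8.1f}  CoV(G_IC)={cov_gic_percent(r_e):5.1f} %")
```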
Validation 3.1. Experimental Test Chosen for the Validation To validate the proposed strategy for estimating the coefficient of variation of G_IC, a structural problem previously studied by the authors [29,30] was selected for simulation. This specific experiment was previously used to determine Equation (2) for a regular concrete mix design [12,21], defined as Concrete 1. For this validation process, however, a high-strength concrete mix design was employed, defined as Concrete 2. Details regarding the compositions of both concrete mixtures can be found in Table 1, while the properties of Concrete 2 are outlined in Section 3.3. The test entails inducing macrocrack propagation in a large double cantilever beam (DCB) concrete specimen. Widely acknowledged as a method to measure mode I fracture toughness in unidirectional composites under both static and cyclic loading conditions [31,32], it involves applying a tensile load normal to the specimen's notch surface. In the field of LEFM, one of the major challenges lies in experimentally determining the critical fracture energy (G_IC) of concrete. It has been widely acknowledged that such tests need to be conducted on large concrete specimens to obtain accurate results [6,29,30,[33][34][35][36][37][38][39][40]. This is primarily because a higher degree of homogeneity is achieved when the specimen is large relative to the aggregate size. Consequently, the process zone at the tip of the propagating macrocrack, which is crucial for determining G_IC, extends to approximately 30 cm [30]. The distinguishing feature of this double cantilever beam (DCB) specimen lies in its considerable dimensions: 3.5 m length, 1.1 m width, and 0.3 m thickness, rendering it suitable for simulation purposes. The specimen's geometric details and applied loading conditions are depicted in Figure 1. During the test, crack propagation occurred from the bottom to the top. The load application point (P) was positioned 0.175 m from the beam's lower side, where the crack opening measurements were taken. Initially, section thinning was employed to guide the crack and maintain it in the median plane. However, this method was found to be inadequate, leading to the introduction of longitudinal prestressing through post-tensioning using multiple cables. The value of the applied prestressing force was 1230 kN. An interesting aspect of the experimental study developed in [30] was the evaluation of the process zone using acoustic emission techniques. This assessment revealed that the process zone had dimensions of approximately 30 cm in length and 12 cm in width, estimating the volume of the process zone (V_pz) at around 3600 cm^3. The size of this process zone was associated with a maximum aggregate size of 12 mm, corresponding to a maximum aggregate volume (V_a) of 1.13 cm^3. Furthermore, the determination of G_IC in [30] provided a mean value of µ(G_IC) = 1.25 × 10^-4 MN/m and a standard deviation of σ(G_IC) = 0.073 × 10^-4 MN/m. Remarkably, these experimental findings aligned with the theoretical values discussed in Section 2, specifically indicating a standard deviation of G_IC reaching zero for r_e approximately equal to 4000.
Probabilistic Numerical Model The three-dimensional semi-explicit probabilistic model, extensively described in [12], is developed in the finite element method (FEM) context and integrates heterogeneity and volume effects using a probabilistic approach. The code is written in the FORTRAN language. The model belongs to the fracture mechanics family of models and primarily deals with the propagation of mode I macrocracks. It does not, however, take mode II fracture propagation into account in its current version. Although it shares similarities with linear elastic fracture and nonlinear fracture models, it distinguishes itself from damage or smeared crack models by not attempting to simulate the microcracking process. The model utilizes three-dimensional (3D) linear tetrahedral elements to simulate macrocrack propagation. It employs a criterion based on the mode I critical fracture energy, G_IC. For each volume element, the dissipation of the cracking energy following the linear elastic behavior is modeled through a softening behavior that initiates when the tensile strength, f_t, is reached. This softening behavior, exhibiting a descending branch, is depicted by a linear relationship between the principal tensile stress and strain. The governing principle behind this linear relation is a classical isotropic damage law, uniquely characterized by the random assignment of G_IC and f_t to the mesh elements due to the model's probabilistic nature. The basic steps of the FEM code can be seen in Algorithm 1. Once the dissipative energy associated with this softening behavior reaches the value of G_IC, the stiffness matrix of the element is reduced to zero. As a result, macrocrack propagation is modeled through a sequence of fully damaged elements, rather than through the opening of interface elements as traditionally performed in fracture mechanics models. This characteristic defines the model as a non-explicit cracking model, as opposed to an explicit cracking model. It is important to note that, in this numerical model, the utilization of a simplistic damage approach is solely aimed at dissipating the energy related to the softening behavior until the G_IC value is reached, and it does not hold any physical significance. The objective of the present model does not involve explicitly modeling the microcracking process. It is worth emphasizing that, in the model, all mechanical criteria are assessed at the centroid of the linear volume elements. Moreover, the rationale behind considering f_t and G_IC as probabilistic lies in accounting for the material's inherent heterogeneity and in integrating scale effects, which are directly linked to this material characteristic. As illustrated in [21], the variation in tensile strength values stems from this phenomenon. Thus, the intensity of the scale effect diminishes with a higher material quality (higher f_c values) and reduced heterogeneity (measured as the ratio of the specimen's volume to the volume of the maximum aggregate). Furthermore, given the probabilistic nature of the numerical model, a Monte Carlo (MC) technique is employed to ensure statistically robust results. The core principle of this approach entails running numerous numerical simulations of a particular structural problem, encompassing varied spatial distributions of the mechanical material properties defined by identical parameters of the probability distributions. The resulting outcomes are subsequently subjected to comprehensive statistical analysis.
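To make the elementary softening law concrete, here is a minimal sketch under stated assumptions: the dissipated energy density is taken as the full area under the triangular stress-strain envelope (one of several possible conventions), l_e = V_e^(1/3) follows the energetic regularization described in the next paragraph, and the numerical values of E, f_t, G_IC and the element size are purely illustrative.

```python
import numpy as np

def softening_law(E, f_t, G_IC, V_e):
    """Sketch of the elementary damage law: linear elasticity up to f_t,
    then a linear descending branch sized so that the energy dissipated
    per unit volume equals g_IC = G_IC / l_e with l_e = V_e**(1/3).
    Convention assumed here: g_IC equals the full area under the
    triangular stress-strain envelope, giving eps_u = 2 g_IC / f_t."""
    l_e = V_e ** (1.0 / 3.0)
    g_ic = G_IC / l_e            # J/m^3 if G_IC in J/m^2 and l_e in m
    eps_0 = f_t / E              # strain at crack initiation
    eps_u = 2.0 * g_ic / f_t     # strain at which the element is fully damaged
    assert eps_u > eps_0, "element too large for a stable softening branch"
    def stress(eps):
        eps = np.asarray(eps, dtype=float)
        up = E * eps
        down = f_t * (eps_u - eps) / (eps_u - eps_0)
        return np.where(eps <= eps_0, up, np.clip(down, 0.0, None))
    return stress, eps_0, eps_u

# Illustrative numbers: E = 40 GPa, f_t = 4 MPa, G_IC = 125 J/m^2,
# a tetrahedron with the volume of a 2 cm cube.
stress, e0, eu = softening_law(40e9, 4e6, 125.0, 0.02 ** 3)
print(f"eps_0 = {e0:.2e}, eps_u = {eu:.2e}")
print("sigma(1e-3) =", stress(1e-3) / 1e6, "MPa")
```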
An overview of the model's formulation, highlighting its key aspects, is presented in Figure 2. As depicted in Figure 2a, the material heterogeneity is represented at the level of each finite element and is quantified through the heterogeneity degree r_e, as illustrated in Figure 2b. Figure 2c represents the random distribution of the tensile strength and fracture energy over the mesh elements, and the energy dissipation resulting from the cracking process, which is governed by an isotropic damage law. This constitutive law takes into account the tensile strength and the volumetric density of dissipated energy, symbolized as g_IC. The value of g_IC is determined using an energetic regularization technique [41], calculated as g_IC = G_IC/l_e, with l_e being the elementary characteristic length, determined in this context as l_e = (V_e)^(1/3). Finally, as portrayed in Figure 2d, the model yields global structural responses through the implementation of a Monte Carlo approach. Additional details about the model can be found in [12]. Distribution of Random Material Properties For the tensile strength, the material behavior is represented using the Weibull distribution. The probability density function, f_w(x, b, c), for a random variable x ≥ 0 is

f_w(x, b, c) = (b/c) (x/c)^(b-1) exp[-(x/c)^b], (9)

where b > 0 and c > 0 are the shape and scale parameters of the distribution, related to the dispersion and mean value of x, respectively. The mean µ_w and standard deviation σ_w of the distribution are evaluated, respectively, according to Equations (10) and (11):

µ_w = c Γ(1 + 1/b), (10)

σ_w = c [Γ(1 + 2/b) - Γ^2(1 + 1/b)]^(1/2). (11)

For the critical fracture energy, the lognormal distribution was chosen to describe the material behavior. Its probability density function, f_L(x, µ_L, σ_L), is

f_L(x, µ_L, σ_L) = 1/(x σ_L √(2π)) exp[-(ln x - µ_L)^2/(2σ_L^2)], (12)

where µ_L is the mean and σ_L is the standard deviation of the variable's natural logarithm. The expected mean value E_L(X) and variance Var_L(X) of the distribution are given by Equations (13) and (14), respectively:

E_L(X) = exp(µ_L + σ_L^2/2), (13)

Var_L(X) = [exp(σ_L^2) - 1] exp(2µ_L + σ_L^2). (14)

Estimation of the Model Parameters To ensure a consistent application of the model, it is crucial to precisely determine the parameters governing both the Weibull and lognormal distributions. These distributions involve two parameters each. However, considering the fracture energy as an intrinsic material property implies a constant mean value. Consequently, once its mean value is known, the task entails determining the scale and shape parameters of the Weibull distribution and the standard deviation of the lognormal distribution.
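Because the model fixes the mean of G_IC and derives its standard deviation from the scale law, assigning element values reduces to moment matching. The snippet below inverts Equations (13)-(14) to obtain (µ_L, σ_L) from a target mean and standard deviation and then draws one value per element; the target moments used here are the experimental ones quoted in Section 3.1, while the sample size and seed are arbitrary.

```python
import numpy as np

def lognormal_params(mean, std):
    """Invert Eqs (13)-(14): parameters of the underlying normal from the
    target mean and standard deviation of the lognormal variable."""
    sigma_L2 = np.log(1.0 + (std / mean) ** 2)
    mu_L = np.log(mean) - 0.5 * sigma_L2
    return mu_L, np.sqrt(sigma_L2)

# Target moments from the reference DCB study (MN/m).
mean_gic, std_gic = 1.25e-4, 0.073e-4
mu_L, sigma_L = lognormal_params(mean_gic, std_gic)

rng = np.random.default_rng(42)
samples = rng.lognormal(mu_L, sigma_L, size=100_000)  # one draw per element
print(f"target  mean={mean_gic:.3e}  std={std_gic:.3e}")
print(f"sampled mean={samples.mean():.3e}  std={samples.std():.3e}")
```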
The assessment of the Weibull distribution parameters involves an iterative numerical procedure designed to solve a nonlinear system of equations. This system combines the equations that define the distribution's mean and standard deviation, Equations (10) and (11), with the analytical scale law introduced in [21] and described in Equations (4)-(7). This scale law estimates the expected mean and standard deviation values for a specified concrete volume; here, it is applied at the finite element scale. The formulation originates from an experimental investigation intended to establish a relationship between concrete heterogeneity and the scale effect phenomenon. Through this procedure, each finite element receives specific parameters (b, c) defining the Weibull distribution that characterizes its behavior. Additional information about the analytical expressions and the implementation of the iterative procedure can be found in [12]. In turn, the methodology for estimating the standard deviation of the lognormal distribution is detailed in Section 2, while the approach to estimating its mean value is described in Section 4.
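A compact version of this iterative Weibull identification can be written directly from Equations (10)-(11): eliminating c gives a one-dimensional equation for the shape b in terms of the coefficient of variation, which a bracketed root finder solves. The target moments below are illustrative placeholders for the element-scale values that the scale law of Equations (4)-(7) would actually supply.

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

def weibull_params(mean, std):
    """Solve the nonlinear system of Eqs (10)-(11): find the shape b from
    the target coefficient of variation, then the scale c from the mean."""
    cov2 = (std / mean) ** 2
    def residual(b):
        return gamma(1.0 + 2.0 / b) / gamma(1.0 + 1.0 / b) ** 2 - 1.0 - cov2
    b = brentq(residual, 0.2, 50.0)       # bracket covers usual concrete CoVs
    c = mean / gamma(1.0 + 1.0 / b)
    return b, c

# Illustrative element-scale tensile strength moments (MPa).
b, c = weibull_params(mean=4.0, std=1.0)
print(f"shape b = {b:.3f}, scale c = {c:.3f} MPa")

rng = np.random.default_rng(1)
ft = c * rng.weibull(b, size=100_000)     # numpy's weibull has unit scale
print(f"sampled mean = {ft.mean():.3f} MPa, std = {ft.std():.3f} MPa")
```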
Numerical Simulations The numerical simulations conducted in this work for validation purposes focused on a high-strength concrete with the following properties: f_c = 105 MPa, E = 53.4 GPa, maximum aggregate size = 20 mm, and G_IC = 1.52 × 10^-4 MN/m. These concrete parameters were obtained from [42] and were suitable for performing the numerical simulations with the present model (as discussed in Section 2). It is worth noting that the mechanical characteristics and maximum aggregate size of this particular concrete differ significantly from those used in the simulations [12] from which Equation (2) was derived. Therefore, if Equation (8) is validated for this new high-strength concrete, it can be considered valid for a wide range of typical concretes. Figure 3 displays both frontal and 3D perspectives of the finite element mesh utilized in the simulation, pinpointing the locations where the prestressing force and the imposed displacements were applied. The mesh comprised 19,564 tetrahedral solid elements with linear interpolation. The simulation of the DCB test involved several boundary conditions to accurately represent its behavior. These conditions included the restriction of displacements along the X axis within the YZ plane, along the Y axis within the XZ plane, and along the Z axis at the central nodes in the XZ plane. Additionally, there were restrictions on Z axis displacements at the nodes where prescribed displacements were applied. Moreover, the simulation incorporated the application of prescribed forces, specifically in the Y direction, exerted on the elements situated on the specimen's bottom surface. The Monte Carlo simulation consisted of the execution of 30 independent finite element analyses. As shown in [12], this number of MC samples is sufficient to produce a consistent outcome concerning the variability of the average curve from the numerical simulations. For this level of mesh refinement, Monte Carlo simulations employing 30 or more finite element analyses did not exhibit significant variability in the average curve. The loading force versus notch opening displacement curves obtained from the Monte Carlo simulation are presented in Figure 4. These numerical curves were then compared with the experimental data obtained from [42] for a comprehensive evaluation. Additionally, Figure 5 provides an example of the crack pattern obtained from the numerical simulations, with cracked elements represented in red and uncracked elements in blue. Upon examining the graph presented in Figure 4, several significant observations emerge. Firstly, the peak loads of the numerical curves are consistently lower than that of the experimental curve. This aligns with the findings reported in [12]. A depiction of the Monte Carlo (MC) outcome for 100 samples is presented in Figure 6, further supporting this observation. As explained in detail in [12], this difference can be attributed to the fact that the proposed model primarily focuses on macrocrack propagation rather than on the localization process, which is responsible for the peak load. Therefore, for a valid comparison, it is important to consider the behavior of the descending branch of the curves, which represents macrocrack propagation. Moreover, for simplification purposes, the notch tip was represented numerically as a line. In contrast, the actual DCB specimen featured a notch tip thickness of 0.5 mm, as depicted in Figure 3. This discrepancy leads to higher stress concentrations at the numerical front tip compared with the experimental values. Additionally, it is noteworthy that, after a small notch opening, the experimental curve aligns within the range of the numerical curves. This observation concurs with the findings detailed in [12]. Consequently, based on these outcomes, it can be inferred that Equation (8) is validated for this specific high-strength concrete and, by extension, for other usual concretes.
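The adequacy of 30 Monte Carlo samples can be screened with a running-mean check on a scalar output such as the peak load. The sketch below uses invented, normally distributed stand-in values, so the numbers mean nothing physically; only the procedure (cumulative mean and its relative drift) is the point.

```python
import numpy as np

def running_mean_drift(peak_loads):
    """Running mean of a scalar response (e.g., peak load) across MC runs,
    and its relative change, to judge when adding samples stops mattering."""
    means = np.cumsum(peak_loads) / np.arange(1, len(peak_loads) + 1)
    drift = np.abs(np.diff(means)) / means[:-1]
    return means, drift

rng = np.random.default_rng(7)
peaks = rng.normal(100.0, 8.0, size=100)   # invented stand-in for 100 MC runs
means, drift = running_mean_drift(peaks)
print(f"mean after 30 runs: {means[29]:.2f}")
print(f"max relative drift from run 30 to 100: {drift[29:].max():.4f}")
```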
Determination of the Mean Value of G_IC In Section 2, it was mentioned that, in order to calculate the standard deviation of G_IC, it is essential to know its mean value. However, as discussed in Section 3, obtaining an intrinsic value of G_IC through direct testing is challenging and requires large-scale fracture mechanics tests, which are time-consuming and costly. Therefore, it is crucial to develop a strategy for determining the intrinsic value of G_IC using simpler tests, establishing relationships between G_IC and other readily measurable mechanical characteristics of concrete. A recent work was performed to determine this intrinsic value of G_IC from the knowledge of the compressive strength, f_c, or the tensile splitting strength, f_ts [42]. From this work, the relations given in Equations (15) and (16) were proposed, in which the unit of G_IC is J/m^2 and f_ts and f_c are in MPa. It is important to recall that these relations were determined for concretes with 4 ≤ f_ts ≤ 6.5 MPa and 50 ≤ f_c ≤ 105 MPa. They can be considered valid only when the compressive and tensile splitting tests are conducted on cylindrical specimens with dimensions of 16 × 32 cm (standard tests). A direct link between toughness and compressive or tensile strength may be considered overly simplistic. However, this link is both possible and relevant due to the similar underlying physical mechanisms governing these mechanical characteristics. The transition from diffuse microcracking to localized macrocracking is responsible for the development of the compressive and tensile strengths. In the case of G_IC, it is associated with the existence of a process zone at the front tip of the macrocrack. The macrocrack can propagate only when the total dissipative energy in this process zone (the microcracked zone) has been reached, indicating a process of cracking localization and, therefore, macrocrack propagation. There are no inherent physical or mechanical limitations that restrict the applicability of Equations (15) and (16) to concretes with lower compressive and tensile strengths. However, it should be noted that these relations are not valid for fiber-reinforced concretes [30]. Additionally, considering that Equations (4)-(7) were established for concretes with a maximum aggregate diameter greater than or equal to 10 mm, and that Equations (15) and (16) were established for concretes with maximum aggregate diameters of 12 mm and 20 mm, the presented equations are satisfactorily applicable to concrete mixtures with maximum aggregate diameters between 10 mm and 20 mm. In the case of larger aggregates, it is necessary to verify their applicability.
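In code, such empirical relations are best wrapped with their validity checks. The sketch below enforces only what the text states (the strength ranges and the requirement of standard 16 × 32 cm cylinder tests); the closed forms of Equations (15) and (16) are not reproduced in this extraction, so `_eq15` and `_eq16` are deliberately left as placeholders to be filled in from [42].

```python
def _eq15(f_ts):
    raise NotImplementedError("insert the closed form of Eq. (15) from [42]")

def _eq16(f_c):
    raise NotImplementedError("insert the closed form of Eq. (16) from [42]")

def gic_from_strength(f_c=None, f_ts=None):
    """Guarded wrapper for the empirical relations (15)-(16); only the
    validity conditions stated in the text are enforced here. Strengths
    are in MPa, from standard tests on 16 x 32 cm cylinders; the returned
    G_IC would be in J/m^2."""
    if f_ts is not None:
        if not 4.0 <= f_ts <= 6.5:
            raise ValueError("Eq. (15) was established for 4 <= f_ts <= 6.5 MPa")
        return _eq15(f_ts)
    if f_c is not None:
        if not 50.0 <= f_c <= 105.0:
            raise ValueError("Eq. (16) was established for 50 <= f_c <= 105 MPa")
        return _eq16(f_c)
    raise ValueError("provide f_c or f_ts")
```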
Conclusions and Discussion In this study, our primary objective was to enhance the applicability of a semi-explicit macroscopic probabilistic model by devising strategies to estimate both the mean and the standard deviation of G_IC, which are parameters of the model. To this end, an equation was proposed to estimate the coefficient of variation of G_IC as a function of r_e and of the mean and standard deviation of the tensile strength. The methodology's validation involved the simulation of an experimental DCB test using a high-strength concrete with f_c = 105 MPa. The simulation confirmed the main assumption of the study, enabling an extension of the model's applicability to diverse concrete mixtures. Additionally, a strategy to estimate the mean value of G_IC was introduced, enabling the assessment of this value using more readily available data, such as the compressive strength, f_c, or the tensile splitting strength, f_ts. Thus, this paper offers an approach to an ongoing issue in the literature regarding the definition of material inputs for modeling concrete cracking while considering size effects and material heterogeneity. The numerical model employed in this study, based on finite element theory, is designed to analyze the propagation of macrocracks in concrete structures. It incorporates the random distribution of material properties over the mesh, accounting for crack propagation through energy dissipation. The random mechanical properties considered in the model are the tensile strength, f_t, and the mode I critical fracture energy, G_IC. The model assumes that the mean value of G_IC remains constant regardless of scale, while its standard deviation varies based on the volume of the mesh elements. In conclusion, this study extends the applicability of a 3D probabilistic semi-explicit cracking numerical model to concrete mixtures with compressive strengths below 130 MPa and largest aggregate diameters ranging between 10 mm and 20 mm. The findings contribute to describing macrocrack propagation in concrete elements. However, for effective application of the model to real concrete structures, further advancements are necessary, particularly in modeling steel rebars and the concrete/steel bond. Moreover, future research should focus on extending the model to simulate macrocrack propagation in fiber-reinforced concrete.

Figure 1. Detail of the geometry and of the loading conditions related to the DCB specimen.

Algorithm 1. Basic steps of the FEM program: (1) variables initialization; (2) read input data; (3) distribute the random tensile strength according to the Weibull distribution; (4) distribute the random critical fracture energy according to the lognormal distribution; (5) initialize the load step counter (istep = 0); (6) loop over the load steps, iterating within each step until the balance between external and internal forces is achieved.

Figure 2. An overview of the formulation of the 3D probabilistic macroscopic model for "semi-explicit" cracking of concrete: (a) illustrates the material heterogeneity; (b) shows the correlation between the degree of heterogeneity, volume effects, and the use of random mechanical property distributions; (c) presents the random distributions and the elementary behavior of energy dissipation during damage evolution; (d) demonstrates an example of the global behavior obtained using the Monte Carlo method.

Figure 3. 3D finite element mesh of the DCB specimen: frontal and 3D views.

Figure 4. Loading force versus notch opening displacement curves: numerical and experimental results.

Figure 5. Example of a numerical crack propagation pattern.

Table 1. Description of the mixtures used to determine Equation (2) and perform the validation.
Influence of Ultracapacitor and Plug-In Electric Vehicle for Frequency Regulation of Hybrid Power System Utilizing Artificial Gorilla Troops Optimizer Algorithm Introduction Load frequency control (LFC) is the task of confining frequency deviations within a specified range [1]. To keep the frequency within a reasonable range, a comprehensively planned power system must address this crucial issue associated with LFC [2]. The review of the literature shows that the LFC problem has been the subject of numerous prior analyses. The majority of current research on LFC focuses on the frequency regulation of a two-area interconnected power system. The effect of an electric vehicle on the LFC application is explained in [3]. In [4], the utilization of different distributed energy sources for the LFC application is explained. The application of the static synchronous series compensator (SSSC) and capacitive energy storage (CES) within LFC is explained in [5]. Surprisingly, few studies have analysed traditional energy sources, and even fewer have considered how distributed sources may affect the LFC strategy. A two-area LFC with a two-degree-of-freedom TID controller is explained in [6]. A robust conditional value at risk (CVaR) tuning method is proposed in [7] to make the day-ahead home energy management system (HEMS) protective against the uncertainty in solar power generation and energy price volatility. Arya [8] depicts the expansion from a two- to a five-area LFC. Once more, [9] explains the implementation of a few renewable-energy-based single-/multisource two-area interconnected systems. Next, [10] explains how energy storage technology is applied to a three-area LFC. The incorporation of renewable energy sources in a three-area power system, however, has not received considerable research attention [11]. Again, [2] shows the impact of ultracapacitors on LFC. In addition, the LFC problem has not yet taken the combined impact of ultracapacitors and plug-in electric vehicles (PEV) into account. As a result, a three-area power system is considered in this work, with diverse distributed energy sources such as wind generators, solar generators, fuel cells, microturbines, and diesel engine generators, along with a plug-in electric vehicle and an ultracapacitor, taken into consideration in each area. The LFC problem has been addressed by many controllers: the proportional integral and derivative families have been used for these issues [12], and researchers have also used type 1 and type 2 fuzzy PID controllers [13], tilt integral derivative controllers [14], cascade tilt-integral-tilt-derivative controllers [15], the (1 + PD)-PID cascade controller [16], etc. In contrast, the use of a fractional order tilt integral derivative (FOTID) controller for AGC applications has not been studied in the literature. One of the typical methods to resolve the LFC problem is the utilization of an evolutionary algorithm (EA). The ability to manage nonlinear functions is the major strength of EAs [17]. A few other observed EA applications are GA [18], PSO [19], the equilibrium optimization technique [20], artificial bee colony optimization [21], GWO [22], the cuckoo search algorithm [22], the adaptive cuckoo search algorithm [23], the bat algorithm [24], the water cycle algorithm [25], the African vulture optimization algorithm [26], the parasitism predation algorithm [27], the wild horse optimizer [28], the dingo optimization algorithm [29], etc.
EAs have been employed to successfully implement the LFC design. Although these methods offer strong performance, their rate of convergence is slow, and they commonly become stuck in local optima rather than reaching the global optimum. The GTO algorithm has been intensively used in numerous optimization problems [30,31]. It mathematically formulates the daily social relationships of gorillas and develops novel mechanisms for exploration and exploitation [32,33]. In view of these points, a novel approach has been made by including a UC and a PEV in each area of the AGC system. Again, for the load frequency regulation of the said hybrid power system, a GTO-tuned FOTID controller has also been designed. Research Gap and Contribution 2.1. Research Gap. The following are the research gaps identified by the literature review: (1) the effects of UC and PEV integration into the three-area AGC operation have seldom been studied; (2) according to the authors' knowledge, there is no implementation of a fractional order TID controller in LFC applications based on distributed power generation; (3) in the existing investigations, a comprehensive analysis considering several worthwhile scenarios has not been carried out. Proposed Hybrid Power System Figure 1 shows the line diagram of the proposed three-area power system, and Figure 2(a) shows the detailed structure of the said hybrid three-area power system, whose areas are interconnected with each other. The distributed energy sources (DER) are shown in Figure 2(b). For the three-area system mentioned earlier, several system parameters are shown in Table 1 [34]. Component Modelling of the Hybrid System (A) Thermal power system: for generating power in a thermal power plant, we use a turbine (G_T(s)), generator (G_PS(s)), governor (G_TG(s)), and reheater (G_RH(s)); the transfer functions (TF) of these blocks, including the turbine with its generation rate constraint (GRC), are given in [6]. (B) Hydropower plant modelling: the hydropower plant's key components are mainly a hydraulic governor (G_GH(s)) and a hydroturbine (G_HT(s)), whose TFs are given in [13]. (C) Wind turbine generator (WTG) system: the TF of the WTG can be defined as [17] G_WTG(s) = K_WTG/(1 + s·T_WTG) (6). (D) Photovoltaic (PV) system: this system comprises a panel, an MPPT charge controller, a boost converter, and one filter circuit; its TF can be defined as [17] G_PV(s) = K_PV/(1 + s·T_PV) (7). (E) Microturbine generator (MTG) system: the MTG, usually referred to as a miniature turbine, can produce both heat and power, and is modelled as [22] G_MTG(s) = K_MTG/(1 + s·T_MTG) (8). (F) Fuel cell (FC) system: the FC is an integral part owing to its increased production and lower pollution, expressed as [22] G_FC(s) = K_FC/(1 + s·T_FC) (9). (G) Diesel engine generator (DEG) system: the DEG can deliver dependable power whenever and wherever it is needed, which can be expressed as [26] G_DEG(s) = K_DEG/(1 + s·T_DEG) (10). (H) Hydro aqua electrolyzer (HAE) system: in a typical operation, the HAE is utilised to produce hydrogen (H2) by electrolyzing water with electricity, the hydrogen then being compressed and stored in a tank; it can be expressed as [30] G_HAE(s) = K_HAE/(1 + s·T_HAE) (11). (I) Power system and load modelling: the power system and load can be modelled by a first-order TF, as in the power system block of Eq. (2), restated as Eq. (12). (J) Plug-in electric vehicle (PEV): a PEV is a vehicle with an externally rechargeable battery of maximum capacity of about 4 kilowatt-hours; its contribution is modelled as [26] ΔP_PEV(s) = [K_PEV/(1 + s·T_PEV)]·ΔF_i(s) (13). (K) Ultracapacitor (UC): a UC has a high value of capacitance compared with an ordinary electrolytic capacitor. The many attractive attributes of the UC, such as its small size and its ability to store large amounts of energy, make it suitable for an improved AGC in an interconnected power system. Mathematically, the UC is modelled by Equation (14).
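All of the DER blocks above share the same first-order-lag form, so their step behaviour is easy to probe numerically. The sketch below builds a few of them with scipy and checks that each settles at its static gain; the gain and time-constant values are illustrative assumptions, not the Table 1 data.

```python
import numpy as np
from scipy.signal import TransferFunction, step

def first_order(K, T):
    """Generic first-order DER block K/(1 + sT)."""
    return TransferFunction([K], [T, 1.0])

# Illustrative gains and time constants (seconds), not the Table 1 values.
blocks = {
    "WTG": first_order(1.0, 1.5),
    "PV":  first_order(1.0, 1.8),
    "FC":  first_order(0.01, 4.0),
    "DEG": first_order(0.003, 2.0),
}

t = np.linspace(0.0, 20.0, 400)
for name, tf in blocks.items():
    _, y = step(tf, T=t)
    print(f"{name}: step response at t = 20 s -> {y[-1]:.4f} (settles at the static gain)")
```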
Fractional Order Tilt Integral Derivative Controller As shown in Figure 3, the FOTID controller structure is similar to that of a FOPID controller, with a tilt term plus a fractional order integrator and differentiator. The area control error (ACE) is the input to the controller, and the output is ΔP_C(s); the controller law is written via the equations given in [35], where ΔF_i denotes the change in frequency and ΔPtie_ik the tie-line power deviation. The tie-line power deviation between areas 1 and 2 is given by

ΔPtie_12 = 2πT_12 (∫ΔF_1 dt - ∫ΔF_2 dt),

where T_12 is the synchronizing power coefficient and ΔF_1 and ΔF_2 are the incremental frequency changes of areas 1 and 2, respectively. For controller tuning, an optimization problem can be formulated as: Minimize J (Equation (18)), subject to bounds on the controller parameters, where K_P, K_I, and K_D stand for the proportional, integral, and derivative gains of the controller, each constrained between its minimum (Min) and maximum (Max) values. The tilt coefficient n is kept between 1 and 50, while the fractional orders λ and μ are bounded between -2 and +2.
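A direct way to inspect such a controller is through its frequency response, which requires no fractional-calculus machinery when evaluated pointwise at s = jω. The sketch below assumes the standard FOTID law C(s) = K_t·s^(-1/n) + K_i·s^(-λ) + K_d·s^(μ) (the paper's exact equation is not reproduced above) and adds an ISE-type cost of the kind commonly minimised in Eq. (18); all gains, orders, and the cost composition are assumptions for illustration.

```python
import numpy as np

def fotid_response(w, Kt, n, Ki, lam, Kd, mu):
    """Pointwise frequency response of the assumed FOTID law
    C(s) = Kt*s**(-1/n) + Ki*s**(-lam) + Kd*s**mu, at s = j*w."""
    s = 1j * w
    return Kt * s ** (-1.0 / n) + Ki * s ** (-lam) + Kd * s ** mu

def ise_cost(t, dF1, dF2, dF3, dPtie):
    """ISE-type index assumed for Eq. (18): integral of the sum of squared
    frequency and tie-line deviations, via the trapezoidal rule."""
    g = dF1**2 + dF2**2 + dF3**2 + dPtie**2
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))

w = np.logspace(-2, 2, 5)                      # rad/s
C = fotid_response(w, Kt=0.8, n=3.0, Ki=0.5, lam=0.9, Kd=0.3, mu=0.7)
for wi, ci in zip(w, C):
    print(f"w={wi:8.3f} rad/s  |C|={abs(ci):8.3f}  phase={np.angle(ci, deg=True):7.2f} deg")

# Toy cost evaluation on decaying oscillations standing in for simulated responses.
t = np.linspace(0.0, 30.0, 3000)
dF = np.exp(-0.3 * t) * np.sin(2.0 * t)
print("J =", ise_cost(t, dF, 0.8 * dF, 0.6 * dF, 0.2 * dF))
```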
Artificial Gorilla Troops Optimizer (GTO)

The collective activities of gorillas inspired the development of an intelligent algorithm, namely the GTO. It requires only a few parameters to be tuned for obtaining the global solution, which makes it simple to implement in engineering applications. The three important parts of the GTO, namely initialization, exploration, and exploitation, are based on different strategies of gorillas, which include movement to an unknown area, migrating to known locations, moving to other gorillas, following the decisions of the silverback, and competing for adult female gorillas. Once the initialization phase is over, the exploration phase depends on three behaviors: migrating to an unknown area, migrating to identified locations, and moving to other gorillas. Similarly, the exploitation phase in the GTO is designed by employing two behaviors of gorillas [29], as shown in Figure 4. The three phases of the GTO are described as follows.

5.1. Initialization Phase. The position of the n-th gorilla is defined as X_n, n = 1, …, N, where N is the number of gorillas present in the D-dimensional search space. The position vector of the population can be written as X = (X_1, X_2, ⋯, X_N).

5.2. Exploration Phase. At each stage, all N gorillas are considered as candidate solutions and the best solution is taken to be the silverback. Migration to unknown locations enhances exploration in the GTO, whereas the balance between exploitation and exploration is obtained by following the strategy of moving to other gorillas; migrating to an identified position implies a diverse optimization search space. Based on these three strategies, the exploration phase is mathematically formulated in terms of the following quantities: it represents the current iteration; X_n(it) is the current position vector of the n-th gorilla; G_n(it+1) is the candidate gorilla position in the next iteration; r_1, r_2, r_3, and r_4 are random values ranging from 0 to 1; and X_A(it) and X_B(it) represent randomly selected position vectors at the it-th iteration. The parameter a is also a random number between 0 and 1. The variables C, P, and Q are computed from the cosine function, the random number r_5 (ranging from 0 to 1), and the iteration counter, where it_max represents the maximum number of iterations of the optimization algorithm. The candidate solution G_n(it+1) is evaluated for all N gorillas. After the completion of an exploration phase, the fitness functions obtained from G_n(it+1) and X_n(it) are compared. If F(G_n(it+1)) < F(X_n(it)), then the fitness of G_n(it+1) is better than that of X_n(it); hence, G_n(it+1) replaces the original vector X_n(it). The optimal solution obtained from the above computation is referred to as the silverback, i.e., X_silverback.

5.3. Exploitation Phase. This phase is based on two strategies: following the silverback and competition for adult females. Let z be the constant parameter which decides the switch between these two strategies. The silverback gorilla's decision is followed if C ≥ z, where X_silverback is the best solution obtained so far and the parameter M is calculated from the mean of all candidate positions in the current iteration [29]. The second strategy is chosen if C < z. The behavior of young gorillas competing violently over the adult female gorillas is represented in equations (26a), (26b), and (26c), where I signifies the impact force, r_6 is a random value, j represents the violence intensity, φ is a constant, and r_7 is a random value between 0 and 1. After the completion of the exploitation phase, the fitness functions are evaluated again. If F(G_n(it+1)) < F(X_n(it)), G_n(it+1) replaces the original vector X_n(it). The best solution found is referred to as X_silverback.

Result and Discussion

6.1. Implementation of the GTO Algorithm. By running the simulation and using Eq. (18) as the objective function of the hybrid power system, the FOTID, TID, PIDF, and PID regulator parameters are obtained; the resulting regulator parameters are shown in Table 2. It can be concluded from Table 2 that, in comparison to the GTO-based TID with UC and PEV, the GTO-based TID with UC, the GTO-based TID, and a standard GTO-based PID, the improvement in J with the GTO-based FOTID under the effect of UC and PEV is 3.47%, 17.32%, 32.91%, and 50.07%, respectively. This supports the usage of the suggested methodology.
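Because the display equations for these update rules were lost in extraction, the following condensed Python sketch illustrates the GTO loop as described above (initialization, three-strategy exploration, two-strategy exploitation). It follows the structure of [29] with simplified operators; the control parameters p, z, and beta are illustrative assumptions, and the Schwefel benchmark from the convergence study reported below serves as a demo objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def schwefel(x):
    """Schwefel benchmark: f(x) = 418.9829*d - sum(x_i * sin(sqrt(|x_i|)))."""
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def gto(f, dim, lb, ub, n_gorillas=30, max_it=300, p=0.03, z=0.5, beta=3.0):
    X = rng.uniform(lb, ub, (n_gorillas, dim))        # initialization phase
    fit = np.apply_along_axis(f, 1, X)

    def greedy(G):
        # Keep a candidate only when its fitness improves on the current one.
        fG = np.apply_along_axis(f, 1, G)
        better = fG < fit
        X[better], fit[better] = G[better], fG[better]

    for it in range(1, max_it + 1):
        C = (np.cos(2 * rng.random()) + 1) * (1 - it / max_it)
        L = C * rng.uniform(-1, 1)
        G = np.empty_like(X)

        # --- exploration: three movement strategies ---
        for n in range(n_gorillas):
            Xr = X[rng.integers(n_gorillas)]          # a random troop member
            r = rng.random()
            if r < p:                                 # migrate to an unknown place
                G[n] = rng.uniform(lb, ub, dim)
            elif r >= 0.5:                            # move towards other gorillas
                G[n] = (rng.random() - C) * Xr + L * rng.uniform(-C, C) * X[n]
            else:                                     # migrate to a known place
                G[n] = X[n] - L * (L + rng.random()) * (X[n] - Xr)
        greedy(np.clip(G, lb, ub))

        # --- exploitation: follow silverback or compete for adult females ---
        silverback = X[fit.argmin()].copy()
        M = np.abs(X.mean(axis=0))                    # mean-magnitude term
        for n in range(n_gorillas):
            if C >= z:                                # follow the silverback
                G[n] = L * M * (X[n] - silverback) + X[n]
            else:                                     # violent competition
                Q = 2 * rng.random() - 1              # impact direction
                A = beta * rng.standard_normal()      # violence intensity
                G[n] = silverback - (silverback - X[n]) * Q * A
        greedy(np.clip(G, lb, ub))

    return X[fit.argmin()], fit.min()

best_x, best_f = gto(schwefel, dim=10, lb=-500.0, ub=500.0)
print(f"best Schwefel value found: {best_f:.2f}")
```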
The convergence characteristics of the GTO algorithm and of other existing algorithms, such as the grey wolf optimizer (GWO) and the whale optimization algorithm (WOA), on the Schwefel multimodal standard benchmark function are shown in Figure 5. The suggested GTO algorithm performs significantly better than the previous algorithms, which supports the use of the GTO approach. The following disturbances are now considered in the three-area power system.

6.2. Condition 1: Wind and Solar Disturbances in Areas 1 and 2, Respectively. Both areas of the system were exposed to randomly varying loading patterns, as shown in Figures 6(a) and 6(b), to demonstrate the effectiveness of the suggested regulator against variations in electrical power demand. These signals were generated randomly by considering a particular disturbance. The simulation is performed with the nominal parameters shown in Table 1. Under the preceding disturbance, the three-area power system's response is depicted in Figures 7(a)-7(c). The proposed GTO-based FOTID regulator for the UC- and PEV-based hybrid power system exhibits stable operation under dynamically varying wind and solar patterns, as shown in Figure 7.

6.3. Condition 2: Area 1 Is Disturbed by Wind Disturbance. At a later stage, a wind disturbance in area 1 (Figure 6(a)) is applied to test the proposed UC- and PEV-based three-area hybrid power system. The frequency responses of areas 1 and 2 (ΔF_1 and ΔF_2) and the change of tie-line power (ΔPtie_ij) under this disturbance with the various proposed controllers are shown in Figures 8(a)-8(c). It is readily observed that the proposed GTO-based FOTID regulator for the said UC- and PEV-based hybrid power system clearly outperforms the other approaches. Using the parameter variations listed in Table 3, a further robustness examination is carried out [37]. Figures 10(a)-10(c) illustrate how the RES sources were varied during this analysis; only small frequency discrepancies can be noted, which indicates the robustness and superior behavior of the suggested method.

Conclusions and Future Work

This study demonstrates the application of an ultracapacitor, a plug-in electric vehicle, and a FOTID controller for frequency regulation in a three-area hybrid power system tuned using the GTO algorithm. The comparison chart demonstrates that the performance index value of the system with the GTO algorithm declines rapidly in comparison to the existing algorithms, which justifies the use of the suggested technique. Further, the FOTID controller parameters are designed for the UC- and PEV-based power system for frequency regulation using the GTO technique. The simulation output shows that the application of a GTO-based FOTID regulator for a UC- and PEV-based hybrid power system is more successful in controlling the system frequency than PID and TID regulators. Future work on the distributed system may focus on testing several other sources with numerous other controllers and new algorithms.

Figure 2: (a) Structure of the three-area hybrid power system and (b) distributed energy sources (DER).
6.4. Condition 3: Area 2 Is Disturbed by Solar Disturbance. The penetration of solar energy in area 2 varies at that precise moment, as depicted in Figure 6(b). Figures 9(a)-9(c) show the response in areas 2 and 3 and the tie-line power change for the three-area system (ΔPtie_23) in response to the same incident. It can be said that using the GTO-based FOTID controller with UC and PEV significantly reduces the oscillation of the system after a perturbation.

Table 3: Parameters for the analysis.

(Figure legends, panels (a) ΔF of area 1 and (b) ΔF of area 3: GTO-based TID; GTO-based TID with UC; GTO-based TID with UC & PEV; proposed GTO-based FOTID with UC & PEV.)
2023-11-22T16:12:11.319Z
2023-11-20T00:00:00.000
{ "year": 2023, "sha1": "36f85f76b43e5f053cef208d9d0e5bdf10733cc7", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijer/2023/6689709.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8004e99f6456b48515e1644d7cfb565278bdf97e", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
226286786
pes2o/s2orc
v3-fos-license
Diversity of SCCmec elements and spa types in South African Staphylococcus aureus mecA-positive blood culture isolates Background The prevalence of Staphylococcus aureus varies depending on the healthcare facility, region and country. To understand its genetic diversity, transmission, dissemination, epidemiology and evolution in a particular geographical location, it is important to understand the similarities and variations in the population being studied. This can be achieved by using various molecular characterisation techniques. This study aimed to provide detailed molecular characterisation of South African mecA-positive S. aureus blood culture isolates by describing the SCCmec types, spa types and to lesser extent, the sequence types obtained from two consecutive national surveillance studies. Methods S. aureus blood culture isolates from a national laboratory-based and enhanced surveillance programme were identified and antimicrobial susceptibility testing was performed using automated systems. A real-time PCR assay confirmed the presence of the methicillin-resistance determinant, mecA. Conventional PCR assays were used to identify the SCCmec type and spa type, which was subsequently analysed using the Ridom StaphType™ software. Multilocus sequence typing was performed on selected isolates using conventional methods. MRSA clones were defined by their sequence type (ST), SCCmec type and spa type. Results A detailed description of findings is reported in this manuscript. SCCmec type III predominated overall followed by type IV. A total of 71 different spa types and 24 novel spa types were observed. Spa type t037 was the most common and predominated throughout followed by t1257. Isolates were multidrug resistant; isolates belonging to all SCCmec types were resistant to most of the antibiotics with the exception of type I; isolates with spa type t045 showed resistance to all antibiotics except vancomycin. The most diverse SCCmec-spa type complex was composed of the SCCmec type IV element and 53 different spa types. Conclusion Although ST data was limited, thereby limiting the number of clones that could be identified, the circulating clones were relatively diverse. Introduction Staphylococcus aureus bacteraemia is an important cause of morbidity and mortality in both healthcare-associated (HA) and community-associated (CA) infections worldwide [1,2]. S. aureus is responsible for an extensive range of human diseases, including bloodstream infections, pneumonia, endocarditis, food poisoning, toxic shock syndrome, skin and soft tissue infections, and bone and joint infections [3,4]. The prevalence of S. aureus varies depending on the healthcare facility, region and country. Furthermore, the prevalence of methicillinsusceptible S. aureus (MSSA) and methicillin-resistant S. aureus (MRSA) may also differ. In order to understand the genetic diversity, transmission, dissemination, epidemiology and evolution of MSSA and MRSA clones in a particular geographical location, it is important to acquire knowledge on the similarities and variations in the population being studied. This is not only important for epidemiological surveys but also for infection prevention and control policies [5]. This can be achieved by employing the use of various molecular characterisation techniques [2]. Reliable molecular techniques that have been used for typing S. 
aureus include Pulsed-field Gel Electrophoresis (PFGE), Multilocus Sequence Typing (MLST), Staphylococcal protein A (spa) typing and Staphylococcal Cassette Chromosome mec (SCCmec) typing [2,6]. PFGE is based on the DNA banding pattern obtained after digesting the bacterial genome with a restriction enzyme [7]. MLST and its clustering algorithm, Based Upon Related Sequence Types (BURST), classify isolates according to nucleotide variations in seven housekeeping/reference genes (loci) [5]. These genes are sequenced and a unique allele number is assigned using an online programme specific to the MLST scheme. A combination of the allele numbers (i.e. the allelic profile) produces a particular sequence type (ST) for a bacterial strain. Those with similar STs are grouped together in a single clonal complex (CC) [6,8]. Spa typing sequences the S. aureus-specific staphylococcal protein A (spa) gene, whose product is one of the virulence factors on the surface of the organism that prevents phagocytosis by the immune system [9]. Spa typing and its clustering algorithm, Based Upon Repeat Pattern (BURP), are based on the sequencing of a polymorphic 24 bp region of the spa gene. This is a variable-number tandem repeat (VNTR) sequence within the 3′ coding region [4]. The repeat regions are assigned a numerical code and the spa type is determined by the order of specific repeats [3]. Studies have shown that spa typing produces results that are notably comparable with those of MLST [6,10]. Owing to lower implementation costs and the fact that only a single locus needs to be sequenced, spa typing has been shown to be more efficient, and results are consistent across different settings, specimen types and patient ages [6]. Therefore spa typing has been shown to be appropriate for use in evolutionary and macro-epidemiology studies [4,6,11,12]. However, as recombination events in a single locus can distort clonal relationships, there is the question of how a method that sequences only a single locus can be used for macro-epidemiology studies [13]. SCCmec typing classifies SCCmec elements according to their structural differences [5]. It involves the typing of the staphylococcal cassette chromosome mec, which is a mobile genetic element that harbours the methicillin-resistance determinant gene. This element is genetically diverse, with many types, subtypes and variants being reported [14]. The molecular organisation of the cassette is complex, but it can be broken down into three structural components: i) the cassette chromosome recombinase (ccr) gene complex, ii) the mec gene complex and iii) the joining (J) regions [15,16]. The ccr gene complex encodes site-specific recombinases for the excision and insertion of the element into the chromosome [14,16,17]. This complex therefore affords the SCCmec element mobility and thus facilitates its transfer to other staphylococcal species [16]. The mec complex confers methicillin resistance as it consists of the mec gene, its regulatory genes (the mecI and mecR1 genes) and various insertion sequences [14,18]. A combination of both the ccr gene complex and the mec gene class is used to assign the specific SCCmec type. Thirteen SCCmec types (I-XIII) have been defined in MRSA based on complete sequence data [17, 19-21]; International Working Group on the Staphylococcal Cassette Chromosome elements (IWG-SCC, 2015; available online: http://www.sccmec.org).
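To illustrate the allelic-profile idea described above, here is a small Python sketch. The seven loci are the standard S. aureus MLST genes, but the profile-to-ST table below is hypothetical; a real analysis queries the PubMLST database.

```python
# Each isolate's seven housekeeping-gene allele numbers form an allelic
# profile; identical profiles map to the same sequence type (ST).
SAUREUS_LOCI = ["arcC", "aroE", "glpF", "gmk", "pta", "tpi", "yqiL"]

# Hypothetical profile-to-ST table (illustrative only, not real PubMLST data).
ST_TABLE = {
    (1, 4, 1, 4, 12, 1, 10): "ST-A",
    (2, 2, 2, 2, 3, 3, 2): "ST-B",
}

def sequence_type(profile):
    """Look up the ST for a 7-allele profile; unseen profiles are novel."""
    return ST_TABLE.get(tuple(profile), "novel ST (submit to curator)")

isolate = dict(zip(SAUREUS_LOCI, [1, 4, 1, 4, 12, 1, 10]))
print(sequence_type(isolate.values()))  # -> ST-A
```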
Although we have previously described the MRSA population in South Africa [22][23][24][25], a detailed description of the SCCmec types and spa types is lacking. This study therefore reports on the various clones present in our MRSA study population by SCCmec and spa type combinations (SCCmec-spa type complexes). Moreover, although MLST data was lacking for the majority of our sample population, the predominating circulating clones (ST-SCCmec-spa type) based on the most common spa types were described. Bacterial strains and phenotypic methods A case of S. aureus bacteraemia was defined as the isolation of S. aureus from a blood culture. Blood culture isolates, which formed part of the GERMS-SA laboratory-based and enhanced antimicrobial resistance surveillance studies from sentinel centres in South Africa were submitted and participation was voluntary. The first was a two-year laboratory-based surveillance study (June 2010 to July 2012); sites represented 13 sentinel centres from the Gauteng, KwaZulu-Natal, Free State and Western Cape provinces. The second was an enhanced surveillance study (August 2012 to December 2017); sites represented five sentinel centres from six large academic hospitals from the Gauteng and the Western Cape provinces. A 21-day exclusion period was applied to avoid duplicate isolates of the organism from the same patient. In total, 5820 viable isolates [MSSA (n = 3801) and MRSA (n = 2019)] were submitted on Dorset transport media (Diagnostic Media Products (DMP), National Health Laboratory Service (NHLS), Johannesburg, South Africa). Each isolate was plated onto a 5% blood agar plate (DMP, NHLS, Johannesburg, South Africa) followed by organism identification and antimicrobial susceptibility testing using automated systems. Organism identification was done using VITEK® II (bioMèrieux, France) or MALDI-TOF MS (Microflex, Bruker Daltonics, MA, USA) and antimicrobial susceptibility testing (AST) was done using the MicroScan Walkaway system (Gram-positive panel PM33) (Siemens, Sacramento, CA, USA). Interpretation of susceptibility was performed according to the Clinical and Laboratory Standards Institute (CLSI) guidelines [26]. Bacterial cells were lysed at 95°C for 25 min and the DNA was extracted and used in the genotypic assays. Polymerase chain reaction (PCR) screening for mecA in MRSA isolates The LightCycler 480 II (Roche Applied Science) instrument was used for the real-time PCR of mecA and nuc, which were amplified in a multiplex assay using the LightCycler 480 Probes Master kit (Roche Diagnostics, IN, USA) with previously published primers and probes [27]. SCCmec typing All 2019 mecA-positive MRSA isolates were typed by a multiplex PCR assay using the Qiagen Multiplex PCR kit (Qiagen, Germany) and previously published primers [28]. Multilocus sequence typing (MLST) Multilocus Sequence Typing was performed on 48 isolates, which were selected randomly based on the most common spa-types. Primers [29] amplifying seven reference genes were used. Amplification was done using the Amplitaq Gold DNA Polymerase kit (Applied Biosystems, CA, USA). Purified PCR products were sequenced (Inqaba Biotech, South Africa). Sequences were assembled using the CLC Bio main workbench (Qiagen, Germany) and analysed using the online database (https://pubmlst.org/saureus/). SCCmec typing The distribution of SCCmec types per year in 2019 mecA-positive isolates is seen in Fig. 1. 
SCCmec type III predominated every year, followed by type IV, with the exception of 2011 where the opposite was seen. Type II was seen in multiple isolates throughout the study period, and sporadic cases of types V and VI were noted from 2011 onwards. Only two cases of type I were seen, in 2014 and 2015. A number of unknown types were noted from 2010 to 2017. We subsequently further investigated a proportion (n = 52) of the unknown types from 2013 to 2016 and found that the majority of the isolates were interpreted as type I-like, type II-like and type III-like [30]. The distribution of SCCmec types per province per year is seen in Fig. 2. Type IV predominated in KwaZulu-Natal whereas type III predominated in the remaining three provinces. All six SCCmec types, including unknown types, were observed in the Gauteng and Western Cape provinces. Antibiotic non-susceptible phenotypes were examined and the distribution of SCCmec types per non-susceptible phenotype is seen in Table 1. Isolates belonging to all SCCmec types were resistant to most of the antibiotics with the exception of type I. All isolates were susceptible to vancomycin. Type III predominated in azithromycin-, erythromycin-, oxacillin-, cefoxitin-, penicillin-, trimethoprim/sulfamethoxazole-, daptomycin-, tetracycline-, ciprofloxacin-, levofloxacin-, moxifloxacin- and gentamicin-non-susceptible isolates. Type II predominated in clindamycin-non-susceptible isolates and type IV predominated in rifampicin-non-susceptible isolates. Spa typing. Spa typing was performed on 1467 isolates; the remaining 552 isolates from the period 2010 to 2012 do not have spa types assigned. A total of 71 different spa types and 24 novel spa types were observed. Five isolates were untypable even upon repeat processing. Table 2 shows the distribution of predominating spa types over the seven-and-a-half-year period. Spa type t037 was the most common and predominated throughout, followed by t1257. Spa types t012, t045 and t064 were also constantly present over this time period. Spa type t4864 was seen only in 2014, t1467 was seen only in 2015, t718 was seen only in 2016 and t5691 emerged in 2017. The remaining spa types were seen in small numbers and not consistently throughout the seven-and-a-half-year period. Table 3 shows the variation of spa types over the seven-and-a-half-year period. The largest number of spa types was seen in 2011, and the largest number of novel spa types occurred in 2014, which also showed a high variation in the number of different spa types observed. No novel spa types were found in 2013. The Gauteng province showed the most variation with 44 different spa types and 14 novel spa types, followed by the Western Cape (n = 40 and n = 14, respectively). In KwaZulu-Natal 12 different spa types were seen and in the Free State eight different spa types were observed. One novel spa type was found in each of the KwaZulu-Natal and Free State provinces, but these spa types differed from each other. Only t012, t045, t064 and t1257 were observed in all four provinces; t037 was seen in all provinces except KwaZulu-Natal and t1971 was seen in all provinces except the Free State; t9061 was seen only in the Free State, and t13165, t1555, t4268 and t951 were seen only in KwaZulu-Natal. Two spa types (t209 and t2293) were found in the Gauteng and Free State provinces, which also had one novel spa type. Three spa types (t148, t451 and t891) were found in the Gauteng and KwaZulu-Natal provinces, which also had one novel spa type.
Nine spa types (t008, t018, t021, t022, t032, t1443, t1476, t304, t718) and two novel spa types were observed in Gauteng and the Western Cape provinces. Twenty-four different spa types (t10304, t105, t1096, t1107, t118, t127, t174, t186, t1943, t272, t2724, t355, t421, t4410, t463, t4833, t4864, t5961, t701, t729, t7962, t840, t913 and t932) and 10 novel spa types were seen in Gauteng alone. Twenty-two different spa types (t015, t0121, t0379, t059, t11775, t1467, t1774, t1813, t223, t230, t238, t2409, t2526, t294, t324, t432, t498, t5483, t578, t6330, t6931 and t8636) and 10 novel spa types were seen in the Western Cape alone. Antibiotic non-susceptible phenotypes were examined and the distribution of spa types representing majority of the isolates is seen in Table 4. One isolate belonging to spa type t10304 was non-susceptible to penicillin only (data not shown in table). All three isolates typed as t0379 displayed the same phenotypic profile and were non-susceptible to the fluoroquinolones and beta-lactam antibiotics only (data not shown in table). All four (15/1467, 1%). The remaining spa types within this group individually represented less than 1%. This group consisted of 55 different spa types and 18 novel spa types. Two isolates were untypeable. The predominating spa type in isolates from paediatric patients was t037 (446/1467, 30.4%) followed by t045 (115/1467, 7.8%) and t1257 (53/1467, 3.6%). The remaining spa types within this group individually represented less than 1%. This group consisted of 32 different spa types and 10 novel spa types. Three isolates were untypeable. The following spa types were seen in isolates from adult patients only: t008, t0121, t018, t021, t0379, t059, t1175, t118, t1467, t174, t1774, t1813, t2029, t223, t2293, t230, t2409, t2526, t294, t304, t324, t379, t432, t4410, t463, t4864, t578, t6931, t701, t729, t7962, t840, t8636, t9061 and t913. There were 14 novel spa types in this group. The following spa types were seen in isolates from paediatric patients only: t10304, t1096, t127, t13165, t1555, t186, t1943, t272, t355, t4286, t498, t5483, t6330 and t932; six novel spa types were observed in this group. The predominating spa types in isolates obtained from male and female patients were very similar. Furthermore, the predominating spa type could not be correctly established from isolates obtained from patients that died versus those that recovered or were discharged due to the majority of cases having unknown data. The same is applicable for diagnosis. SCCmec and spa types complexes The SCCmec-spa type combinations are referred to as complexes. A total of 1467 SCCmec-spa type complexs were obtained. The five isolates that were not typeable for spa type were excluded from the analysis; SCCmec types for each of these varied (SCCmec II, III, IV, V and unknown type). The most diverse complex was composed of the SCCmec type IV element and 53 different spa types. Next were the isolates with unknown SCCmec type; these were associated with 28 different spa types. SCCmec type III was associated with 24 different spa types and SCCmec type II was associated with 20 different spa types. There were smaller numbers of SCCmec type I, V and VI isolates and predominance was therefore inconsequential; the isolates varied with regard to spa type. The SCCmec-spa type combinations constituting the complexes are shown in Table 5. 
Discussion This study is a detailed description of the molecular characterisation of MRSA isolates with specific focus on SCCmec types and spa types and, to a lesser extent, sequence types. It is important to have a genetic understanding of the circulating strains in a geographical region to establish genetic diversity, transmission, dissemination, epidemiology and evolution. Antimicrobial susceptibility profiles were also reported; apart from using antimicrobial susceptibility results for treatment regimens, antimicrobial susceptibility profiles are also important in identifying a link to specific genotypes, which could potentially identify virulence patterns. Antimicrobial selection may potentially also be a key factor in the dissemination of predominating MRSA clones within a hospital environment [31]. SCCmec type III was the most predominant SCCmec type followed by type IV. Type III was also the most frequent SCCmec type in studies in Iran [32,33], Serbia [34], Brazil [35] and Europe [36]. The most prevalent t015, t186 (n = 1, 50%, each). A 2017 Chinese study on 120 MRSA isolates showed differences to the current study; 100 % of their spa type t037 isolates were resistant to clindamycin, erythromycin, ciprofloxacin, gentamicin, tetracycline and trimethoprim/sulfamethoxazole whereas only 6% of our isolates were resistant to clindamycin and 45 to 47% were resistant to the remaining antibiotics. However, in keeping with the study from China, none of our t037 isolates were resistant to rifampin and vancomycin (Table 4) [39]. Another Chinese study with 106 t037 isolates showed predominant resistance to clindamycin, erythromycin, ciprofloxacin, gentamicin, tetracycline, trimethoprim/sulfamethoxazole and chloramphenicol [40]. Of six Nigerian t037 isolates, all were resistant to clindamycin, erythromycin, ciprofloxacin, gentamicin, tetracycline and trimethoprim/sulfamethoxazole in addition to penicillin, oxacillin and moxifloxacin [41]; in the current study, almost 50% (47-48%) of the t037 isolates were resistant to penicillin, oxacillin and moxifloxacin. The study of circulating clones and clonal evolution is important because it is used to assess the relationship between clonal types, disease symptoms, antibiotic choice and clinical outcomes [42]. Clones are bacterial strains that have descended from a common ancestor and through point mutations, recombination, acquisition and deletion of mobile genetic elements they diversify resulting in wide-ranging genotypes and phenotypes [43]. In order to establish circulating clones and clonal evolution, multiple molecular tools should be employed; the combination of ST, SCCmec type and spa type would ideally be preferred. However, as MLST is more costly, we were not able to perform this technique on all isolates. Studies have shown that SCCmec typing is not a very discriminatory method and that spa typing alone was not able to clearly predict ST or PFGE type but when combined with BURP analysis producing spa CCs, it is sufficient for describing the clonal structure of S. aureus [6,10]. Although useful, it should be noted that spa typing takes only one gene into consideration in relation to the entire genome and therefore does not reflect mutational events occurring throughout the genome [5]. Nevertheless, spa typing is extremely useful and we have coupled it with SCCmec typing and sequence typing to a lesser extent, to provide information on the circulating S. aureus strains in our population. A review manuscript by Asadollahi et. 
al., in 2018 [5] showed that from five African studies, t037 was most associated with SCCmec type III (106 isolates) and least associated with type V (one isolate). Our study showed similar findings; t037 was mostly associated with SCCmec type III (656 isolates) and least associated with type V (one isolate). In another study of German, French, Japanese and Finnish isolates in 2007, majority of the t037 isolates (n = 8) were also associated with SCCmec type III [44]. This was also seen in seven isolates from a 2014 Iranian study but two t037 isolates were also associated with SCCmec type IV and one was associated with SCCmec type I [37]. The Asadollahi et. al., review manuscript further showed that t037 was associated with ST239 and t064 was associated with ST8 [5]. In our study, t037 was mainly associated with ST239 but one isolate was associated with ST36. The isolates belonging to t064 were mainly associated with ST612 and one isolate was associated with ST36. The review further showed that t032 was always associated with ST22 irrespective of the continent in which it was observed; one of the t032 isolates in our study also showed this finding whereas the second t032 isolate was associated with ST4122. Both ST22 and ST4121 belong to MLST CC22. As MLST was only performed on a few selected isolates, the results could have potentially differed if ST data was available for more isolates. Other publications have used ST and the SCCmec element to define clonal types [45,46]. In the current study, the Brazilian/Hungarian clone (ST239-MRSA-III) accounted for eight out of the 48 (17%) isolates typed. This is also a common MRSA strain in New Zealand, where the most common associated spa type is t037. Alternative clone names include EMRSA-1, EMRSA-4, EMRSA-11, Por/Bra, Vienna, AUS-2 EMRSA and AUS-3 EMRSA) (http://esr.cri.nz/assets/HEALTH-CONTENT/ Images-and-PDFs/MRSAdescriptions.pdf), [45]. Of the eight isolates in the current study, six were spa type t037. This clone has also been observed in Finland, Germany, Greece, Ireland, Netherlands, Poland, Portugal, Slovenia, Sweden, United Kingdom and the United States of America [45]. Another common MRSA strain in New Zealand is ST22-MRSA-IV (EMRSA-15, Barnim) (http://esr.cri.nz/assets/ HEALTH-CONTENT/Images-and-PDFs/MRSAdescriptions.pdf), most associated with spa types t032, t1401 and t5501. In our study, two of the three isolates were t032 and the remaining one being t012. This clone has also been seen in Germany Ireland, Sweden and the United Kingdom [45]. Strain ST-36-MRSA-II (EMRSA-16) also common in New Zealand and most associated with t018 (http://esr.cri.nz/assets/ HEALTH-CONTENT/Images-and-PDFs/MRSAdescriptions.pdf) was also seen in seven isolates in the current study; however none were associated with spa type t018, five were t012 and one each for t037 and t064. This clone was also seen in Finland and the United Kingdom [45]. Another clonal type observed in our study included ST5-MRSA-III (n = 1) which is a Belgian clone [45]. As spa typing was not done on all MRSA isolates and as MLST was only performed on a few selected isolates we could not confidently establish the circulating clones that are representative of entire surveillance population. We therefore cannot comment on the evolution of MRSA clones in our setting. 
However, although ST data was available for 48 isolates only (which also had spa and SCCmec type data), the circulating clones are relatively diverse and if the ST was omitted and only SCCmec and spa types considered, the diversity of the circulating strains increases. Nevertheless, of the 48 clones we have observed taking ST, SCCmec type and spa type into consideration, the most common were ST612-IV-t064 (n = 8), ST612-IV-t1257 (n = 6), ST239-III-t037 (n = 6) and ST36-II-t012. Multiple introductions of ST612 was observed in Western Australia in both human and equine reservoirs [47]. ST612 was also recently observed in the clone ST612-CC8-t1257-SCCmec_ IVd(2B) obtained from the poultry food chain in South Africa [48]. In addition to the studies mentioned above, the Brazilian/Hungarian clone ST239-III-t037 was commonly found over a 15 year period in a study in China beginning in 1994 [40]. The presence of this clone was also observed in various continents [49]. Therefore, this clone is very well established globally. It has been shown that the transformation from a MRSA clone to a MSSA clone can occur through the excision of the SCCmec element and consequently the loss of methicillin resistance. Therefore, it is possible for a clone to evolve from MSSA into MRSA through the acquisition of the SCCmec element or from MRSA to MSSA through the excision of the SCCmec element [50]. Molecular typing is extremely useful in studying genetic diversity and a study on a collection of isolates from 19 countries in Europe, the United Kingdom, The United States and Latin America has shown that MRSA and MSSA differ with regards to the diversity of their genetic backgrounds as MSSA has shown to be more diverse [10]. A limitation of the current study is that molecular typing was performed on MRSA isolates only; results for MSSA is therefore lacking and we cannot make any remarks on this matter. To add to genetic diversity, clones responsible for causing HA infections and CA infections may differ and the recombination between HA and CA clones does occur [50]. A detailed investigation taking into consideration aspects like virulence factors such as surface proteins, invasins, biochemical properties, membrane-damaging toxins, exotoxins e.g. Panton-Valentine Leukocidin (PVL), biofilm production, antimicrobial resistance genes and clinical syndromes [42,43,50,51] would be beneficial. Conclusion This study reports a large dataset of isolates collected from various provinces in South Africa from 2010 to 2017. A variety of spa types were observed in this study; this is in keeping with other reports showing the presence of multiple spa types in the MRSA population. Moreover, data from Africa is not abundant. It is evident that MRSA clones are diverse; they disseminate both rapidly and efficiently and it is important to understand why particular clones dominate in a specific geographical location in order to develop effective strategies to control the spread of S. aureus infections.
2020-11-10T15:18:29.785Z
2020-11-10T00:00:00.000
{ "year": 2020, "sha1": "ca7bca82ecac7b00f605b223ed7bc6540b76b1f2", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-020-05547-w", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca7bca82ecac7b00f605b223ed7bc6540b76b1f2", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
53722548
pes2o/s2orc
v3-fos-license
Fully Automated Delineation of Gross Tumor Volume for Head and Neck Cancer on PET-CT Using Deep Learning: A Dual-Center Study Purpose In this study, we proposed an automated deep learning (DL) method for head and neck cancer (HNC) gross tumor volume (GTV) contouring on positron emission tomography-computed tomography (PET-CT) images. Materials and Methods PET-CT images were collected from 22 newly diagnosed HNC patients, of whom 17 (Database 1) and 5 (Database 2) were from two centers, respectively. An oncologist and a radiologist decided the gold standard of GTV manually by consensus. We developed a deep convolutional neural network (DCNN) and trained the network based on the two-dimensional PET-CT images and the gold standard of GTV in the training dataset. We did two experiments: Experiment 1, with Database 1 only, and Experiment 2, with both Databases 1 and 2. In both Experiment 1 and Experiment 2, we evaluated the proposed method using a leave-one-out cross-validation strategy. We compared the median results in Experiment 2 (GTVa) with the performance of other methods in the literature and with the gold standard (GTVm). Results A tumor segmentation task for a patient on coregistered PET-CT images took less than one minute. The dice similarity coefficient (DSC) of the proposed method in Experiment 1 and Experiment 2 was 0.481∼0.872 and 0.482∼0.868, respectively. The DSC of GTVa was better than that in previous studies. A high correlation was found between GTVa and GTVm (R = 0.99, P < 0.001). The median volume difference (%) between GTVm and GTVa was 10.9%. The median values of DSC, sensitivity, and precision of GTVa were 0.785, 0.764, and 0.789, respectively. Conclusion A fully automatic GTV contouring method for HNC based on DCNN and PET-CT from dual centers has been successfully proposed with high accuracy and efficiency. Our proposed method is of help to clinicians in HNC management. Introduction Head and neck cancer (HNC) is a type of cancer originating from the tissues and organs of the head and neck, with high incidence in Southern China [1]. Radiation therapy (RT) is one of the most effective therapies, and it relies heavily on the contouring of tumor volumes on medical images. However, it is time-consuming to delineate the tumor volumes manually. Besides, manual delineation is subjective, and the accuracy depends on the experience of the treatment planner. Compared to manual delineation, automatic segmentation can be relatively objective. Nowadays, there have been studies reporting the automatic segmentation of tumor lesions on magnetic resonance images of HNC using different methods [2][3][4][5][6][7][8][9][10]. Positron emission tomography-computed tomography (PET-CT) has played an important role in the diagnosis and treatment of HNC, providing both anatomical and metabolic information about the tumor. The automatic or semiautomatic segmentation of tumor lesions on PET-CT or PET images of HNC has been reported, using machine-learning (ML) methods such as k-nearest neighbor (KNN) [11,12], Markov random fields (EM-MRFs) [13], adaptive random walker with k-means (AK-RW) [14], a decision tree algorithm [15], and active surface modeling [16]. The segmentation of tumor lesions on coregistered PET and CT images has shown better results than that on solely PET or CT images [17,18]. However, the application of PET-CT has increased the amount and the complexity (multimodality) of the image data.
Also, to propose a robust and practical ML-based automatic segmentation method, it is often necessary to train and test the method with heterogeneous image data from multiple centers [19], which makes the training and testing of ML systems more challenging. Compared to traditional ML methods, deep learning (DL) allows the features to be extracted automatically instead of the subjective feature extraction and selection of conventional ML techniques, which may be more appropriate for automatic segmentation with multimodality and multicenter data. DL can easily recognize the intrinsic features of the data [20]. DL techniques, such as the stacked denoising autoencoder (SDAE) [21] and the convolutional neural network (CNN) [22][23][24], have been used in tumor segmentation successfully with improved accuracy. No studies have been reported applying a deep convolutional neural network (DCNN) to automatic GTV delineation for HNC patients on PET-CT images. In our study, we proposed an automatic method of GTV delineation for RT planning of HNC based on DL and dual-center PET-CT images, aiming to improve efficiency and accuracy. Materials and Methods In brief, our methodology included the contouring of the gold standard, training and testing of the DL model, and evaluating the performance of our trained model. After reviewing the MRI, CT, and PET images, an oncologist and a radiologist decided the contouring of GTV by consensus, which was treated as the gold standard in the following training and testing of our method. We developed a deep convolutional neural network (DCNN) for HNC tumor lesion segmentation, and then we trained the network based on the PET-CT images and the gold standard of GTV in the training dataset. In the testing step, we input the testing dataset to the network, and it automatically contoured the GTV. To test the accuracy of this automated method, we compared the results of our method with those of other methods in the literature and with the gold standard. Structure of Our DCNN Model. Inspired by the fully convolutional network [25] and U-net [26], we designed a DCNN model for GTV delineation. The structure of our proposed DCNN model is shown in Figure 1. This network consisted of two stages: a feature representation phase and a scores map reconstruction phase. Feature Representation Phase. The main purpose of the feature representation phase was to extract the feature information of the PET and CT images by combining the low-level features to represent high-level features with semantic information. The feature representation phase contained 5 downsampling blocks, 4 convolution (conv) layers, and 4 rectified linear unit (ReLU) layers (Figure 1). A downsampling block included a convolution layer, an ReLU layer, and a pooling (pool) layer. The first convolution layer extracted the low-level features of the PET and CT images, respectively, by filters of 5 × 5 voxels and fused them together. We were able to fuse the features because the PET and CT images were input simultaneously with the same gold standard. In the next 4 convolution layers, we applied convolutions for the permutation and combination of the low-level features to obtain more high-level features with semantic information. In all 5 downsampling blocks, the convolution layers were followed by a pooling layer. We applied pooling with 2 × 2 filters and 2 strides, which decreased the length and width of the feature map by 50%.
Thus, it could reduce the number of connection parameters and the computational time and provided position invariance and more global information. The use of unaltered filters on a smaller image may contribute to larger local receptive fields, and these enlarged local receptive fields could extract more global features. After each convolution layer, we used an ReLU layer as an activation layer to increase the nonlinearity of our network and to accelerate the convergence. The length and width of the feature maps were reduced by 50% after a downsampling block. After the feature map size was reduced to 16 × 16, it was then connected with a convolution layer with 16 × 16 filters. This means that every neuron in the following layer was connected with all the neurons in the previous layer, to imitate the fully connected layer of a traditional classification network. The size of the feature maps was 1 × 1 pixel after this convolution layer. Then, we used 2 convolution layers with 1 × 1 filters for the permutation and combination of these features to obtain more abstract information. The finally acquired 1 × 1 scores maps were used as the input to the scores map reconstruction phase. Scores Map Reconstruction Phase. The main purpose of the scores map reconstruction phase was to reconstruct the scores map into the same size as the input images by upsampling. This reconstruction phase consisted of 5 upsampling blocks, a convolution layer, and an ReLU layer. An upsampling block was composed of a deconvolution (deconv) layer, a concatenation (concat) layer, a convolution layer, and an ReLU layer. The deconvolution layer was designed for upsampling. The first deconvolution layer reconstructed the 1 × 1 scores map to 32 × 32 by 32 × 32 filters. However, we found that deconvolution would cause the loss of high-resolution information in the images. To overcome this problem, we utilized the concatenation layer to fuse the feature maps of the previous pooling layers or convolution layers with the current feature maps of the deconvolution layer. We believed that these skip-layer designs could capture more multiscale contextual information and improve the accuracy of segmentation. To fuse the low- and high-resolution information pixel by pixel, we set the filters of all the following convolution layers at 1 × 1. With all the upsampling blocks, we finally reconstructed the scores maps to an output image with a size of 512 × 512, the same as the input PET or CT images. In order to optimize the network, we estimated the loss by calculating the Euclidean distance between the gold standard and the reconstructed tumor lesions [27,28]. Then, the parameters of the network were iterated and renewed by backpropagation from the loss. In our experiment, we decided to use the Euclidean distance to estimate the loss because it had shown better performance than the cross-entropy loss used in other studies such as that of Ronneberger et al. [26]. PET-CT scans in both centers were from the top of the skull to the shoulder. The acquisition time of PET for each bed position was 2.5 minutes. The patients from center 1 were scanned with Discovery STE (GE Healthcare, Milwaukee, USA); the spatial resolution and image matrix of most CT images were 0.49 × 0.49 × 2.5 mm³ and 512 × 512 × 63, respectively, while the spatial resolution and image matrix of the PET images were 1.56 × 1.56 × 3.27 mm³ and 256 × 256 × 47, respectively.
The PET scan in center 1 was acquired in 3-dimensional mode and reconstructed using the ordered-subset expectation maximization iterative algorithm. The patients from center 2 were scanned with Discovery 690 PET-CT scanners (GE Healthcare, Milwaukee, USA); the spatial resolution and image matrix of the CT images were 0.59 × 0.59 × 3.27 mm³ and 512 × 512 × 47, respectively, while the spatial resolution and image matrix of the PET images were 1.17 × 1.17 × 3.27 mm³ and 256 × 256 × 47, respectively. In center 2, the PET scan was acquired in 3-dimensional mode and reconstructed using the VPFXS reconstruction method. Training of the DCNN Model. To make use of the information of both the PET image and the CT image, we performed coregistration of PET to CT images by sampling the PET images using linear interpolation in SPM8 (Wellcome Department of Imaging Neuroscience, London, United Kingdom). Finally, we had 934 samples (one sample includes one slice of the CT image and one coregistered slice of the PET image, both with a matrix size of 512 × 512) for the 17 patients from center 1 as Database 1 and 200 samples for the 5 patients from center 2 as Database 2. The primary GTVs were manually outlined by an experienced radiologist and double-checked by an experienced oncologist on the registered PET/CT with reference to the MRI, PET, and CT images using the ITK-SNAP software (http://www.itksnap.org) [29]. The resultant GTV contouring was used as the gold standard in the training and testing of our proposed model and for the comparisons with the automatic segmentation in terms of volume and geometrical overlap. Specifically, we discarded the images in which the tumor size was smaller than 0.5 cm² (in the 2-dimensional images) by considering the partial-volume effect (PVE) in the PET image, as suggested by the radiologist. PVE can affect the imaging accuracy of small tumor lesions whenever the tumor size is less than 3 times the full width at half maximum (FWHM) of the reconstructed image resolution [30]. We performed two experiments with our data. In Experiment 1, we evaluated the proposed method using only the data in Database 1, with a leave-one-out cross-validation (LOOCV) strategy: the images of one patient were left for testing and the images of all other patients were used for training. To balance the positive and negative samples in the training dataset, we selected all the slices with tumor lesions as positive samples and randomly selected the same number of slices without tumor lesions as negative samples. To satisfy the need for huge training data in DL, we augmented the training dataset to nearly 15,000 samples by rotating the images, horizontal mirroring, changing the contrast, and image scaling. In Experiment 2, we used the two databases (1134 samples), augmented the training dataset to nearly 18,000 samples, and evaluated the method using the LOOCV strategy similarly. Before training and testing in both Experiments 1 and 2, all data were normalized by performing min-max normalization. Network Training. The training of the whole network was composed of three stages. In the first stage, we obtained an output image after the third upsampling block (Figure 1), and the size of the output image was 128 × 128. In the second stage, which was initialized with the network parameters of the first stage, a 256 × 256 scores map was obtained. Finally, the whole network was trained based on the network parameters of the second stage, and the scores maps were used to reconstruct an output image with a size of 512 × 512 (the same as the size of the input PET or CT images). The model was trained using an Adam optimizer for 200,000 iterations with a fixed learning rate of 0.00001. We used an NVIDIA GeForce GTX 1080 Ti GPU on an Intel Xeon E5-2650 2.30 GHz × 16 machine and the DL framework Keras for training [31]. The whole training procedure took about 24 hours.
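To make the block structure and training configuration above concrete, here is a minimal Keras sketch. It is a simplified, U-net-style illustration of the described downsampling and skip-connected upsampling blocks, not the authors' released code; the filter counts and the two-level depth are assumptions, while the Adam optimizer, the 0.00001 learning rate, and the Euclidean-style loss (approximated here by mean squared error) follow the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, filters, kernel=5):
    # Convolution + ReLU, then 2x2 pooling with stride 2 halves the map size.
    x = layers.Conv2D(filters, kernel, padding="same", activation="relu")(x)
    skip = x                                  # kept for the skip connection
    x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    return x, skip

def up_block(x, skip, filters):
    # Deconvolution doubles the map size; concatenation fuses high-resolution
    # features from the feature-representation phase.
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    return x

pet = layers.Input((512, 512, 1))
ct = layers.Input((512, 512, 1))
x = layers.Concatenate()([pet, ct])           # early fusion of PET and CT
x, s1 = down_block(x, 32)
x, s2 = down_block(x, 64)
x = up_block(x, s2, 64)
x = up_block(x, s1, 32)
out = layers.Conv2D(1, 1, activation="sigmoid")(x)

model = tf.keras.Model([pet, ct], out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="mse")                     # stand-in for the Euclidean loss
model.summary()
```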
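As a concrete reading of the metric definitions above, the following NumPy sketch computes DSC, sensitivity, precision, and a voxel-count volume difference from a pair of binary masks; the toy masks are illustrative only.

```python
import numpy as np

def overlap_metrics(auto_mask, gold_mask):
    """DSC = 2TP/(2TP+FP+FN), sensitivity = TP/(TP+FN), precision = TP/(TP+FP)."""
    a, g = auto_mask.astype(bool), gold_mask.astype(bool)
    tp = np.logical_and(a, g).sum()      # correctly identified tumor area
    fp = np.logical_and(a, ~g).sum()     # normal tissue called tumor
    fn = np.logical_and(~a, g).sum()     # tumor called normal tissue
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "volume_diff_voxels": int(a.sum()) - int(g.sum()),  # GTVa - GTVm
    }

# Toy example: two overlapping squares standing in for GTVa and GTVm.
gold = np.zeros((64, 64), bool); gold[20:40, 20:40] = True
auto = np.zeros((64, 64), bool); auto[24:44, 24:44] = True
print(overlap_metrics(auto, gold))
```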
Comparison with Other Methods in the Literature. The results of previous studies on HNC segmentation based on PET-CT are shown in Table 1. The mean DSC of our method in Experiment 2 for 22 patients was 0.736. Stefano et al. [14] achieved a high DSC of 0.848; however, their method was on PET images only and was semiautomatic. Discussion We proposed an HNC automated GTV contouring method based on DL and PET-CT images, with encouraging segmentation results. Most of the studies on HNC delineation were based on PET images only [14][15][16][17], in which the anatomical information was insufficient due to the low spatial resolution compared to CT or MRI [13]. Yang et al. [13] achieved similar segmentation accuracy (DSC = 0.740); however, their method was based on three modalities (PET, CT, and MRI). The methods of Stefano et al. [14] and Song et al. [17] were both semiautomatic. Berthon et al. [15] reported a higher accuracy of 0.77; however, their gold standard for performance evaluation incorporated the information of automatic segmentation results. Compared to these studies [13][14][15][16][17] with data from one center only, our proposed method shows stable performance on dual-center data. To summarize, our proposed method has shown relatively high accuracy and is fully automatic, making use of both the metabolic and anatomic information. In both Experiment 1 and Experiment 2, the performance was high and stable for Database 1. This may suggest that the proposed DCNN model was effective and robust. Note that in Experiment 2, the DSC was higher than that in Experiment 1. This may be because, with more samples, more features can be learned by our DCNN model and thus the segmentation accuracy could be improved. However, we also observed in Experiment 2 that the accuracy for Database 2 was lower than that for Database 1. The reason may be that the features were somewhat different between these two databases; the features learned from Databases 1 and 2, mainly from Database 1, were probably not suitable enough to be applied to Database 2. Note that with only 22 patients, we already achieved such good performance of automatic contouring. However, we may recruit more data to further verify the robustness of our model. The image features are critical in machine-learning-based segmentation tasks. We used multimodality images, namely PET and CT images, as the input of our DCNN model, and this may improve the segmentation compared with PET or CT images alone. This finding echoed the results reported in the studies of Song et al. [17] and Bagci et al. [18]. As shown in Figure 2, since the metabolism differs significantly between tumor regions and normal tissues, the contrast of the tumor region to the adjacent tissues is high; thus, the location of the tumor is easily detected in PET images. However, the spatial resolution of PET images is low; thus, the tumor boundary is unclear in PET images. In CT images with higher spatial resolution, the anatomical information is more sufficient for detecting the boundary of tumors. By using both PET and CT images, our method extracted and combined both metabolic and anatomical information as efficient features for more accurate segmentation. The DL technique we used to extract the features has shown more advantages than traditional machine-learning methods (Table 1). As shown in Figure 5, the region marked by the blue circle with high metabolism was actually an inflammation region, which looks very similar to the tumor lesions.
Inexperienced clinicians may incorrectly consider this region as a tumor lesion, while our trained model was able to learn the difference between these inflammation regions and the tumor lesions and correctly recognized this as a nontumor region. Such an example showed that our DCNN method can extract the intrinsic features of tumor lesions and finally achieve better GTV contouring results. Besides, we used a skip-layer architecture for the fusion of the feature maps of the feature representation phase and the scores map reconstruction phase, which can be another technical improvement in our method. As shown in Figure 7, although the semantic information of the features in the feature representation phase was worse than that of the features in the scores map reconstruction phase, it could help fix the problem of information loss in the reconstruction procedure. Compared to the feature map fusion method used by Long et al. [25], our method successfully incorporated the more useful features during the process of feature fusion. We believe that this fusion improves the accuracy of segmentation by using the skip-layer architecture. The comparisons between GTVa and GTVm (Figure 6 and Table 2) indicated that GTVa was similar and close to GTVm. However, there were still some shortcomings in our automatic method. Firstly, the GTVa was unsatisfactory in some tumors. As shown in Figure 3, the tumor in the PET image was large, but the boundary was unclear; thus, part of the tumor was incorrectly identified as normal tissue. As shown in Figure 4, the low-metabolism region, which was within the region where tumor lesions were often seen in some other patients, was incorrectly detected as tumor lesions. As shown in Figure 6(a), two patients showed a large difference between GTVm and GTVa. The tumors of these two patients were large with extensive lymphatic metastasis. This kind of tumor was rare in our database; thus, our method failed to learn the features of these kinds of tumors. Secondly, we discarded the images in which the tumor size was smaller than 0.5 cm² (in the 2-dimensional images) because such tumor lesions were difficult to detect in PET images by visual assessment. In addition, the imaging accuracy of small tumor lesions could be affected by PVE. Thus, the performance of our method for such small tumors remains unclear. Our results may be improved in future studies in the following aspects. Firstly, more data should be recruited for training a better model and to test-retest the performance; especially, the data from different centers should be better balanced. Also, MRI images may be employed, as they provide better soft tissue contrast and may improve the performance. Secondly, in the training and testing, only 2-dimensional images were used and the volumetric information was abandoned. We would carefully improve the network architecture and also adjust the training parameters for better segmentation results. Finally, for successful application of our method in the radiotherapy of HNC, the automatic contouring of organs at risk should also be incorporated, and the clinical target volume (CTV) and planning target volume (PTV) should also be drawn. Conclusion In this study, we successfully proposed and verified a robust automated GTV segmentation method for HNC based on DCNN and dual-center PET-CT images.
The comparisons between GTVa and GTVm (Figure 6 and Table 2) indicated that GTVa was similar and close to GTVm. However, there were still some shortcomings in our automatic method. Firstly, the GTVa was unsatisfactory for some tumors. As shown in Figure 3, the tumor in the PET image was large, but its boundary was unclear; thus, part of the tumor was incorrectly identified as normal tissue. As shown in Figure 4, a low-metabolism region, which lay within the region where tumor lesions were often seen in other patients, was incorrectly detected as a tumor lesion. As shown in Figure 6(a), two patients showed a large difference between GTVm and GTVa. The tumors of these two patients were large, with extensive lymphatic metastasis. This kind of tumor was rare in our database; thus, our method failed to learn the features of such tumors. Secondly, we discarded the images in which the tumor size was smaller than 0.5 cm² (in the 2-dimensional images) because such tumor lesions were difficult to detect in PET images by visual assessment. In addition, the imaging accuracy of small tumor lesions could be affected by the PVE. Thus, the performance of our method for such small tumors remains unclear.

Our results may be improved in future studies in the following aspects. Firstly, more data should be recruited for training a better model and for test-retest evaluation of the performance; in particular, the data from different centers should be better balanced. Also, MRI images may be employed, as they provide better soft-tissue contrast and may improve the performance. Secondly, in the training and testing, only the 2-dimensional images were used and the volumetric information was discarded. We would carefully improve the network architecture and adjust the training parameters for better segmentation results. Finally, for successful application of our method in the radiotherapy of HNC, the automatic contouring of organs at risk should also be incorporated, and the clinical target volume (CTV) and planning target volume (PTV) should also be drawn.

Conclusion

In this study, we successfully proposed and verified a robust automated GTV segmentation method for HNC based on DCNN and dual-center PET-CT images. With multimodality images, both anatomic and metabolic features are extracted automatically and objectively, which contributes to the increased accuracy. The DL algorithm showed good potential in GTV segmentation. All these contributed to the high accuracy and efficiency of our method compared to manual contouring. Our method may be helpful in aiding clinicians in the radiotherapy of HNC; thus, it is of great potential in HNC patient management. Future studies may aim to further improve the segmentation accuracy with more training data and an optimized network structure, to draw the CTV/PTV, and to verify our method with data from multiple centers.

Data Availability

The authors do not have permission to share data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Beyond Islamophobia and Islamophilia as Western Epistemic Racisms: Revisiting Runnymede Trust's Definition in a World-History Context

The media have become obsessed with something called "Islam," which in their voguish lexicon has acquired only two meanings, both of them unacceptable and impoverishing. On the one hand, "Islam" represents the threat of a resurgent atavism, which suggests not only the menace of a return to the Middle Ages but the destruction of what Senator Daniel Patrick Moynihan calls the democratic order in the Western world. On the other hand, "Islam" is made to stand for a defensive counterresponse to this first image of Islam as threat, especially when, for geopolitical reasons, "good" Moslems like the Saudi Arabians or the Afghan Moslem "freedom fighters" against the Soviet Union are in question. … But rejection alone does not take one very far, since if we are to claim, as we must, that as a religion and as a civilization Islam does have a meaning very much beyond either of the two currently given it, we must first be able to provide something in the way of a space in which to speak of Islam. Those who wish either to rebut the standard anti-Islamic and anti-Arab rhetoric that dominates the media and liberal intellectual discourse, or to avoid the idealization of Islam (to say nothing of its sentimentalization), find themselves with scarcely a place to stand on, much less a place in which to move freely. (Said, 1980:488)

INTRODUCTION

It is now almost three decades since Edward Said penned the above pertinent words in his essay "Islam Through Western Eyes," published in The Nation soon after the publication of his Orientalism (1979). Apart from the not so insignificant political changes since that time - such as how the Afghan Moslem "freedom fighters" fighting the Soviet Union turned from Western powers' regional allies into their global sworn enemies in the new clothing of Al-Qaeda - the simplistic dichotomy of the two images of Islam in Western eyes as noted by Said has not drastically changed; perhaps it has only been further amplified. What the decades in between clearly illustrate, in fact, is how Western Islamophobia and Islamophilia are two sides of the same coin and how readily they can become one another in the ebb and flow of imperial global geopolitics.

Is it possible that what the West regards as its ultimate foes and friends in Islam today, manifesting its Islamophobic and Islamophilic tendencies, are both, at least partly, contradictory by-products of its own centuries-old and contemporary global imperial expeditions? Is it possible to regard both Islamophobia and Islamophilia as found today as by-products of two sides of the same phenomenon brought on by Western imperial policies pursued around the globe, especially during the post-WWII era? What remains an urgent project yet to be accomplished nearly three decades after Said penned his words is the carving out of what he called a "place to stand on, … a place in which to move freely" - here, free of readily hurled Islamophobic and Islamophilic charges - in order to peruse, among and in critical dialogue with other intellectual and spiritual world traditions, Islam's own genuine contributions to the task at hand of maneuvering away from and beyond the treacherous or caricatured landscapes of Islamophobia and Islamophilia.
Revisiting the definitional framework offered by The Runnymede Trust in 1997 for Islamophobia, in this paper I draw on and seek to critically contribute to a conceptual framework advanced by Grosfoguel and Mielants (2006) - as informed by the works of Grosfoguel (2002, 2006, 2007), Maldonaldo-Torres (2004, 2006), Dussel (1994, 2004), Mignolo (2000, 2006, 2007), and Tlostanova (2006), among others - to understand and help transcend Islamophobia in a world-history context. I will argue that both Islamophobia and Islamophilia should be regarded as forms of Western religious, cultural, orientalist, and epistemic racism that similarly other, oversimplify, essentialize, and distort our views of the 'really existing Islam' as a plural weltanschauung - one that, like any other, has historically produced contradictory interpretative, cultural, and socio-political trends involving liberatory and imperial/oppressive aspirations.

The essential thesis advanced here is that Islamophobia and Islamophilia, far from being Western reactions to an independently developing Islamic tradition, are direct by-products of how Western imperial (more recently, oil-based) geopolitics have helped overdevelop the static, oppressive and ultraconservative interpretations of Islam - which have often been in fact the breeding grounds of Islamic fundamentalisms and terrorisms - at the expense of marginalizing and misrepresenting its dynamic, liberatory and egalitarian interpretations as exemplified, for instance, by Sufism. I will argue that aspects of the Runnymede definition of Islamophobia represent Islamophilic tendencies that need rethinking and de/reconstruction. An alternative definitional framework for Islamophobia/Islamophilia will thereby be proposed.

In what follows I will first overview the definitional framework offered by the Runnymede Trust for Islamophobia. I will then summarize the conceptual framework advanced by Grosfoguel and Mielants, et al., regarding the nature of Islamophobia as a form of religious, cultural, orientalist, and epistemic racism that is not merely additive but constitutive of the "modern/colonial capitalist/patriarchal world-system." Then I will turn to a reexamination of the above conceptual framework, followed by a critical reexamination of the Runnymede Trust's definition of Islamophobia. An alternative definition of Islamophobia/Islamophilia is proposed in the process.

RUNNYMEDE TRUST'S DEFINITION OF ISLAMOPHOBIA

"Islamophobia" is a term that originated in the 1980s and gained wider use in response to then contemporary events, such as the Islamic Revolution in Iran in 1979, the advent of the Iran-Iraq war during the 1980-1988 period, the defeat of the Soviet aggression in Afghanistan by a fundamentalist religious movement aided by the U.S., the West, and their regional allies (such as Pakistan and Saudi Arabia), and, later, the fall of the Soviet Union and Eastern Bloc nations and the subsequent posing of Islam in global imperial politics as an alternative nemesis to the West.

The term came to be formally coined and defined in a report titled Islamophobia: A Challenge For Us All, 1 published in the United Kingdom in 1997 by the Runnymede Trust, which was founded in 1968 "with the stated aim of challenging racial discrimination, influencing legislation and promoting multi-ethnicity in the UK." 2

1, 5 "Islamophobia: A Challenge For Us All (Summary)." London, UK: Runnymede Trust, 1997, p. 2. The summary and full report may be obtained from The Runnymede Trust, Suite 106, London Fruit & Wool Exchange, Brushfield St, London E1 6EP, United Kingdom. The summary can be downloaded from http://en.wikipedia.org/wiki/Runnymede_Trust.
The report was researched and written by the then newly established (in 1996) multi-ethnic and multireligious Commission on British Muslims and Islamophobia, chaired by Professor Gordon Conway and composed of eighteen members. 3 Since the events of September 11, 2001 and the significant rise in biased and discriminatory policies and behaviors toward Islam and Moslems, the term has achieved much wider circulation.

The Runnymede report defined Islamophobia and "closed views of Islam" as follows:

1. Islam [is] seen as a single monolithic bloc, static and unresponsive to new realities.
2. Islam [is] seen as separate and other - (a) not having any aims or values in common with other cultures (b) not affected by them (c) not influencing them.
3. Islam [is] seen as inferior to the West - barbaric, irrational, primitive, sexist.
4. Islam [is] seen as violent, aggressive, threatening, supportive of terrorism, engaged in 'a clash of civilisations'.
5. Islam [is] seen as a political ideology, used for political or military advantage.
6. Criticisms made by Islam of 'the West' [are] rejected out of hand.
7. Hostility towards Islam [is] used to justify discriminatory practices towards Muslims and exclusion of Muslims from mainstream society.
8. Anti-Muslim hostility [is] accepted as natural and 'normal'. 4

In an editorial note to the collection of conference papers guest coedited by Grosfoguel and Mielants (2006), I noted that, while the definitional framework for Islamophobia as proposed by the Runnymede Trust does not imply its misuse as a vehicle for dismissing criticisms made of one or another Islamic belief or of Islam as a whole, opponents of the term have suggested that the term lends itself to silencing "legitimate" criticisms that one may raise against Islam or one or another of its varieties. 6 As a result, I noted, some have responded by accusing those who have warned against Islamophobia of being themselves tinted by various degrees of Islamophilia, 7 i.e., of lending uncritical support and wholesale admiration to Islam and blindly accepting its associated ideas and practices.

4 Ibid., p. 2.

6 In a letter published in 2006 in the French weekly newspaper Charlie Hebdo, warning against Islamic "totalitarianism" and signed by Salman Rushdie and several others, for instance, Islamophobia has been referred to as a "wretched concept that confuses criticism of Islam as a religion and stigmatization of those who believe in it" (for a full text of the letter see http://news.bbc.co.uk/2/hi/europe/4764730.stm). Ironically, the letter was published following the widespread global protests in the Islamic world against the publication of mocking and derogatory cartoons of the founder of Islam in Western media, purportedly as a mechanism to "test" the openness of Islam to criticism.

7 "Islamophilia is a controversial term (believed to have been first used by critic of Islam Daniel Pipes) employed by some journalists, media commentators and politicians to describe unwavering and uncritical admiration of Islam and used to counteract what many believe to be spurious accusations of Islamophobia. British journalist Julie Burchill also complained of a kind of 'mindless Islamophilia' that was 'considerably more dangerous' than Islamophobia owing to what she claimed was a whitewashing of Islamic history and its use as a way of stifling debate" (http://en.wikipedia.org/wiki/Islamophilia_(neologism)).

I concluded then that such criticisms of the term Islamophobia and its use, however, often fail to make a distinction between the definitional coordinates of the term itself as coined in the Runnymede Trust report and the misuse that the term (like any other term) may suffer in ideological and political debates. Clearly, I argued, the definition provided by the Runnymede Trust for Islamophobia does not exempt Islam or any of its variants from being subjected to criticism, nor does it limit the option, within a constructive dialogical framework, for those believing in and practicing Islam to present their responses to the criticisms launched against their views.
In light of the fact that the term "Islamophilia" has been used by those critical of the term "Islamophobia" in general, and of the definitional framework offered by the Runnymede Report in particular, to express their dissatisfaction with the term, for the purpose of further clarification and exploration I will return at the end of this paper to the controversy over the definitions of the term(s). For this purpose, let me first review the conceptual framework advanced by Grosfoguel and Mielants before proceeding further in a critical reexamination of the latter, followed by a critical reconsideration of the Runnymede definition.

ISLAMOPHOBIA AS WESTERN RELIGIOUS, CULTURAL, ORIENTALIST, AND EPISTEMIC RACISM

In their article titled "The Long-Durée Entanglement Between Islamophobia and Racism in the Modern/Colonial Capitalist/Patriarchal World-System" - an introduction to a collection of proceedings of an international conference on Islamophobia they co-organized in 2006 in Paris, France 8 - Ramón Grosfoguel and Eric Mielants proposed that Islamophobia is not a new, conjuncturally coincidental, or structurally epiphenomenal feature of the capitalist world-economy but one that has been a centrally constitutive element of the modern world for centuries, having taken a variety of forms entangled with religious, cultural, orientalist, and epistemic racism and modes of racial othering. They argued, in other words, that, while the term "Islamophobia" may be new in the recent historical context, its content and what it represents as racism and a practice of racial othering is not anything new when considered in the world-historical context of the emergence, development, and decline of the modern world-system. The novelty of the argument advanced was thereby in regard to both the exposition of the systemically constitutive role of Islamophobia in the making of the modern world and its world-historically evolving forms.

8 See "Othering Islam: Proceedings of the International Conference on 'The Post-September 11 New Ethnic/Racial Configurations in Europe and the United States: The Case of Islamophobia' (Maison des Sciences de l'Homme, Paris, France, June 2-3, 2006)." Human Architecture: Journal of the Sociology of Self-Knowledge V(1), 2006.

In order to better appreciate and further build upon the conceptual framework as advanced by Grosfoguel and Mielants, a more detailed consideration of their perspective is necessary here. Grosfoguel and Mielants' view on Islamophobia in a world-history context is one that follows a broader conceptual framework as advanced in Grosfoguel's earlier writings (Grosfoguel and Cervantes-Rodriguez 2002, Grosfoguel 2006, 2007). Central to this framework is the recognition that the modern world-system is not a unilogical world reducible to a singular economic motive (Wallerstein 1979; Hopkins and Wallerstein 1982) but a complex system of multiple, crisscrossing and overlapping, economic, political, and cultural hierarchical structures in which the latter two are not simply additive but also constitutive of the economic and the overall social structure. Culture and politics, in other words, contrary to the classical Marxist perspective still informing world-systems analysis, are not merely superstructural but also organically constitutive of the economic processes and vice versa, such that no a priori primacy of one factor over others could be established. 9
Moreover and similarly, the authors also insist that imperiality and coloniality are not a past and transient, but a continuing and structurally necessary, feature of the modern world, necessitating ever newer forms of what Hatem Bazian (2007) calls "organizing principles" of imperial rule, for which various modes of cultural, religious, gender, and racial subordination and stratification are continually reinvented and employed to maintain the systemic status quo. "Post-"coloniality, amid such a world-system constituted of overlapping and interconstitutive hierarchical structures, is thereby an illusion, one that merely helps to ideologically hide its essentially continuing imperial/colonial nature. In this regard, the close affinity of the authors' views with, and their indebtedness to, what Anibal Quijano has called the "coloniality of power" is evident (cf. Quijano 2000). Colonialism is not a matter of the past; coloniality is a continuing, ever renewing process essential to the workings and survival of the modern world-system.

For the above reasons, from this perspective it is not fruitful to characterize the modern world as simply "capitalist" but, at the cost of sounding awkwardly long, as a "modern/colonial capitalist/patriarchal world-system." Racial, gender, religious, and imperial/colonial hierarchies, in other words, are not to be seen as merely additive but, instead, as structurally constitutive building blocks of the capitalist system, necessary components that the system must continually produce and reproduce in order to maintain itself.

Using such a conceptual framework, it becomes possible for the authors to consider Islamophobia itself not simply as an epiphenomenal but as a constitutive element and "organizing principle" of the modern world, an element which has taken a variety of forms over the centuries and whose historical making can be traced to the origins of the world-system in the long sixteenth century, particularly marked historically by the events of the year 1492. In Mignolo's words, as quoted by the authors:

In this year, the Christian Spanish monarchy re-conquered Islamic Spain expelling Jews and Arabs from the Spanish peninsula while simultaneously 'discovering' the Americas and colonizing indigenous peoples. These 'internal' and 'external' conquests of territories and people not only created an international division of labor of core and periphery, but also constituted the internal and external imagined boundaries of Europe related to the global racial/ethnic hierarchy of the world-system, privileging populations of European origin over the rest. Jews and Arabs became the subaltern internal 'Others' within Europe, while indigenous people became the external 'Others' of Europe (Mignolo 2000). (cited in Grosfoguel and Mielants, 2006:2)

The authors then trace the "long-durée entanglement between Islamophobia and racism," noting how an originally religious difference between Christianity, Islam, and New World Indian indigenous culture became rearticulated into a racial difference and hierarchy whereby Moslems as a "people with the wrong God" and "New World" Indians as a "people without a God" (Maldonaldo-Torres, 2006) were separated from the Christian Europeans as "others" and inferiorized into the strata of respectively lower or non-human beings (Dussel 1994). It is this racial othering of Islam in religious form that then metamorphoses into cultural (following the secularizations of Western culture) and more specifically orientalist forms across the following centuries,
in terms of confronting a people without civilization - barbaric, exotic, sexist, and irrational - merging in subtler and covert forms with new cultural practices of racism in contemporary times, when the more overt biological rationalizations of racial stratification and domination could not hold legitimacy in the face of the onslaught of contemporary anti-colonial and civil rights movements. Islamophobia is simply a new word that expresses the latest "organizing principle" of a longstanding religious, cultural, and orientalist racism toward Islam as an alternative civilizational project. 10

If we regard capitalist patriarchal coloniality, religious and cultural racism, and orientalism not as additive but as overlapping and progressively narrowing concentric circles, it becomes clear why the further identification of Islamophobia as epistemic racism takes such a central role in Grosfoguel and Mielants' analysis of the significance of Islamophobia in maintaining the modern world. Islamophobia, in other words, is most fundamentally and generatively present in the foundations of Western epistemic architecture. A capitalist world-system without a drive to continually produce and reproduce Islamophobia in its epistemic foundations in one or another form would be inconceivable. The emphasis on epistemic racism in the authors' non-reductive sociological analytical framework allows them to highlight how such underlying epistemic constituents help maintain and reproduce orientalist, cultural, religious, and social/institutional forms of racism:

Epistemic racism leads to the Orientalization of Islam. This is crucial because Islamophobia as a form of racism is not exclusively a social phenomenon but also an epistemic question. Epistemic racism allows the West to not have to listen to the critical thinking produced by Islamic thinkers on Western global/imperial designs. The thinking coming from non-Western locations is not considered worthy of attention except to represent it as "uncivilized," "primitive," "barbarian," and "backward." Epistemic racism allows the West to unilaterally decide what is best for Muslim people today and obstruct any possibility for a serious inter-cultural dialogue. Islamophobia as a form of racism against Muslim people is not only manifested in the labor market, education, public sphere, global war against terrorism, or the global economy, but also in the epistemological battleground about the definition of the priorities of the world today. (Grosfoguel and Mielants, 2006:9)

The significance of the above realization is best captured in the authors' reference to what Enrique Dussel has characterized as the epistemic racism embedded in Descartes' "I think, therefore I am." In Dussel's words, it is the "I conquer, therefore I am" that implicitly contextualizes the Western mode of knowing based on "objective" rationality, whereby the correctness and truthfulness of Western epistemology is merely presumed as a universal fact, unlocated in and floating above the particular imperial/colonial historicities of time and geographies of space:

Dussel (1994), Latin American philosopher of liberation, reminds us, Descartes' ego-cogito ("I think, therefore I am") was preceded by 150 years of the ego-conquirus ("I conquer, therefore I am"). The God-eye view defended by Descartes transferred the attributes of the Christian God to Western men (the gender here is not accidental). But this was only possible from an Imperial Being, that is, from the panoptic gaze of someone who is at the center of the world
because he has conquered it…. What is the relevance of this epistemic discussion to Islamophobia? It is from Western hegemonic identity politics and epistemic privilege that the 'rest' of the epistemologies and cosmologies in the world are subalternized as myth, religion and folklore, and that the downgrading of any form of non-Western knowledge occurs. The former leads to epistemic racism, that is, the inferiorization and subalternization of non-Western knowledge, while the latter leads to Orientalism. It is also from this hegemonic epistemic location that Western thinkers produce Orientalism about Islam. The subalternization and inferiorization of Islam were not merely a downgrading of Islam as spirituality, but also as an epistemology. (Grosfoguel and Mielants, 2006:8)

The above theme was more or less further amplified in other contributions 11 in the volume for which the essay by Grosfoguel and Mielants served as an introduction. The latter closed their article by drawing attention to this important insight - as underlined by inspirations drawn from Tlostanova's contribution to the volume - that to counter Islamophobia it is not sufficient to oppose and expose it but to pose alternative, non-Islamophobic, and non-racist epistemic frameworks where alternative inclusive visions of a better world can be cross-culturally and cross-paradigmatically cultivated and practiced. They wrote:

… [I]n "Life in Samarkand" Madina Tlostanova provides us with insight into a potential way out of present dilemmas. Her study of cultural and ethnic hybrids in both Central Asia and the Caucasus, and the concurrent significance of Sufism in the region, in opposition to the binary logics imposed by both the Russian/Soviet Empire on the one hand and the capitalist world-system on the other hand, could very well be an alternative epistemology ignored for too long. (p.11)

11 "Islamophobia/Hispanophobia: The (Re)Configuration of the Racial Imperial/Colonial Matrix" (Mignolo 2006); "No Race to the Swift: Negotiating Racial Identity in Past and Present Eastern Europe" (Boatcã 2006); "How Washington's 'War on Terror' Became Everyone's: Islamophobia and the Impact of September 11 on the Political Terrain of South and Southeast Asia" (Noor 2006); "Militarization, Globalization, and Islamist Social Movements: How Today's Ideology of Islamophobia Fuels Militant Islam" (Reifer 2006); "Muslim Responses to Integration Demands in the Netherlands since 9/11" (Tayob); and "Life in Samarkand: Caucasus and Central Asia vis-à-vis Russia, the West, and Islam" (Tlostanova 2006).

To sum up, in Grosfoguel and Mielants' view, Islamophobia as a fear of the Islamic other is not new but is a structurally necessary and historically evolving phenomenon in the modern world-system that has taken various forms in entanglement with religious, cultural, orientalist, and epistemic racism. Its function has been to enable imperial rule over the Islamic other by justifications involving purported confrontations with a "people with the wrong god" or "people without a civilization" - barbaric, inferior, violent, exotic, sexist, and irrational - whose knowledge is not worthy of serious intellectual consideration.
ISLAMOPHOBIA AND ISLAMOPHILIA: THE JANUS FACES OF THE ORIENTALIST WORLD-SYSTEM

The conceptual framework as advanced by Grosfoguel and Mielants and briefly summarized above is fruitful in understanding the structural causes and evolving historical forms of Islamophobia in modern times. However, it is important to note three aspects of the perspective that need further reconsideration, clarification, and development.

First, it is important to note that just because a civilizational project has subjected another to imperial/colonial subjugation and racial inferiorization does not mean that the subjugated civilizational project itself was devoid of similar tendencies in the first place. The authors themselves write, for instance, "The 'imperial difference' after 1492 is the result of imperial relations between European empires versus Non-European Empires and we will characterize it here as the result of the 'imperial relation'" (p.3). Or, elsewhere they recognize that "the European Empires' relations with the Islamic Empires turned from an 'imperial relation' into a 'colonial relation' …" (p.3). In other words, it is always important not to forget that historical Islam itself was not exempt from having in it tendencies toward imperial and colonial conquest of others. And what do empires do?

Second, the authors themselves recognize historically regressive and oppressive tendencies that associate themselves with Islam. For instance, when considering the case of Tariq Ramadan as a European Muslim subjected to undue harassment and censorship by Western governments, the authors find it necessary to distinguish him as a "moderate reformist European Islamic thinker" who is "critical of Islamic fundamentalism, suicide bombers, lapidation against women, terrorism, etc." (p.9). In other words, here we have a recognition, again, that just because a civilizational project is subjected to imperial/colonial subjugation and oppression does not mean that the subjugated civilizational project is uniformly moderate or reactionary; rather, it contains contradictory and conflicting interpretations and practices of its seemingly singular and unifying ideological identity, as Islam is often taken to be.

Third, and in light of the above two points, it may be fruitful to consider the inter-imperial and inter-civilizational relation not as a simplified and zero-sum master-slave binary in which one side simply rules and subjugates the other, but in terms of how the imperial and oppressive tendencies (and, by the same token, subaltern and resistance movements) across the civilizational projects historically engage in complex modes not only of politico-military and economic but also of religious, cultural, aesthetic, and intellectual articulation over time, in order to preserve (or promote or transform) their hierarchical class, status, and power positions not only across but also within their own respective civilizational projects. Once we adopt this more complicated lens in exploring inter-civilizational relations, it becomes evident that the perpetuation of imperial and colonial rule and subjugation has often historically necessitated not a one-sided but a double-sided "stick and carrot" policy on the part of commonly interested dominant socio-political forces and tendencies across civilizational projects.
More specifically, a closer examination of the historical record will clearly indicate that the metamorphosis, across the centuries, of an originally religious difference into successive forms of imperial/colonial, religious, cultural, orientalist, and epistemic racism in Western eyes - most recently manifested in the terminological clothing of Islamophobia - cannot be easily separated from a parallel and also centrally constitutive process that may best be called Islamophilia. Islamophobia and Islamophilia in many ways represent the stick and carrot aspects of a singular imperial/colonial policy in the Western attitude toward historical Islam and its challenges to the West as both a complementary and alternative, though not necessarily antagonistic, civilizational project.

A. Broadening Our World-Historical Horizons

Before elaborating further on such a Janus-faced history of Western imperial attitudes toward Islam, it is important to step back and further expand the horizons of the world-historical framework used for understanding (and hopefully transcending) Islamophobia. For this purpose, I think it will help to draw upon a conceptual framework for understanding imperiality in a world-historical (and not just Western/modern) context that I recently advanced in Review, the journal of the Fernand Braudel Center (Tamdgidi 2006b).

Therein, I tried to tentatively illustrate, by way of advancing a nonreductive dialectical conception of the history of imperiality in contrast to materialist approaches, both the relative historical validity and the transitory (heuristic) nature of the primacy of economies and their analyses in world-historical social science. The dialecticity of the conception as proposed allows for politics, culture, and economy to have similarly played primary parts in the rise of distinct forms of imperiality in world history corresponding to ancient, medieval, and modern historical eras, across multiple, but increasingly synchronous and convergent, regional trajectories. The nonreductive dialectical mode of analysis reverses and relativizes the taken-for-granted universalistic modes of analysis of imperialism in terms of class, allowing for considerations of political domination, cultural conversion, and economic exploitation as historical forms of deepening imperial practice that violate self-determining modes of human organization and development. Power-, status-, and class-based relations and stratifications are thereby reinterpreted as distinct forms of imperial practice, which now assumes a substantively generative position vis-à-vis those structural forms. I argued that, given the non-synchronous tempo of emergence and development of various ancient civilizations, imperial expansions across civilizations also took place non-synchronously across the globe, adding significant complexity to the trajectory of development of each community in light of the more or less advanced states of development of populations in other regions with which they came in contact through imperial expansion. I further argued that three major forms of imperiality may be distinguished from one another during the long imperial era up to the present: political, cultural, and economic. To be sure, all empires and imperial expansions involve all three of these dimensions. I have argued elsewhere for a treatment of culture, polity, and economy in terms of part/whole dialectics (Tamdgidi 2007b). The political and the cultural processes must not be conceptualized as being "non-economic" but as integral
to it. Indeed, it was the political and cultural preconditions set by precapitalist empires that made possible the modern, predominantly economic, form of imperiality. What distinguishes the three forms of imperiality from one another is the primary means by which the incorporation of new groups, communities, and regions into the empire is carried out and maintained. In political imperialism, the primary motives are militaristic invasion, control, and domination of other communities and civilizations. In cultural imperialism, the violence of ideological conversion of other communities to one's own cultural and religious beliefs becomes the key motivating factor. In economic imperialism, the primary motive is the exploitative integration of the natural and human resources and wealth of other communities. The key processes distinguishing the three forms of imperialism are thereby political domination, cultural conversion, and economic exploitation.

We need not uniformly impose a materialist or idealistic logic across the three imperial periods to uncover a universalistic and trans-historical "economic basis" for political or cultural imperialism, or a cultural basis for political and economic imperialism, or a political basis for cultural and economic imperialism. These distinct forms could exist as developmental phases of imperiality, or even exist contemporaneously within or across clashing empires. The move from outright dominative political modes of imperiality to more subtle cultural and economic modes involves a deepening of the imperial relations of ruling. All aspects may be present but, in each period, one or another mode of imperiality becomes predominant, casting its hue on other motives. The relative lack of economic development under political and cultural imperialism can itself be explained by the extra-economic determinations of social development during these periods, not vice versa. In contrast, it is the establishment of economic foundations of cultural hegemony and political domination in the modern period that has made possible the deceptive, seemingly autonomous and "sovereign," cultural and political forms of neocolonialism present in the contemporary period.

In broad world-historical outlines, although political imperialism may be considered to have originated back in 2300 B.C. with the rise of the Akkadian empire, it was in the aftermath of the Indo-European invasions of the south and the rise of the Assyrian empire circa 800 B.C. that the classical period took shape, later reaching its height in the Persian, Hellenic, and Roman empires in west Asia and Europe, the Maurya and Han empires in south and east Asia, and the old and new Maya empires in the pre-Columbian Americas - non-synchronously across space. Classical periods entered their structural crises during A.D. 300-500 and were gradually followed by the cultural imperialisms of the Zoroastrian (Sassanid), Christian (Byzantine), Islamic (Arabic), Hindu (Gupta), Buddhist (Tang and Sung), and pre-Columbian religious empires (Inca, Aztec, and Toltec), which presided over various increasingly synchronous "medieval" periods. The fall of Constantinople in A.D.
1453 ushered in a rapid, globally synchronous phase of transition to the modern period, characterized by the rise of economic empires originating in Western Europe. With the older model of imperiality, characterized by the monopolistic drive of a single power, increasingly proving to be a failure, through the sheer violence of trial and error the modern economic empires invented collective imperialism, which became finally and formally established in the mid-twentieth century, after two world wars, with the formal institutionalization of the "United Nations." This innovation in imperiality, long in the making since the fifteenth century, in effect created the most successful and enduring world-empire in history, characterized by a singular economy but multiple cultures and polities organized in a hierarchical system of core, peripheral, and semi-peripheral "nation-states" (Wallerstein 1979, 1996). By the mid-twentieth century, the whole face of the globe had finally become integrated into the economic world-system of collective imperialism.

The relevance of the above framework for the subject under consideration is significant. Islam was not itself a homogeneous and monolithic civilizational reality confronting the rising Western civilizational project in the long sixteenth century, but one that itself historically contained contradictory and conflicting tendencies since its very beginnings, including imperial and subaltern tendencies as well as diverse class-, gender-, and ethno-cultural interpretations of the Koran and the Prophet's sayings and traditions. Previously (2006), I have noted how it is important to make a distinction between the original religious doctrines and teachings on one hand and the imperial use to which they were put by the emerging empires of the medieval periods on the other. Religion in itself is not a culprit for imperialism, as much as philosophy and law were not so for political imperialism during the classical periods, nor science for economic imperialism in the modern period. That these fragmented forms of human knowledge became increasingly split from one another and acquired an ideological character, and were thereby substantively and organizationally manipulated and revised to become primary or secondary means of imperial expansion, were altogether different processes. As such, they must be distinguished from the reasons for which these world-outlooks were originally invented in ancient civilizations as by-products of the essentially curious, creative, and artful human endeavor.
The point here is to emphasize that, in considering the process through which Islam in the eyes and policies of the West became entangled with colonial, religious, cultural, orientalist, and epistemic racisms in the long-durée rise of the "modern/colonial patriarchal/capitalist world-system," we need not ignore the internal complexity, heterogeneity, and hierarchical cartography of Islam as not simply a civilizational but also an imperial project, albeit in its cultural (in contrast to Western economic) imperial form, bent on forceful (though not necessarily always violent) cultural-religious conversion of others. And in doing so, we need not attribute an imperial motive to all that was ushered in by Islam since its inception, since the complexity of Islam, like that of any other civilizational project, can hardly be contained in a singular, all-positive or all-negative, logical model. The relevance of this more complex understanding of Islam becomes more significant if we alternatively ask what the contacts with the emerging and then rising Western imperial project, and the latter's colonialist designs and expeditions, did to the development, or rather the under- and/or over-development, of one or another tendency in the complex cartography of the really existing historical Islam during the long durée of successive Western incorporative efforts and imperial/colonial aggressions.

B. Also Considering Islamophilia

Islamophobia and Islamophilia are two sides of the West's orientalist attitude toward Islam. Both signify and serve, based on false and manipulative (intentioned or not) premises, to erect misrepresentative views of the reality of Islam so as to legitimate its cooptation by coercion or consent. They are two Janus-faced policies that serve to misrepresent and misshape historical Islam in favor of the West's short-term or long-term economic, geo-political, cultural, and even aesthetic interests. What would the really existing Islam have been like if the West did not have, as recently as in the 20th century, a deepening strategic interest in the oil and energy resources of the region, precipitating modes of economic, politico-military, and cultural policies that seek to secure a strategic and long-lasting base among an ultraconservative Saudi leadership in the geospiritual heart of Islam, a leadership that wields the sword of an outdated and static view of Islam and of "Islamic" behavior in domestic and global affairs? Who would have financially and politically aided the Moslem "freedom fighters" in Afghanistan against the Soviet aggression - as did the Saudi government and the repressive Pakistani regime under Zia-ul-Haq (which presided over the "radical" Islamization of Pakistan) - and how would the spiritual heart of Islam have been represented differently had it not been possible to strengthen, through long-term politico-military treaties, the ultra-orthodox face of Islam? What would the heart, and the face, of Islam be like if the West had not conducted significant, covert and overt, direct or indirect, interference in the lives of Muslims in the Middle East and beyond? What would the heart and face of Islam be like if it did not have to cope and deal, amid unrelenting violence and multiple wars, with the occupation of Palestinian lands and the subjugation of a whole people via the agency of the last remaining settler-colonial state that is Israel? What would have been the extent of economic prosperity, cultural vitality, formal education, and political visions and sensibilities of Moslems as a whole (and not limited to
a select few) if the Moslem population had not been subjected to decades, if not centuries, of direct or indirect colonial rule and imperial designs, aided by local regimes perpetuating outdated monarchic (Jordan, Saudi Arabia) or de facto dictatorial (Egypt) administrative forms of government and political rule?

Islamophilia is the other side of the Western orientalist attitude toward Islam, seeking to one-sidedly amplify, strengthen, and reinforce those elements and agencies in Islam that best suit the economic interests, political security, and cultural, moral, philosophical, scientific, and aesthetic interests of the West and its orientalist looking-glass self. Bush's Islamophilia toward Saudi rulers who also pursue "Middle Age" policies domestically with respect to, for instance, women may appear to contrast sharply with his and his wife's "dedication" to the liberation of women in Afghanistan. But the two policies are two sides of the same attitude on the part of the West, an attitude that helps preserve, strengthen, and reinforce the same misguided and misrepresentative trends in, for instance, the realm of gender relations in Islam. With one hand, the West plants the seeds of the cultural ultraconservatism that it claims to be seeking to eradicate and liberate with the other hand. This Janus-faced carrot and stick policy that helps deform Islam underlies and, in fact, justifies in the imperial mind the continuation and perpetuation of the status quo in the West's foreign policy toward Islam, and it helps fuel and engender both Islamophobic and Islamophilic attitudes in Western media and wider Western public opinion.

It is the lack of historical perspective and critical sociological imagination on the part of the lay Western population, fueled by short-term memory and amnesia perpetuated by the Western media, that mischaracterizes the problems of Islam as if they separately and independently evolved alongside a West that pretends it has had nothing to do with the rise of "backwardness" and "ignorance" among Moslems. At the very same time that Western media self-righteously boast of ridiculing Islamic religious beliefs for the higher cause and in the higher interest of defending freedom of speech, they ignore the extent to which their governments for decades sought to install or desperately secure the lives and regimes of one or another regional ally (read: dictatorship) in the Shah's Iran, Saddam's Iraq, etc. - regimes that did their utmost to violate the human rights and freedoms of speech of their Moslem subjects.

In the realm of art and literature, it is difficult to deny the extent to which the works of Islamic thinkers have been subjected, albeit with good intentions, to mistranslation and misrepresentation at the hands of Western writers. A case in point may be that of how the quatrains of Omar Khayyam were received by the West. Gayatri Chakravorty Spivak, in her famous article "Can the Subaltern Speak?" (1988), noted how "writers like Edward FitzGerald, the 'translator' of the Rubayyat of Omar Khayyam ... helped to construct a certain picture of the Oriental woman through the supposed 'objectivity' of translation" (1994 [1988]: 102). The key point regarding the relevance of Khayyam to the argument advanced here is that it helps to illustrate well the juxtaposition of an oriental vs.
an authentic representation of his thought. Just because a FitzGerald mistranslated Khayyam and helped to construct an orientalist view of his poetry, his philosophy, and in fact of his spirituality and of the "East," does not mean that an authentic representation of Khayyam's thought is not warranted or possible. The most telling, if not degrading, by-product of the introduction of Omar Khayyam to the world through FitzGerald has been the notion that Khayyam's culture is incapable of representing itself through producing verse translations of its own to convey the beauty and subtlety of his quatrains; that his culture needs a FitzGerald to give the West a taste of Khayyam in English because his culture cannot; that his culture cannot represent itself, that it must be represented. 12

A similar example most recently has been the way in which Rumi's mystical poetry has been received and "translated" by Western authors. Coleman Barks does not even pretend to have known Persian when translating Rumi and has based much of his translations on secondary translations by yet other Westerners. And yet he, and the mass of the audience that has nevertheless found some glimmer of Rumi's message amid Barks' "abbreviated" translations, take his translations as the most genuine representative of Rumi's thoughts and intentions. In his words, for instance, Rumi's love of God turns into: …

The extent to which what the West hates and loves about Islam is a fabrication of its own imagination, rather than being based on a sound, direct, and in-depth understanding of Islamic culture and values, cannot always be so easily measured as in the translation rendered above. Even when the mistranslation and misrepresentation is acknowledged, even with all good intentions, by a FitzGerald himself and by those who have studied and compared his translations with the quatrains in the original, the Islamophobia or Islamophilia internal to the subjectivities of Moslems themselves, especially those educated and socialized amid Western culture, also shape the outcome of the ensuing civilizational dialogue. The realities that generate Islamophobia and Islamophilia, while being strongly generated, shaped, or rather misshaped by decades if not centuries of Western imperial policy and colonization, have also penetrated the really existing Islam and been reified to the extent that distortions that were originally strongly precipitated by imperial Western imaginations and policies now appear as if they are essential attributes of Islam - hence generating Islamophobic and/or Islamophilic reactions in Western eyes. Said put this misfortune quite aptly in 1980:

For the first time in history (for the first time, that is, on such a scale) the Islamic world may be said to be learning about itself in part by means of images, histories and information manufactured in the West. If one adds to this the fact that students and scholars in the Islamic world are still dependent upon U.S.
and European libraries and institutions of learning for what now passes as Middle Eastern studies (consider, for example, that there isn't a single first-rate, usable library of Arabic material in the entire Islamic world), plus the fact that English is a world language in a way that Arabic isn't, plus the fact that for its elite the Islamic world is now producing a managerial class of basically subordinate natives who are indebted for their economies, their defense establishments and for their political ideas to the worldwide consumer-market system controlled by the West - one gets an accurate, although extremely depressing, picture of what the media revolution (serving a small segment of the societies that produce it) has done to Islam. (p.490)

C. Beyond Islamophobia and Islamophilia: Critical Self-Reflexivity as an Essential Insight from Sufism

The Prophet of Islam said, "Whosoever knows his self, knows his Lord"; that is, self-knowledge leads to knowledge of the Divine. Sufism takes this saying (hadith) very seriously and also puts it into practice. It provides, within the spiritual universe of the Islamic tradition, the light necessary to illuminate the dark corners of our soul and the keys to open the doors to the hidden recesses of our being so that we can journey within and know ourselves, this knowledge leading ultimately to the knowledge of God, who resides in our heart/center. (Nasr, 2007:5)

Perhaps one way to seek alternative epistemologies of global knowledge and transformation would be to scrutinize the modality of antisystemic behavior gripping many social movements in the modern historical period and to seek innovative "othersystemic" 14 and utopystic 15 ways out of the global crisis that are more concerned with building the alternative worlds in the here and now than with posing them as goals to be achieved in the future.

The world to be known and transformed is not just 'out there' but 'in here' as well, in the intricate modes of thinking, feeling, sensing, relating, processing, and acting to which all of us have been more or less habituated as a result of the blind workings of what Grosfoguel and Mielants aptly call the "modern/colonial capitalist/patriarchal world-system." The Anzaldúan proposal for the simultaneity of self and global transformation (Anzaldúa 1987; cf. Tamdgidi, forthcoming), her innovative alchemy of self and world transformation as a way out of the global crisis, has intimate affinities with the Sufi and esoteric spiritual ways of changing the world through radical self-knowledge and inner transformation. For sure, Sufi ways of change may also learn from our world social forums not to limit the scope of knowing and transformative behavior to intrapersonal landscapes - expanding the realm of selfhood to that of the collective global community.
Beyond Islamophobia and Islamophilia, the sociology of self-knowledge as advanced in my work (Tamdgidi 2002, 2002-, 2007a) seeks to draw attention to the voices and traditions of esotericism and mysticism, including those in Islam, that have for millennia also agonized over the human condition and sought ways of bringing the alienated human "reeds" (as Rumi would have it) together as parts of a common humanity. Islamophobes cannot ignore the voices of Rumi, of Hafiz, of Jami, of Sa'di, and of Khayyam, among many others, arising from the landscapes of mystical Islam - voices that have for centuries attracted the love, admiration, and inspiration of the world to the poignancy of their logic and epistemology and the poetic nature of their transformative praxes across generations. As Said observed:

To dispel the myths and stereotypes of Orientalism, the world as a whole has to be given an opportunity to see Moslems and Orientals producing a different form of history, a new kind of sociology, a new cultural awareness: in short, the relatively modest goal of writing a new form of history, investigating the Islamicate world and its many different societies with a genuine seriousness of purpose and a love of truth. (1980:491)

REVISITING THE RUNNYMEDE DEFINITION OF ISLAMOPHOBIA IN LIGHT OF ISLAMOPHILIA

In light of the above analysis and the fact that the term "Islamophilia" has been used by those critical of the term "Islamophobia" in general, and especially of the definitional framework offered by the Runnymede Report, to express their dissatisfaction with the term, I find it necessary to return to the controversy over the definitions of the term(s).

While I consider the first set of definitions, labeled as "closed views of Islam" and specifically aimed at defining "Islamophobia," as warranted with perhaps a few adjustments, the second set of "open views of Islam" may be misunderstood and may leave the term "Islamophobia," by association, open to criticism and accusations of "Islamophilia" - the latter term requiring its own clarification, of course.

Let me begin with certain adjustments to the list of "closed views of Islam" as advanced by the Runnymede Report. I propose making the following changes to the definitional framework, identified in bold:
1. Islam as a whole [is] seen as a single monolithic bloc, static and unresponsive to new realities.
2. Islam as a whole [is] seen as separate and other - (a) not having any aims or values in common with other cultures (b) not affected by them (c) not influencing them.
3. Islam as a whole [is] seen as inferior to the West - barbaric, irrational, primitive, sexist.
4. Islam as a whole [is] seen as violent, aggressive, threatening, supportive of terrorism, engaged in 'a clash of civilisations'.
5. Islam as a whole [is] seen as a political ideology, used for political or military advantage.
6. Criticisms made by Islam of 'the West' [are] rejected out of hand.
7. Hostility towards Islam [is] used to justify discriminatory practices towards Muslims and exclusion of Muslims from mainstream society.
8. Anti-Muslim hostility [is] accepted as natural and 'normal'.

The need for the above adjustment becomes clear when we move on to reconsider the alternative list of "open views of Islam" as offered in the Runnymede Report. To expedite the comparative considerations, I will provide adjustments and commentaries to the second list as follows (alternative formulations are offered in bold in brackets, while further explanations are provided in italics, when needed):

1. Islam [is] seen as diverse and progressive, with internal differences, debates and development. [Islam is seen as containing diverse, contradictory interpretations and traditions that may offer a spectrum of progressive to conservative socio-political tendencies, some displaying dynamic, self-critical, and self-transformative attitudes while others remaining static, dogmatic, and unresponsive to new realities].
2. Islam [is] seen as interdependent with other faiths and cultures. [The interpretations, traditions, and sociopolitical tendencies in Islam may display different degrees of openness to interdependence and sharing of values and aims with other faiths and cultures, each trend's responsiveness (ranging from accommodation to rejection) and strength varying depending on changing social-historical (economic, cultural, and political) conditions, interests, and forces both internal and external to the Islamic community].
3. Islam [is] seen as distinctively different, but not deficient, and as equally worthy of respect. [The extent to which Islam is regarded as distinctively different, promising or deficient, or worthy of respect depends on which interpretations, traditions, and sociopolitical tendencies in Islam are under consideration and which social agency outside the Islamic community is making such assessments and judgments; some may be highly civilized, rational, advanced, and egalitarian; others may be fundamentalist, barbaric, irrational, primitive, and sexist, keeping in mind that such a spectrum of tendencies may have been shaped and distorted by forces both internal and external to the Islamic community].
4. Islam [is] seen as an actual or potential partner in joint cooperative enterprises and in the solution of shared problems. [As in …].
5. Islam [is] seen as a genuine religious faith, practised sincerely by its adherents.
6. Criticisms [by Islam] of 'the West' and other cultures are considered and debated.
7. Debates and disagreements with Islam do not diminish efforts to combat discrimination and exclusion. [Moslems may not only be subjected to discrimination and exclusion, which are unwarranted simply because of debates and disagreements with one or another trend in Islam, but some Moslems associated with particular trends in Islam may also practice discrimination and exclusion because of intracommunal debates and disagreements, or as a result of debates initiated or disagreements expressed by those outside the Islamic community; at the same time, there may be other Islamic tendencies that self-critically eschew such discriminations and exclusions practiced by other Moslems and, thereby, condemn and seek to end them].
8. Critical views of Islam are themselves subjected to critique, lest they be inaccurate and unfair. [Both the …].

Short of the above clarifications, I think one may regard the Runnymede Report's existing definition of Islamophobia as an inadvertent definitional framework for Islamophilia instead, though in its more sophisticated expressions. The Runnymede Trust's "open views of Islam" unfortunately falls into the trap of regarding Islam monolithically, in turn, as being characterized by one or another trait, and does not adequately express the complex heterogeneity of a historical phenomenon whose contradictory interpretations, traditions, and sociopolitical trends have been shaped by, and have in turn shaped, as in the case of any world tradition, other world-historical forces. The irony here is that such an effort to remedy the harms caused by Islamophobia seems to have been made in order to avoid negative stereotyping of Islam, while the troubling interpretations, traditions, and sociopolitical trends in Islam, or at least their continued strength and survival, may have had as much to do with the continuation of a Janus-faced global imperial policy that finds it in its short-term, if not long-term, strategic interest to amplify and reinforce those very troubling agencies in Islam - agencies that, in the ever-changing ebb and flow of geopolitics, metamorphose back and forth between civilized-friend and barbarian-foe identities. Islamophilia and Islamophobia are strange bedfellows in the Western mind.

The purpose of the above revised "open views of Islam" is to move away from a monolithic view of Islam, a view that is rightly rejected as a cornerstone of Islamophobia in the Runnymede Report's own definition. Here, I have deconstructed "Islamophobia" and revealed a somewhat biased "Islamophilic" view of Islam contained in the Runnymede Report's second, "open views of Islam" list - an attitude that also oversimplifies and distorts the tradition of Islam away from its complex heterogeneity and in favor of a monolithic view that is simplistically portrayed as being all positive. Such simplifications do not serve well the cause of understanding and transcending Islamophobia, and they lend themselves to unwarranted criticism from conservative quarters and social forces that readily cite the troubling tendencies in Islam as proofs for the monolithic regard and dismissal of Islam as a whole. These conservative, and at times even liberal, critiques often ignore or hide the fact that many such troubling tendencies of Islam may be due not to internally generated but to externally and imperially imposed conditions amid decades and centuries of Western imperial and colonial designs and policies toward Islam. Critiques of the Runnymede Report often dismiss the imperial world-historical context within which various tendencies in Islam have emerged and, by separating and othering Islam as a closed box, perpetuate the fallacy of attributing all its faults and wrongs to Islam alone - not to mention the fact that the very racial bias displayed toward Islam often takes the standard procedure of simplistically attributing the troubling nature of one or another event or tendency in Islam to the "nature" of Islam as a whole, in an essentialist and ahistorical manner. A terrorist act by, or tendency in, a self-proclaimed offshoot of Islam - itself perpetuated and strengthened by an imperial policy under earlier circumstances where support for it was geopolitically expedient - is suddenly elevated as a standard-bearer of what Islam as a whole is and is about.
The most long-term damage done to Islam by Islamophobia and Islamophilia, however, may be what one may not readily expect, and that is the extent to which the common threat faced by Moslems is translated into a lack of self-critical thinking and attitude among Moslems themselves. Here is a pertinent observation by a Moslem scholar, sympathetically quoting another observer:

"The most subtle and, for Muslims, perilous consequence of Islamophobic actions," a Muslim scholar has observed, "is the silencing of self-criticism and the slide into defending the indefensible. Muslims decline to be openly critical of fellow Muslims, their ideas, activities and rhetoric in mixed company, lest this be seen as giving aid and comfort to the extensive forces of condemnation. Brotherhood, fellow feeling, sisterhood are genuine and authentic reflexes of Islam. But Islam is supremely a critical, reasoning and ethical framework… [It is not,] or rather ought not to be, manipulated into 'my fellow Muslims right or wrong'." The writer goes on to add that Islamophobia provides "the perfect rationale for modern Muslims to become reactive, addicted to a culture of complaint and blame that serves only to increase the powerlessness, impotence and frustration of being a Muslim." (Imam Dr. Abduljalil Sajid, 2005:34-35, quoting from Davies, 2002)

CONCLUSION

One does not have to abandon acknowledging the danger of Islamophobia for fear of being accused of Islamophilia. Nor should one abandon being critical of Islamophilia for fear of being accused of Islamophobia. Islamophobia and Islamophilia are woven of similar threads in the sense that they both seek to oversimplify and essentialize Islam as a civilizational project as being entirely bad or good. What is to be done away with is the binary logic feeding such argumentations. One can be critical of both Islamophobia and Islamophilia and be also critical of centuries of imperial policies that have helped distort the realities of historical Islam.

What is to be confronted and questioned head-on are the common premises displayed in both tendencies: that civilizational projects are monolithically good or bad, right or wrong. The West prides itself on being self-critical, and dynamic as a result, but it seeks to silence the views of those who regard other civilizational projects, Islam included, as being characterized by the same complexities and contradictory tendencies from which the West is itself not exempt. It is this presumed uniformity and monolithic homogeneity that the West falsely attributes to its colonial others and then blames them for. Islamophobia and Islamophilia, thereby, are aspects of the West's epistemic racism and of its own looking-glass self projected upon colonized subjects as if it points to their essential attributes.
Recent examples, the support for and then overthrow of Saddam, and the original support for and current war against the Afghani "freedom fighters" who metamorphosed into Al-Qaeda, suggest how the contemporary political realities of Islam that engender Islamophobic and Islamophilic reactions in Western eyes are far from independent processes and phenomena that the West merely reacts to. They are the very byproducts of its imperial policies, for empires and Bin-Ladins (and Saddams) are two faces of the same actual and latent imperial coin. The West regards itself as a beauty, desperately seeking to respectively adorn and cleanse the Janus-faced images of the beauty and the beast on the wall of Islam, not realizing that the wall is a mirror, and that both reflected images of the beauty and the beast on the wall are ever cross-morphing by-products of its own orientalist imperial adventures across modern world history.
The Runnymede Report's list of "closed views of Islam", so adjusted, reads:

1. Islam as a whole [is] seen as a single monolithic bloc, static and unresponsive to new realities.
2. Islam as a whole [is] seen as separate and other: (a) not having any aims or values in common with other cultures; (b) not affected by them; (c) not influencing them.
3. Islam as a whole [is] seen as inferior to the West: barbaric, irrational, primitive, sexist.
4. Islam as a whole [is] seen as violent, aggressive, threatening, supportive of terrorism, engaged in 'a clash of civilisations'.
5. Islam as a whole [is] seen as a political ideology, used for political or military advantage.
6. Criticisms made by Islam of 'the West' [are] rejected out of hand.
7. Hostility towards Islam [is] used to justify discriminatory practices towards Muslims and exclusion of Muslims from mainstream society.
8. Anti-Muslim hostility [is] accepted as natural and 'normal'.

The need for the above adjustment becomes clear when we move on to reconsider the alternative list of "open views of Islam" as offered in the Runnymede Report. To expedite the comparative considerations, I will provide adjustments and commentaries to the second list as follows (alternative formulations are offered in brackets, with further explanations provided when needed):

1. Islam [is] seen as diverse and progressive, with internal differences, debates and development. [Islam is seen as containing diverse, contradictory interpretations and traditions that may offer a spectrum of progressive to conservative sociopolitical tendencies, some displaying dynamic, self-critical, and self-transformative attitudes while others remaining static, dogmatic, and unresponsive to new realities.]
2. Islam [is] seen as interdependent with other faiths and cultures. [Interpretations, traditions, and sociopolitical tendencies in Islam may display different degrees of openness to interdependence and sharing of values and aims with other faiths and cultures, each trend's responsiveness (ranging from accommodation to rejection) and strength varying depending on changing social-historical (economic, cultural, and political) conditions, interests, and forces both internal and external to the Islamic community.]
3. Islam [is] seen as distinctively different, but not deficient, and as equally worthy of respect. [The extent to which Islam is regarded as distinctively different, promising or deficient, or worthy of respect depends on which interpretations, traditions, and sociopolitical tendencies in Islam are under consideration and which social agency outside the Islamic community is making such assessments and judgments; some may be highly civilized, rational, advanced, and egalitarian; others may be fundamentalist, barbaric, irrational, primitive, and sexist, keeping in mind that such a spectrum of tendencies may have been shaped and distorted by forces both internal and external to the Islamic community.]
4. Islam [is] seen as an actual or potential partner in joint cooperative enterprises and in the solution of shared problems. [As in ...]
5. Islam [is] seen as a genuine religious faith, practised sincerely by its adherents.
6. Criticisms [by Islam] of 'the West' and other cultures are considered and debated.
7. Debates and disagreements with Islam do not diminish efforts to combat discrimination and exclusion. [... Moslems may not only be subjected to discrimination and exclusion, which are unwarranted simply because of debates and disagreements with one or another trend in Islam, but some Moslems associated with particular trends in Islam may also practice discrimination and exclusion because of intracommunal debates and disagreements, or as a result of debates initiated or disagreements expressed by those outside the Islamic community; at the same time, there may be other Islamic tendencies that self-critically eschew such discriminations and exclusions practiced by other Moslems and, thereby, condemn and seek to end them.]
8. Critical views of Islam are themselves subjected to critique, lest they be inaccurate and unfair. [Both the ...]

NOTES

1. "Islamophobia: A Challenge to Us All (Summary)." London, UK: Runnymede Trust, p. 2, 1997. The summary and full report may be obtained from The Runnymede Trust, Suite 106, London Fruit & Wool Exchange, Brushfield St, London E1 6EP, United Kingdom. The summary can be downloaded from http://en.wikipedia.org/wiki/Runnymede_Trust.
8. See "Othering Islam: Proceedings of the International Conference on 'The Post-September 11 New Ethnic/Racial Configurations in Europe and the United States: The Case of Islamophobia'" (Maison des Sciences de l'Homme, Paris, France, June 2-3, 2006). Human Architecture: Journal of the Sociology of Self-Knowledge V(1), 2006.
The Effects of Moderate and Severe Salinity on Composition and Physiology in the Biomass Crop Miscanthus × giganteus

Saline land represents a growing resource that could be utilised for growing biomass crops, such as Miscanthus × giganteus (Greef et Deu.), thereby eliminating competition with staple food crops. However, the response mechanisms to different salinity regimes, in relation to their impact on the quality of the harvested biomass and its combustion properties, are largely unknown. Herein, the focus was on the salt-induced compositional changes in ion flux and compartmentalization in the rhizome, stems, and leaves, in relation to their impact on salinity tolerance and combustion quality, through investigating the photophysiological, morphophysiological, and biochemical responses of M. × giganteus to moderate and severe salinity. Severe salinity induced an immediate and sustained adverse response, with a reduction in biomass yield, photoinhibition, and metabolic limitations in photosynthesis. Moderate salinity resulted in a slower cumulative response with low biomass losses. Biomass composition, variations in ion compartmentalisation, and the induction of proline were dependent on the severity and duration of salinity. Ash behaviour indices, including the base percentage and the base-to-acid ratio, indicated a lower corrosion potential and a lower risk of slagging under salinity. Understanding the impact of salinity on the potential for growth on saline land may identify new targets for breeding salinity-tolerant bioenergy crops.

Introduction

Degraded lands, often termed marginal, have been reported suitable for the cultivation of grasses, which are more adapted to low-nutrient, erodible, or drought-prone soils [1]. Second-generation perennial biomass crops, such as the grass Miscanthus, a highly productive and sustainable crop for bioenergy and a feedstock for the bioeconomy [2], are ideal for cultivation on marginal land and would not compete with conventional food crops [3]. Saline land is marginal for most agriculture and represents a growing resource that could be utilised for Miscanthus cultivation [4]. Miscanthus sinensis exhibits salt-spray tolerance, growing in coastal landscapes as an ornamental grass [5,6], with salt concentrations higher than 10 dS m−1 NaCl reducing the yield by over 50% [7]. The genetic diversity of salt tolerance to combinations of salinity and drought [8] and to single salt stress in Miscanthus has been recently documented [9-11]. Nevertheless, studies of salt tolerance mechanisms in Miscanthus have focused on morphophysiological and biochemical [7,8,10-13] and transcriptional [14] responses. This is particularly relevant considering that delayed senescence and leaf fall reduce the content of ash-producing leaves in the harvest [57] and that high ash content negatively impacts the yield and quality of fast pyrolysis liquids [58]. The response mechanisms of the commercial hybrid M. × giganteus, in relation to the impact of salinity on the quality of the harvested biomass and the combustion properties under moderate and high salinity, are largely unknown. This study aims to determine the photophysiological and morphophysiological responses of M. × giganteus to moderate and severe salinity stress, coupled with the salt-induced compositional changes in terms of ion influx and compartmentalization in the different plant tissues, in relation to their impact on salinity tolerance and combustion properties.
Effects of Moderate and Severe Salinity on Plant Growth

Increased salinity negatively affected plant growth (Figure 1; Table 1) and biomass production (Table 1). High salt stress (19.97 dS m−1) induced an immediate and sustained adverse response, whereas the moderate salt stress (5.44 dS m−1) resulted in a slower cumulative response compared to the control plants (Supplementary Materials Figure S1). Height of the main stem (cm) was reduced under both salt treatments in response to time (p < 0.001) (Figure 1), with an earlier response observed at 19.97 dS m−1 compared to the moderate-stress and control-treated plants. The interaction effect between treatment and time (p < 0.001) on the total number of senesced leaves showed an early significant increase at 19.97 dS m−1 (p < 0.05), compared to the delayed senescence observed at 5.44 dS m−1 (Figure 1). Leaf area was significantly affected by the interaction between treatment and time (p < 0.001) at 19.97 dS m−1 NaCl, with a decrease observed between weeks 3-5 (p < 0.05) (Figure S2).

Biomass Accumulation in Response to Salinity

Leaf number was significantly reduced under moderate and high NaCl stress across time (p < 0.05); however, stem number was unaffected by salinity, harvest time, and their interaction (data not shown). The total production of fresh matter (FM) and dry matter (DM) was reduced in response to treatment (p < 0.001), especially at severe salinity (p < 0.05) after harvest day 32 (Figure 2; Tables 1 and S1). Across time, only plants under moderate salinity and control conditions increased their total DM. Aboveground DM was significantly reduced under severe salinity, whereas no changes were observed under moderate salinity (Table 2). DM of leaves and stems was also reduced in plants under both salt treatments after harvest day 32 and increased only under moderate salinity at the last harvest point (Table 2). Belowground DM was reduced significantly under severe salinity at harvest days 46 and 54, due to a reduction in rhizome DM observed after harvest day 32 and a delayed decrease in root DM under severe salinity at the final harvest day (Table 2).

Physiological Response to Salinity

Several physiological parameters were affected by the cumulative effect of salinity (Table S2).
The significant effect of salinity on PSII maximum efficiency (Fv/Fm) was attributed not only to the effect of treatment per se but also to the duration of the treatment (effect of time) (Table S2). When taking these effects into consideration, we observed that Miscanthus plants treated with moderate salinity were unaffected compared to controls, and only severe salinity had a negative impact on PSII maximum efficiency after 23 days, with the impact becoming more severe with time, leading to complete inhibition of chlorophyll fluorescence (Figure 3). The performance index (PI) was a more sensitive indicator of photoinhibition at the highest salt concentration (19.97 dS m−1 NaCl), showing an earlier response, at 10 days, but was unaffected by moderate salinity stress (Figure 3). The area above the fluorescence curve was significantly reduced after day 40 only at the highest salinity (Figure 3), indicating that electron flow into the plastoquinone (PQ) pool on the reducing side of PSII was blocked. Relative chlorophyll content was significantly reduced under the cumulative impact of salinity (Figure 4), and plants under moderate salinity showed a delayed reduction of chlorophyll content, observed on day 50, compared to the control plants (Figure S1). Increasing salinity induced a significant and immediate decrease in stomatal conductance (gs) (p < 0.001), with differences between treatments observed on day 3 under severe salinity and a week later under moderate salt stress compared to control plants (Figure 4).

Effect of Salinity on Carbon Fixation Efficiency

The effects of the salt treatments on carbon fixation efficiency were investigated through the parameters derived from the dependence of the CO2 assimilation rate (A) on the leaf internal CO2 mole fraction (Ci). The measured and modeled A/Ci curves for each plant at four time points are presented in Figure S4. Severe salinity at harvest days 46 and 54 had such a negative impact that the plants were senesced and dry, and therefore the measurements would have been biased. Amax (Figure 5 and Table S3) was significantly affected by increasing salinity, with the highest salt concentration showing a more rapid effect at week 3, whereas moderate salinity induced a delayed decline in Amax at week 5 (Table S3). The cumulative effect of salinity induced an increase in the ratio of intercellular to external CO2 concentration (Ci/Ca) at week 3, yet this was non-significant (time × treatment; p < 0.1) (Figure 5). The maximum carboxylation efficiency (CE) was significantly reduced at week 3 under severe salinity and was unaffected under moderate salinity in comparison to the controls (Table S3). The CO2-saturated PEP carboxylation rate (Vpmax; µmol m−2 s−1) and the CO2 compensation point were reduced with increasing salinity, yet not significantly, whereas the PEPC Michaelis-Menten constant for CO2 (Kp) and the curvature (omega, ω) of the A/Ci curves were not affected by increasing salinity, but rather by time. Despite the reduction in stomatal conductance over time for both salt treatments (Figure 4b), stomata were not the main limiting factor of carbon fixation at 5.44 dS m−1 NaCl until after week 5, when a slight decoupling was observed (Figure 5). For severe salinity, the rapid decline in assimilation rate was mainly caused by metabolic limitations (Table S3, Figure 5).
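As a hedged illustration of how such parameters can be derived from gas-exchange data, the following minimal R sketch (R being the language used for the study's analyses) estimates CE as the initial slope of an A/Ci curve and the stomatal limitation Ls from the rate at the operating Ci versus the rate if Ci equalled Ca. The values and the assumed operating point are hypothetical, not the study's data; the authors' actual fitting used the von Caemmerer C4 model via a dedicated Excel fitting tool, as described in the Methods.

```r
# Illustrative only: hypothetical A/Ci data, not the study's measurements.
ci <- c(25, 50, 75, 100, 200, 390, 600, 800, 1500)          # Ci, umol mol-1
a  <- c(2.1, 6.0, 9.8, 13.1, 22.5, 27.9, 29.5, 30.1, 30.4)  # A, umol m-2 s-1

# CE: slope of the initial linear portion of the curve (Ci < 100 umol mol-1)
init <- ci < 100
ce <- unname(coef(lm(a[init] ~ ci[init]))[2])

# Ls (%): 100 * (Ao - A) / Ao, where Ao is A when Ci = Ca (no stomatal limit)
ca    <- 400
ao    <- approx(ci, a, xout = ca)$y        # A if Ci equalled Ca
ci_op <- 0.6 * ca                          # assumed operating Ci (hypothetical)
a_op  <- approx(ci, a, xout = ci_op)$y     # actual A at the operating Ci
ls <- 100 * (ao - a_op) / ao

cat(sprintf("CE = %.3f, Ls = %.1f%%\n", ce, ls))
```

Linear interpolation stands in here for the non-rectangular hyperbola fitted in the study; the logic of the two estimates is the same.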
Water Relations Responses

Intrinsic water use efficiency (WUEi) increased only in 5.44 dS m−1 NaCl-treated plants after week 3 (Table S3), followed by a significant decline at week 7. The relative water content (RWC) in the leaves of moderately stressed plants was not affected (Figure 6). However, under severe salinity, leaves showed a significant decrease in RWC at harvest day 54 (Figure 6).

Role of Salinity on Leaf Tissue Compounds

Salinity affected all of the biochemical parameters measured, with a significant interaction effect between treatment and harvest time (Table 1). Relative electrolyte leakage in leaves was significantly increased under both salt stresses; however, under moderate salt stress this occurred only at the last harvest day 54 (Figure 6). Malondialdehyde (MDA) content increased significantly in leaves (Figure 7) only under 19.97 dS m−1 NaCl on harvest days 46 and 54. Proline accumulation in leaves increased dramatically on day 32 under severe salinity (Table 3).

The ash content increased with salinity intensity in both leaves and stems across harvest days (Figure 8; Table 4). The ash content was higher under severe salinity, relatively high under moderate salinity, and low under control conditions. Between the two tissue types, leaves had a higher percentage of ash compared to stems, regardless of the treatment and time effects (Figure 8; Table 4). Leaves under severe salinity showed ash contents increased by 1.1-fold (harvest day 32) and 2.7-fold (harvest day 54), whilst the ash content of leaves from the moderate salinity treatment increased by approximately 0.4-fold compared to the controls on harvest day 54.
The ash content in stems under severe salinity was 1.7-fold and 3-fold greater on harvest days 46 and 54, respectively, whereas under moderate salinity the increase was up to 1.1-fold compared to control plants on harvest day 54 (Figure 8).

Ion Flux and the Role of Salinity on Tissue Compartmentalisation and Combustion Properties

Salinity affected the levels of total K, Ca, Mg, S, Cl, and Si, but not Fe (Figure S4; Tables 5, S4 and S5). Sodium increased dramatically in leaves, stems, and rhizome under both salinity treatments, with plants under severe salinity showing a more dramatic Na accumulation. The distribution of Na was equal throughout the components of the biomass partitions in the control treatment. However, Na was concentrated in the rhizome in moderately stressed plants, and was translocated and concentrated in the leaves of plants under severe salinity stress. High salinity stress produced higher total Na in treated plants throughout the experiment, but under moderate stress the increase was induced only after day 46 (Table S6). Water-soluble chloride, unlike Na, increased only in leaves and rhizome under both salinity treatments throughout the experimental duration, but no effect was observed in the stems (Tables 5 and S6).
Accumulation of total potassium was also higher in leaves and stems, but only under severe salinity, whereas Ca increased in leaves under both stress treatments and in stems of plants growing under severe salinity stress. Total magnesium increased only in the rhizome under severe salinity. Total sulphur increased only in stems and rhizomes under severe salinity and was higher in rhizomes compared to the above-ground tissues. Silicon accumulated to higher levels in leaves and stems under 19.97 dS m−1 NaCl (Table 5). More Ca was translocated from the rhizome to the above-ground biomass, whereas total K reached the stems at moderate salinity and the leaves under severe salinity (Table 5). Total potassium was significantly higher in leaves under severe salinity and did not change over time, in contrast to the moderately stressed and control plants, which had significantly lower K over time (Table S6). Total sulphur in leaves was reduced with increasing salinity, whereas a reduction over harvesting points was observed only under moderate stress. Si and Ti were significantly affected only by the increasing intensity of salinity, and the observed differences were detectable only at the final harvest day (day 54) (Table S6). Si was mainly accumulated in the leaves, with a lower content found in the stems and no traces detected in the rhizomes (Table 5). The ratios K/Na and Ca/Na (Tables 6 and S6) were reduced with increasing salinity in all tissue types and harvest days. Both ratios were higher in leaves compared to stems and rhizomes at moderate salinity; however, under severe salinity stress K/Na was higher in the rhizome compared to leaves, and Ca/Na was higher in leaves compared to stems and rhizome. The early decrease observed in both ratios in leaves from harvest day 19 occurred under severe salinity, whilst at moderate salinity the K/Na ratio showed a delayed decline (harvest day 54) compared to Ca/Na, which decreased eight days earlier (harvest day 46). Lower Ca/K and Si/K ratios, and therefore an increased slagging tendency, were observed only in leaves under severe salinity. Compared to the control and moderate salinity treatments, severe salinity induced a reduction in Ca/K, which began on harvest day 46 and was followed by a cumulative significant reduction in Si/K on the final harvest day 54 (Tables 6 and S6). The molar ratio 2S/Cl, which is used as an empirical index to evaluate the corrosion potential of herbaceous biomass, was higher in stems compared to leaves (p < 0.001) on the final harvest day, whereas time-course measurements in leaves showed that under both salt treatments the 2S/Cl ratio was significantly lower compared to the control plants. The base-to-acid ratio (Rb/a), which is used as an indicator of the fouling tendency of a fuel ash, increased significantly under 19.97 dS m−1 NaCl in leaves (Tables 7 and S7); in stems, however, the highest salt level induced a decline in Rb/a. Stems, compared to leaves, always showed a higher base-to-acid ratio, except under severe salinity, where no differences were observed (Table 7). The % base was greater in both leaves and stems with increasing salinity, especially under severe salinity stress. Leaves had the highest % base compared to stems, except under moderate salt stress, where no differences between the organs were observed (Table 7).
The increase in % base of leaves occurred earlier (day 19) under severe salinity, whilst under moderate salinity the % base was also increased compared to control plants on day 54 (Table S7).

Table 5. Tukey HSD (THSD) post-hoc test for the effects of treatment on the total element content of K, Na, Cl, Ca, Mg, S, and Si in M. × giganteus leaves, stems, and rhizomes at 0, 5.44, and 19.97 dS m−1 NaCl on harvest day 54. Different lowercase letters indicate significant differences between treatments (BT) for each tissue type; uppercase letters indicate differences within treatment (WT) between tissue types at p < 0.05; ns indicates no significant differences. Data are mean ± Standard Error (leaves: n = 3; stems and rhizomes: n = 5).

Discussion

Saline land provides an opportunity for growing second-generation biomass crops that avoid competition with staple food crops; however, the quantitative and qualitative effects of salinity may be a constraint on the utilisation of such lands for biomass production. The potential to exploit salt-affected lands will depend on the salt concentrations in the soil and the extent to which yield and biomass quality are reduced. Understanding plant tolerance of salinity and its impacts on the harvested product may provide a new range of targets for breeding strategies towards salinity-tolerant bioenergy crops. Herein, we investigated how composition, including ion content and compartmentalization, proline accumulation, and water relations, interacts with the photophysiological and morphophysiological response mechanisms underlying salinity tolerance and biomass quality of M. × giganteus.

Effects of Moderate and Severe Salinity on Biomass Accumulation and Partitioning

Elevated salt content induced a reduction in the water uptake capacity of Miscanthus, observed as a rapid reduction in growth rate, in a similar way to drought stress. Severe salinity (19.97 dS m−1) induced an immediate and sustained adverse response, with a reduction in biomass yield of up to 56.4%. Moderate salinity (5.44 dS m−1) triggered a slower cumulative response compared to the control plants but did not incur great losses in biomass (<23%). The reduction in above-ground DM was manifested as abscission of older leaves, highlighting a cumulative ionic effect due to prolonged exposure to salinity. High salinity induced early premature senescence of older leaves (from day 12), whilst the effect of moderate salinity was more gradual, suggesting that moderately stressed plants responded initially to the osmotic and later to the accumulated ionic effect of salinity, in contrast to the severe ionic effect under high salinity. Consecutive harvests revealed that root growth was inhibited earlier (day 19) under high NaCl, compared to the delayed inhibition observed under moderate salinity. Rhizome DM was also reduced earlier in plants growing under high salinity. Płażek et al. [13] observed a similar response in M. × giganteus. This ability of perennial grasses to maintain below-ground biomass under stress conditions could preserve sufficient reserves to invest in the following season's growth [59]. This may be physiologically relevant for transitory stresses like drought; however, it remains unclear how annual growth will be affected by maintaining the below-ground biomass under consistent salinity stress of variable intensity due to seasonal environmental changes [60,61].
Impact of Salinity on Metabolic and Non-Metabolic Factors

The degree of tolerance to the osmotic effects of salinity is reflected in the ability of plants to maintain gs [62], which is associated with the regulation of CO2 assimilation rate and transpiration and is positively associated with the relative growth rate in saline soils [63,64]. Both salinity treatments induced a reduction in gs, which was more severe under 19.97 dS m−1 NaCl. The reduction of gs has been attributed to the impact of high ion concentrations in leaves, the induced perturbation of water status, and the local synthesis of abscisic acid in the guard cells [65]. In maize, the decline in assimilation rate under increasing salinity was mainly associated with stomatal limitations (Ls) and, to a lesser extent, photoinhibition [66]. Herein, high salinity induced a significant decrease in the photochemical efficiency of PSII (Fv/Fm). The elevated Ci/Ca values (week 3), combined with reductions in Vpmax, indicate that the decrease in carbon assimilation rate could not be explained by CO2 deficiency or limitations in stomatal function. The impact of salinity on photosynthesis is therefore likely due to the observed reductions in the activity and regeneration of PEP carboxylase, which were reflected in a significant reduction in carbon assimilation rate and, as a result, significant losses in DM. Similar response patterns in physiological parameters have been previously observed in Miscanthus, switchgrass, and sugarcane in response to cold stress [67-69]. Under moderate salinity, the observed gradual reduction in carbon fixation (Amax) occurred mainly due to the salt-induced osmotic effect, manifested as induced stomatal resistance. The cumulative effect of prolonged exposure to moderate salinity resulted in uptake and accumulation of Na+, which, according to Muranaka et al. [70], may have directly affected electron transport, explaining the observed delayed reduction in photosynthetic capacity. However, this delayed reduction was not severe enough to affect biomass accumulation, and the increase in proline content may have played a significant role in that. Severe salinity caused a reduction in photosynthesis not only in terms of metabolic limitations but also photoinhibition. The decline in the maximum quantum yield of PSII (Fv/Fm) was observed under the high salinity treatment after day 23, with the impact becoming more severe over time, leading to complete inhibition of chlorophyll fluorescence. The reduction in the area above the fluorescence curve between Fo and Fm indicated that the electron flow to the plastoquinone (PQ) pool on the reducing side of PSII was blocked by high salinity, as demonstrated by Kalaji et al. [71] in barley, where 120 mM NaCl resulted in inhibition of electron transport from the reaction centres to the plastoquinone pool. In contrast, moderate salinity did not affect the maximum quantum yield of PSII, which could be explained by the non-toxic accumulation of Cl in the leaves, which has been shown to activate PSII [26].

Proline Accumulation in Relation to Chlorophyll Content, Electrolyte Leakage, and Photosynthetic Performance

The foliar water relations are influenced by ion accumulation and the plant's ability for osmotic adjustment [72].
Herein, M. × giganteus was able to maintain leaf water and relative chlorophyll contents after prolonged moderate salinity stress, indicating a potential mechanism of osmotic adjustment related to the accumulation of osmoprotectant molecules such as proline. Proline is a multifunctional amino acid that adjusts the osmotic potential inside the cytoplasm, and its accumulation during stress conditions is mainly due to increased synthesis and reduced degradation [73]. The increased proline accumulation in M. × giganteus at 19.97 dS m−1 NaCl, as early as the first harvest day (day 19), provides evidence for water preservation in leaves through osmotic adjustment. Although under high salinity the osmotic adjustment occurs at the expense of plant growth, it may assist with plant survival or even recovery (reviewed by [65,74,75]). The increased proline accumulation observed under moderate salt stress appears to have conferred tolerance to the photosynthetic apparatus. The PSII maximum efficiency and the electron flow to the PQ pool on the reducing side of PSII were unaffected by moderate salinity throughout the experiment, and CO2 assimilation was unaffected up to week 3. However, in the high salt treatment, the excessive accumulation of proline occurred too slowly to prevent the negative effects on photosynthetic performance (Figures 4-7). Proline is synthesized under stress conditions both in the shoot and the root and can also be transported to the root via the phloem by proline transporters [76]. Therefore, in this study, proline accumulation under salinity might have contributed to root growth, which was maintained and was only reduced at the highest salinity on the final harvest day (54). Proline homeostasis, rather than proline accumulation per se, is considered important for the maintenance of cell division under abiotic stress [77]; however, the effect of the temporal and spatial concentrations of proline (basal versus elevated levels) on plant growth in response to stress is yet to be determined [77]. The pronounced leaf senescence observed under high salinity after day 15 can be initially induced by the osmotic phase of salinity, when growth inhibition and metabolic changes occur [15,28]. Relative electrolyte leakage increased in leaves exposed to 19.97 dS m−1 NaCl from harvest day 32 onwards, whereas a delayed increase (final harvest day) was observed in 5.44 dS m−1 NaCl-treated plants. The premature leaf senescence of M. × giganteus and the damage to membrane structure induced by high salinity concentrations may be a result of the excessive ion accumulation in shoots and leaves, especially Cl− and Na+ at toxic levels, which could also explain the reduction in photosynthesis. Both salinity treatments affected the ionic balance of M. × giganteus leaves, stems, and rhizomes (Figure S4; Tables 6 and 7). In many plant species, high NaCl concentrations act antagonistically to the uptake of nutrients such as K+, Ca2+, and Mg2+, reducing their concentrations [78-80].
The significant accumulation of sodium (Na) in leaves was observed only in the highest salinity treatment, whereas stems and rhizome accumulated total Na under both salinity treatments, and especially at the highest salinity level. The observed alterations in ion flux and ion distribution among plant tissues were accompanied by the induction of proline in the leaves, possibly as a measure of osmotic adjustment. However, the impact of severe salinity was too intense for proline to counterbalance the negative effects of increased electrolyte leakage and lipid peroxidation.

Ion Accumulation and Compartmentalisation Ability

Elemental composition affects cellular processes and stress tolerance, but in biomass crops it is of particular importance in affecting biomass composition. Changes in elemental composition may have beneficial or detrimental effects on biomass quality, depending on which organs are affected and the direction of change. Herein, ion accumulation varied between tissue types (leaves, stems, and rhizomes), and an ability to compartmentalise toxic ions to specific plant tissues, dependent on the severity of salinity, was also demonstrated. Under high salinity, total sodium accumulated in leaves, stems, and rhizome, whereas under moderate stress it was mainly found in the rhizome and stems. Total Na content did not differ among organs in control plants, whilst it was sequestered in the rhizome under 5.44 dS m−1 NaCl and was translocated in greater amounts to the leaves over time under 19.97 dS m−1 NaCl. Water-soluble chloride (Cl), unlike total Na, increased only in leaves and rhizome under both salinity treatments, with no treatment effect observed in the stems, possibly due to compartmentation in leaf vacuoles. Under control conditions, M. × giganteus stems accumulated the most chloride (16.2 mg g−1), and the crop may therefore be characterised as moderately tolerant to Cl− toxicity. Maintaining Ca accumulation and transport under salinity is important for enhanced tolerance [81], as Ca modulates intracellular Na+ homeostasis in plants (Munns, 2002). In both salinity treatments, Ca was translocated from the rhizome to the above-ground biomass and particularly to the leaves, whereas total K was elevated in stems at moderate salinity and in leaves under severe salinity, where it was maintained at high levels at all harvest points. This increase in both Ca and K in leaves under salinity stress may reflect an effort to maintain osmoregulation and the function of cell membranes [82]. Enhanced tolerance to salt stress has been observed in the presence of more efficient selective uptake of K+ and cellular compartmentation and distribution of K+ and Na+ in the shoots of barley [83-85]. Total K accumulated in greater amounts than total Na in stems and rhizomes in both salinity treatments (harvest day 54); in leaves, however, more Na was sequestered at the highest salinity level after harvest day 46, when it reached toxic levels, causing rapid senescence, inhibiting gas exchange, and causing extreme electrolyte leakage and an increase in lipid peroxidation. Similarly, Si accumulated to higher levels in leaves and stems, ranging from 5.5% under control conditions to 6.7% and 8.5% at 5.44 and 19.97 dS m−1 NaCl, respectively.
Non-stressed M. × giganteus has shown Si contents between 0.55 and 2.42% when grown at various locations in the US [86]. Despite the negative impact of Si on the thermo-conversion efficiency of biomass to bioenergy, there are several beneficial biological effects, including enhanced photosynthetic activity, increased resistance to pests and pathogens, reduced mineral toxicity, improved nutrient balance, and tolerance against drought and frost stress [87,88]. Increased accumulation of Si in plants has been shown to enhance growth under drought by reducing transpiration [89], as well as under salinity, partially due to a Si-induced decrease in transpiration and disruption of the Na concentration in the roots and flag leaves of wheat [90] and rice [91]. Accordingly, it is possible that the 6.7% Si accumulated in leaves under moderate salinity played a role in the maintenance of growth and possibly in the lower accumulation of Na in these leaves.

Biomass Quality and Combustion Properties

The biomass of M. × giganteus has been shown to have good combustion characteristics [92] compared to other lignocellulosic crops [93]. The ash content, as expected, was greater in leaves compared to stems and increased up to 2.7-fold and 3-fold, respectively, under severe salinity on the final harvest day. Under moderate salinity, ash content increased by 0.4-fold in leaves and 1-fold in stems (Figure 8). High ash content has been shown to significantly reduce the energy output of biomass combustion [51]. In Miscanthus, the leaves contain a higher mineral content and twice the ash content compared to stems or reproductive organs [57], which was also observed herein under all treatments. As such, the premature leaf senescence and leaf loss occurring under increasing salinity may contribute to an improved quality of the harvestable biomass and thus compensate for the total yield loss. However, despite the lower biomass quality for combustion due to high ash content, additional loss of mineral content is expected over the winter period from senescence, leaf drop, or leaching, and thus M. × giganteus could be a good candidate for growing under moderate salinity levels. The ash melting behaviour is greatly affected by the elemental composition of the ash (Na, K, P, Cl, Si, and Ca) [94]. Miscanthus is considered to have a low ash melting temperature [95-97], possibly related to the concurrent occurrence of increased Si, K, and Ca [98]. The K concentration in a biomass fuel needs to be low because of the slagging risk [92]. In this study, Si and Ca contents were mainly observed in leaves, whilst K was present in both leaves and stems. Therefore, leaves may contribute to an increase in the ash melting point and a reduction of the slagging potential. Reductions in the ratios of Ca/K and Si/K, and therefore an increasing slagging tendency, were observed only in leaves under severe salinity. Hence, the increased leaf loss in both salinity treatments may contribute to enhanced biofuel quality, in line with the results observed under regular, non-stressed growing conditions by Monti et al. [57]. To reduce emissions and lower the corrosion risks in conventional boilers, especially when the fuel is high in Cl (>1-2 g kg−1) and K (>5 g kg−1) and low in S (<2 g kg−1), the maximum steam temperature has to be kept at 450 °C [92]. In this study, the Cl, K, and S concentrations were much lower in both leaves and stems, similar to the results for M. × giganteus reported by Monti et al. [57] under control conditions.
Minor corrosion is likely to occur if the 2S/Cl molar ratio in the fuel is >4 [99]. Herein, the corrosion potential based on the molar ratio 2S/Cl was higher in stems (1.34) compared to leaves (0.28) on the final harvest day and was not affected by the salinity treatments, indicating a lower corrosion potential. Therefore, despite leaves having a higher accumulation of ion contents, their presence may contribute to an increase in the ash melting point and a reduction of the slagging potential. The greater increase in the proxy estimate of the base-to-acid ratio (Rb/a) of leaves under severe salinity, in relation to the decline in stems, indicates a lower risk of slagging in stems under the high salinity treatment. Among organs, stems showed a consistently higher base-to-acid ratio, except under high salinity stress, where no differences were observed. Nevertheless, the ratio was higher under all treatments in relation to the recommended values of <0.5 for a low risk of slagging, with >1 indicative of severe slagging problems [55]. We have addressed key knowledge gaps in unravelling Miscanthus response mechanisms to moderate and high salinity, which could be the basis for enhancing crop adaptation to climate change. The concluding remarks that can be drawn from this research are: (i) M. × giganteus is identified as tolerant to moderate salinity stress due to osmotic adjustment, and therefore can be cultivated on moderately salt-affected lands as an energetically suitable bioenergy crop that would balance the energetic input in terms of fertilization and cultivation requirements without diminishing the combustion quality; (ii) the effects of salinity on C4 photosynthesis are mediated by both stomatal and metabolic limitations, depending on the salt concentration; (iii) ion accumulation varied with the type of tissue, and an ability to compartmentalise toxic and essential ions to specific tissues was demonstrated; (iv) proline accumulation in leaves was induced by increasing salinity, which previous studies have shown to have an osmoregulatory role in protecting metabolically related photosynthetic processes and reducing lipid peroxidation under moderate salinity; and (v) the duration and intensity of increasing salinity inhibited the production of biomass, which was unaffected by moderate salinity. The results presented herein revealed the potential for growth of M. × giganteus in saline areas and may contribute to a wider understanding of the mechanistic effects of moderate and severe salinity on the morphophysiology, photophysiology, composition, and biomass quality of M. × giganteus. This approach is expected to provide insights into new targets for breeding salinity-tolerant bioenergy crops by dissecting the salt-induced osmotic stress and ion toxicity effects, in order to highlight the potential for biomass production on underutilized or abandoned land. Conventional harvests under field conditions are recommended to better understand the effect of salinity on biomass quality and combustion properties, because composition may differ due to additional loss of mineral content from senescence, leaf drop, or leaching.

Plant Material and Experimental Design

The experiment was conducted at IBERS, Aberystwyth University, Wales, UK, in controlled glasshouse conditions (Venlo) with a 16/8-h day/night photoperiod under supplemental lighting providing approximately 500 µmol photons m−2 s−1 of photosynthetically active radiation, and 25/15 °C day/night temperatures.
M. × giganteus plants were established from approximately 20 g rhizome pieces and grown in 6.2 L pots containing John Innes No. 2 compost (Levington®, Evergreen Garden Care Ltd., Surrey, UK). A homogeneous population was selected and grown to seven fully expanded leaves. Two NaCl concentrations (5.44 and 19.97 dS m−1, equivalent to 60 and 210 mM NaCl, respectively) and zero salt content (control), selected from our previous work [7] as indicative of the induction of different responses in M. × giganteus, were supplied via irrigation. To avoid osmotic shock, increasing rates of 5.44 dS m−1 NaCl were applied gradually each day until all treatments reached the target concentration (approximately on day 18; Figure S5). Plants were irrigated with half-strength Hoagland's solution [100] every 2 weeks, with the electrical conductivity adjusted to the experimental salt concentrations. Moisture content and electrical conductivity were measured as the average of three measurements per pot using a WET sensor (WET; Delta-T Devices Ltd., Cambridge, UK) inserted at three roughly equidistant points around the surface of the pot, with readings recorded by a hand-held moisture meter (HH2 moisture meter; Delta-T Devices Ltd., Cambridge, UK). A total of 60 plants (3 pots/m2) were treated for 54 days in a completely randomised design with 20 biological replicates per treatment and four harvest time points, at 19, 32, 46, and 54 days, for destructive measurements, with n = 5 biological replicates per treatment (days 1-16: n = 20; days 17-26: n = 15; days 27-37: n = 10; and days 38-54: n = 5). All morphological and physiological measurements were performed twice every week between 09:00 and 14:00 h.

Morphological Measurements

The number of senesced (dead) leaves was assessed by counting the leaves that were completely senesced, whether attached to or detached from the plant. Stem length of the longest stem was measured from the ligule of the youngest fully expanded leaf to the base of the stem at soil level. Leaf area (LA, cm2) was determined from length and width (at half leaf length) measurements of the youngest fully expanded leaf with a ligule, as described by Clifton-Brown and Lewandowski (2000), where LL is the leaf length (cm) and LW is the leaf width at half LL (cm). Harvested plants were separated into leaves, stems, rhizomes, and roots, and the final morphological parameters were measured (n = 5). Above- and below-ground biomass was harvested and fresh weight (FM) was measured, followed by drying at 60 °C to a constant weight to estimate dry matter (DM).

Stomatal Conductance (gs)

Measurements of stomatal conductance (gs, mmol m−2 s−1) were performed using an AP4 diffusion porometer (Delta-T Devices Ltd., Cambridge, UK). A single reading was recorded after conductance readings had stabilised for at least three and no more than five cycles.

Relative Water Content (RWC)

Relative water content (RWC) indicates the hydration state of the leaf; it is a function of the water content of a leaf (n = 5) relative to its fully hydrated (fully turgid) state and is calculated using the following equation:

RWC (%) = (FM − DM) / (TW − DM) × 100

where FM is the fresh matter, TW the turgid weight, and DM the dry matter of the leaf sample. To measure the fresh matter (FM), screw-cap tubes (2.5 mL) were weighed and numbered. Leaf discs from each plant (days 1-16: n = 20; days 17-26: n = 15; days 27-37: n = 10; and days 38-54: n = 5) were harvested into the capped tubes and stored on ice until all samples were collected and weighed.
The turgid weight (TW) was assessed by rehydrating the freshly weighed leaves, floating them on distilled water in a Petri dish for 3-4 h. To determine DM, the cap was removed and the samples were dehydrated at 60 °C overnight. After reaching a constant weight, the tubes were sealed, left to cool at room temperature, and reweighed. Changes in RWC are proportional to alterations in leaf turgor state; thus, it is considered an indirect measure of change in turgor under certain conditions.

Relative Chlorophyll Content

Relative chlorophyll content was measured according to Stavridou et al. [7] on three leaves per plant and 5 biological replicates per treatment and time point, using a SPAD-502 meter (Konica Minolta Optics Inc., Osaka, Japan).

In Situ Chlorophyll Fluorescence

The assessment of chlorophyll a fluorescence in the dark-adapted state was performed on the adaxial leaf surface of the youngest fully expanded leaf with a ligule, using a Handy PEA chlorophyll fluorimeter (Hansatech Instruments Ltd., King's Lynn, UK) after 30 min of dark adaptation. The fluorescence parameters of maximal fluorescence (Fm), minimal fluorescence (Fo), variable fluorescence (Fv), maximal quantum efficiency of PSII photochemistry (Fv/Fm), and the performance index (PI), amongst others, were calculated based on Strasser et al. (2000, 2004) using the manufacturer's software (Hansatech Instruments Ltd., King's Lynn, UK). The area above the fluorescence curve between Fo and Fm is essentially proportional to the pool size of the electron acceptors, Qa, on the reducing side of photosystem II (PSII) and is calculated by the Handy PEA fluorimeter. The area component is an informative parameter highlighting alterations in the shape of the induction kinetics between Fo and Fm. The hypothesis is that the area should be reduced if proton donation is reduced from a lack of H2O and the electron transfer from the reaction centres to the quinone pool is blocked. The reduction in the area component may thus explain alterations in the shape of the induction kinetics between Fo and Fm.

Photosynthetic Intercellular-CO2 Response Curves

Gas exchange data were used to predict the variables of the von Caemmerer [101] C4 model, and the Excel fitting tool (EFT) [102] was used to derive a set of C4 photosynthetic parameters. The measurements of the response of A to intercellular CO2 (Ci) were conducted on the fully expanded leaf with a ligule (n = 5), using a portable infra-red gas analyser GFS-3000FL (Walz Measurement Instrumentation, Effeltrich, Germany). Prior to the A/Ci curves, a light response curve was performed to identify the light-saturating point of photosynthesis. Measurements of A were made starting at photosynthetically active radiation (PAR) levels of 500, 1000, 1249, 1500, 2000, and 500 µmol m−2 s−1, while the [CO2] was kept at 390 µmol mol−1. Leaves were initially dark-adapted (approximately 20 min), so that all PSII centres were in an open state and energy dissipation through heat would be minimal, and were then placed in the cuvette. After 10 min of complementary dark adaptation, a measurement of Fv/Fm was recorded. The photosynthetic photon flux density (Q) was then maintained at 1500 µmol m−2 s−1 using a chamber-integrated red-blue light source. Measurements of A were initiated at 400 µmol mol−1 [CO2] surrounding the leaf for 10 min, allowing the leaf to reach a stable value [103].
The CO2 concentrations were then changed stepwise to the following levels in sequence: 390, 200, 100, 75, 50, 25, 400, 600, 800, and 1500 µmol mol−1 [CO2]. The leaf remained at each CO2 level until a stable A was determined. The leaf temperature was controlled at 26.4 °C, and the vapour pressure deficit (VPD) of the air entering the gas exchange system was 7 Pa kPa−1 on average. The response of A to Ci at Ci < 70 µmol mol−1 was used to solve for Vpmax [101]. The CO2-saturated photosynthetic rate (Vpr) was estimated from the horizontal asymptote of a non-rectangular hyperbolic function for each A/Ci curve. For each A/Ci response curve, the carboxylation efficiency (CE) of PEPc was calculated as the slope of the initial linear portion of the curve (Ci < 100 µmol mol−1), where photosynthesis is controlled by PEP regeneration and/or carboxylation limitation within the bundle sheath. The operating point of photosynthesis (Ci,400) was calculated as the Ci that corresponds to a given Ca of 400 µmol mol−1, fitted using a linear regression of the ratio of intercellular to growth CO2 (Ci/Ca) for each individual leaf [104,105]. The photosynthetic rate where Ci = Ca (400 µmol mol−1) represents the hypothetical scenario in which there is no stomatal limitation to photosynthesis. The percent reduction in photosynthesis due to stomatal limitation (Ls) was calculated from each replicate A/Ci curve according to [104] as

Ls (%) = 100 × (Ao − A) / Ao

where Ao is the assimilation rate that would occur if resistance to CO2 diffusion to the sites of carboxylation were zero (i.e., when Ci = Ca, with Ca being the ambient concentration of CO2) and A is the actual rate at the Ci corresponding to the normal Ca. For illustrative purposes, mean A/Ci response curves were fitted with a non-rectangular hyperbola for all data pooled within each genotype and treatment (Figure S4).

Intrinsic Leaf Water Use Efficiency (WUEi)

Intrinsic leaf water use efficiency (WUEi) was assessed as the ratio of CO2 assimilation (A) over stomatal conductance (gs), i.e., WUEi = A/gs, at photon fluxes of 300 (net irradiance) and 1500 µmol m−2 s−1 (saturating irradiance) (n = 5). A/gs is considered a more realistic parameter that is comparable between studies, as it is not affected by alterations in leaf-to-air VPD in the leaf chamber [106,107].

Proline Content and Lipid Peroxidation

For the analyses of proline content and lipid peroxidation, sampling of leaves was performed on 5 biological replicates. The proline (µmol g−1 FW) cold extraction procedure was performed according to [108], by mixing 20 mg of leaf fresh weight (FW) aliquots with 400 µL of ethanol:water (40:60 v/v). Proline content was measured spectrophotometrically at 520 nm with a micro-plate reader (µQuant; Bio-Tek Instruments, Winooski, VT, USA) using KC4 software (v. 3.3; Bio-Tek), following the method of Carillo et al. [109], from three biological and three technical replicates per treatment. Lipid peroxidation was assessed from the total content of 2-thiobarbituric acid reactive substances (TBARS), expressed as equivalents of malondialdehyde (MDA), which has been extensively used as a biomarker for lipid peroxidation [110], using the method of [111] with the following modifications: ground leaf powder (0.25 g) was homogenised in 1 mL of 0.1% (w/v) trichloroacetic acid (TCA) solution and centrifuged at 12,000× g for 10 min.
The supernatant was added to 1 mL of 0.5% (w/v) thiobarbituric acid (TBA) in 20% TCA. The mixture was incubated for 30 min at 95 °C, and the samples were then placed in an ice bath to stop the reaction and briefly vortexed. Aliquots of 200 µL from each sample were placed in triplicate in a 96-well plate. The absorbance of the supernatant was measured at 532 nm and 600 nm using a micro-plate reader (µQuant; Bio-Tek Instruments, Winooski, VT, USA) using KC4 software (v. 3.3; Bio-Tek). The MDA-TBA complex (red pigment) was assessed according to Equation (4) [112]:

MDA (nmol g−1 FW) = [(A532 − A600) × V]/(ε × FW), (4)

where A532 and A600 (non-specific absorption) are the absorbances at 532 nm and 600 nm, respectively; ε = 155 mM−1 cm−1 is the extinction coefficient; V is the volume of the extract (mL); and FW is the fresh weight of each sample (g).

Ash Content
The ash content (%) was determined as previously described in [7]. Ground leaf sample (1 g) (n = 5) was initially dried overnight at 100 °C in previously weighed beakers (25 mL). The samples were placed into desiccators to cool and were weighed. Subsequently, the samples were placed in a muffle furnace at 550 °C for 16 h and then in an oven at 100 °C to lower the sample temperature, and the ash was weighed after 30 min. The ash content (%) in each sample was calculated according to Equation (5):

%Ash (dry basis) = [mass of the ash sample (g)/mass of the original dried sample (g)] × 100 (5)

Elemental Content Analysis
For the analysis of the elemental content, oven-dried (60 °C) and ground leaf (n = 3; at four time points) and stem and rhizome (n = 5; at harvest) samples were sent to NRM laboratories (Bracknell, UK). Plant tissue analysis for the determination of total elements was performed using microwave digestion. The sample was digested in concentrated nitric acid at high temperature and pressure, to avoid the development of strong oxidising agents that would destroy organic matter and break down the mineral matrix of the sample. The total elements of potassium, magnesium, calcium, sulphur, sodium, iron, aluminium, and titanium in solution were then assessed by Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES) (The Analysis of Agricultural Materials, MAFF Reference Book RB427, ISBN 0 11 242762 6). Chloride was extracted from the dried plant material with deionised water and determined by ion chromatography. Quantification was performed by peak area or height. The method of known additions was used to resolve uncertainties of identification or quantification. All recovery data were between 90 and 100%. The lowest detectable concentration of an anion was determined as a function of the sample size and the conductivity scale used; generally, this is 0.01% for chloride (Standard Methods for the Examination of Water and Wastewater 1985, 16th Edition). For the silicon (Si) assay, the acid-insoluble ash method was used. Residual dry matter was assessed gravimetrically as the residue remaining after drying at 102 °C for 16 h. Acid-insoluble ash is the insoluble residue remaining after the sample was ignited in a muffle furnace at 450 °C and the ash was treated with hydrochloric acid. The insoluble residue after acid treatment was filtered, washed, and ignited at 600 °C [16,17]. The base-to-acid ratio (Rb/a) (Equation (6)) is often used to determine the fouling tendency of a fuel ash (Baxter et al., 2012):

Rb/a = (Fe2O3 + CaO + MgO + K2O + Na2O)/(SiO2 + TiO2 + Al2O3) (6)

The Rb/a and %Base indices were calculated from the estimated oxides of each compound in the biomass as an approximation to their content in the ash.
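As a worked illustration of these fuel-ash indices, the following is a minimal sketch (not the authors' code; the oxide set mirrors Equation (6), the element-to-oxide conversion is the one described in the next paragraph, and the input values are hypothetical placeholders rather than data from this study):

```python
# Illustrative sketch: computing the base-to-acid ratio R_b/a (Equation (6))
# and the 2S/Cl corrosion index from elemental contents (wt%, dry basis).
# Input values are placeholders, not data from this study.

ATOMIC = {"Fe": 55.85, "Ca": 40.08, "Mg": 24.31, "K": 39.10, "Na": 22.99,
          "Si": 28.09, "Ti": 47.87, "Al": 26.98, "S": 32.06, "Cl": 35.45}
OXIDE = {"Fe": ("Fe2O3", 159.69, 2), "Ca": ("CaO", 56.08, 1),
         "Mg": ("MgO", 40.30, 1), "K": ("K2O", 94.20, 2),
         "Na": ("Na2O", 61.98, 2), "Si": ("SiO2", 60.08, 1),
         "Ti": ("TiO2", 79.87, 1), "Al": ("Al2O3", 101.96, 2)}

def to_oxide(element: str, wt_percent: float) -> float:
    """Convert element wt% to its oxide-equivalent wt%:
    multiply by (oxide MW) / (n atoms of the element x element AW)."""
    name, mw, n = OXIDE[element]
    return wt_percent * mw / (n * ATOMIC[element])

def base_to_acid(wt: dict) -> float:
    """R_b/a = (Fe2O3 + CaO + MgO + K2O + Na2O) / (SiO2 + TiO2 + Al2O3)."""
    bases = sum(to_oxide(e, wt.get(e, 0.0)) for e in ("Fe", "Ca", "Mg", "K", "Na"))
    acids = sum(to_oxide(e, wt.get(e, 0.0)) for e in ("Si", "Ti", "Al"))
    return bases / acids

def corrosion_2s_cl(wt: dict) -> float:
    """2S/Cl index from molar amounts: 2*(S/AW_S) / (Cl/AW_Cl)."""
    return 2 * (wt["S"] / ATOMIC["S"]) / (wt["Cl"] / ATOMIC["Cl"])

leaf = {"K": 1.8, "Na": 0.9, "Ca": 0.4, "Mg": 0.2, "Fe": 0.01,
        "Al": 0.005, "Si": 0.6, "Ti": 0.001, "S": 0.25, "Cl": 1.1}
print(f"R_b/a = {base_to_acid(leaf):.2f}, 2S/Cl = {corrosion_2s_cl(leaf):.2f}")
```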
The weight concentration of an element as its oxide is obtained by multiplying by the ratio of the oxide molecular weight to the element atomic weight. The corrosion potential index 2S/Cl was estimated from the molar amounts of S and Cl, each calculated as the ratio of the element weight to its atomic weight.

Statistical Analyses
All statistical analyses were performed using R (R Core Team, 2016). The effects of the different salinity treatments on the morphophysiological, photophysiological, and biochemical parameters compared to the control plants were assessed using one-way ANOVAs, whereas the time-course measurements were assessed using two-way ANOVAs (salinity as a between-subjects effect and day as a within-subjects effect) with the afex package [113]. Data were tested for normality (Shapiro test), and transformations were attempted when normality failed. For the two-way ANOVAs, data were also tested with Mauchly's test for sphericity, and if the assumption of sphericity was violated, the corresponding Greenhouse-Geisser corrections were applied. If significant differences were observed among treatments, Tukey's HSD post-hoc test was performed to determine specific treatment, time point, and interaction differences using the agricolae package [114].

Supplementary Materials: Figure S5: Total element contents for K, Na, Cl, Ca, Mg, S and Si of M. × giganteus leaves (green bars), stems (light brown bars) and rhizomes (dark brown bars) at 0, 5.44, and 19.97 dS m−1 NaCl on the final harvest day (day 54). Table S1: Tukey HSD (THSD) post-hoc test for the effects of treatment and harvest day on the FM of the above- and below-ground biomass, leaves, stems, roots and rhizomes at 0, 5.44, and 19.97 dS m−1 NaCl. Table S2
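For the ANOVA workflow described under Statistical Analyses, a minimal Python analogue of the one-way ANOVA with Tukey HSD post-hoc comparison at a single time point would look like the following (illustrative only; the study itself used R with the afex and agricolae packages, and the data below are hypothetical):

```python
# Illustrative Python analogue (not the authors' R workflow) of the one-way
# ANOVA with Tukey HSD post-hoc test; values are hypothetical placeholders.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RWC data: 5 biological replicates per salinity level (dS m^-1)
data = pd.DataFrame({
    "salinity": ["0"]*5 + ["5.44"]*5 + ["19.97"]*5,
    "rwc":      [92, 91, 93, 90, 92, 85, 86, 84, 87, 85, 72, 74, 71, 73, 70],
})

# Shapiro-Wilk normality check per group, as in the text
for level, grp in data.groupby("salinity"):
    print(f"Shapiro p (salinity {level}): {stats.shapiro(grp['rwc']).pvalue:.3f}")

# One-way ANOVA across salinity treatments
groups = [grp["rwc"].values for _, grp in data.groupby("salinity")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD post-hoc test if the ANOVA is significant
if p_val < 0.05:
    print(pairwise_tukeyhsd(data["rwc"], data["salinity"], alpha=0.05))
```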
Case Report: retained gutta-percha as a cause for persistent maxillary sinusitis and pain

Dental sources of infection can produce acute and chronic maxillary sinusitis. In some cases, the source of the infection may be related to the presence of endodontic materials in the oral cavity. In this article, we report a case of retained gutta-percha in the maxillary sinus resulting in chronic sinusitis. There are multiple case reports in the otolaryngology literature of projectiles that have become lodged in the paranasal sinuses 1. Retained packing gauze from endoscopic sinus surgery has also been reported in the paranasal sinuses 2. In the endodontic literature, many reports and reviews have described various materials that can cause disease of the maxillary sinuses 3. In this case report, we present an interesting case of a patient who presented with recalcitrant maxillary sinusitis that was ultimately found to be related to retained endodontic material in her maxillary sinus.

Case report
A 26-year-old Caucasian woman presented with a chief complaint of left-sided maxillary pain with intermittent, discoloured nasal drainage. Seven years prior to the current presentation, the patient had reported a history of headaches, nasal congestion, and bilateral discoloured drainage refractory to prednisone, antibiotics, and endoscopic sinus surgery at an outside facility. Revision endoscopic sinus surgery one year after the initial surgery provided partial relief. Two years prior to the current presentation, her left maxillary pain recurred, and a computed tomography (CT) scan revealed a left wisdom tooth projecting into her maxillary sinus. Following wisdom teeth extraction, she had marked improvement in pain and nasal drainage; however, 8 weeks later, left maxillary pain returned and was associated with left-sided yellow nasal discharge. Her oral surgeon discovered an infection in the molar adjacent to the extracted wisdom tooth and performed a root canal. Two weeks following the root canal, she presented with continued left maxillary tooth pain and left-sided discoloured nasal discharge. Extraction of the molar failed to resolve her pain, and she was subsequently referred to the facial pain/headache clinic by her dentist, where she was prescribed gabapentin 300 mg TID, which was ineffective. Her dentist also prescribed her multiple courses of clindamycin 150 mg TID and guaifenesin 600 mg QID, which would improve her pain and discoloured drainage. However, her symptoms would return after completing the antibiotics. On examination with rigid endoscopy, she had widely patent maxillary, ethmoid, and frontal sinus ostia with no purulence or polyposis. On flexible endoscopic examination of the floor of her left maxillary sinus, white to slightly yellow mucus was found. Aspiration of the mucus relieved the patient's discomfort. Maxillary sinus cultures revealed few polymorphonuclear leukocytes, few mononuclear cells, and no microorganisms. Her most recent CT scan, from two years prior to presentation at our facility, was remarkable for minimal mucosal thickening of the floor of the left maxillary sinus and was otherwise normal (Figure 1). An occult dental infection was considered high
in the differential diagnosis; however, because no actual dental infection could be demonstrated, a medial maxillectomy was considered in order to facilitate topical washing of the left maxillary sinus. Follow-up appointments in the facial pain/headache clinic found "some features of migraine, but it is unclear whether headaches represent primary or secondary headaches with migraine features". Repeat evaluations by her dentist found no evidence of a dental infection. Because of past improvement on an 8-week course of clindamycin 300 mg TID, she was prescribed a 12-week course of clindamycin 300 mg TID before surgery was considered, and she was referred to an infectious disease specialist. Clindamycin initially improved her symptoms, but midway through the course her congestion returned. A bone scan was positive in the region of the left maxilla; however, a repeat indium scan was negative. Her CT scan was repeated, and she was referred to an oral surgeon for evaluation (Figure 2-Figure 4). The CT scan was evaluated by an oral and maxillofacial surgeon, who noted that there were two small remnants of the prior left maxillary root canal that had been performed two and a half years earlier (Figure 2-Figure 4). A combined endoscopic and Caldwell-Luc approach under computer-assisted navigation to drill out the retained gutta-percha in the maxillary sinus resolved the patient's pain and drainage immediately, without recurrence at her three-month follow-up. No pathological specimens were available for analysis, as the gutta-percha remnants were drilled out during the surgery.

Discussion
Gutta-percha, a product of tropical rubber plants, has been used since the mid-1800s as an endodontic filling material following root canal procedures 4. The chemical structure of gutta-percha is a trans-isomer of polyisoprene, or natural rubber; however, it is more crystalline than natural rubber and is often formulated with medications such as zinc oxide, iodoform, chlorhexidine, and calcium hydroxide, which contribute to both its antibacterial and antifungal activity 4-7. Animal studies have shown that gutta-percha becomes encapsulated by fibrous connective tissue with little inflammatory reaction 8. Maxillary sinus complications from gutta-percha from root canals are rare. According to a previous case report, gutta-percha from a maxillary tooth root canal can migrate to and obstruct the maxillary ostium 9. Gutta-percha has also been reported to migrate into the ethmoid sinus 10. In the current patient, retained gutta-percha in the maxillary sinus resulted in chronic inflammation and a persistent sinusitis-type picture with nasal congestion, pain, and drainage. Her symptoms preceding the sinus surgery may or may not have been related to dental infection, but she clearly improved following wisdom teeth extraction, and she relapsed due to the infection of the adjacent molar. The retained gutta-percha prevented the resolution of infection and symptoms, even following extraction of the affected tooth. Partial improvement with antibiotics directed at usual dental pathogens provided some evidence that the etiology of her symptoms was a dental infection. In patients with persistent sinusitis and a history of endodontic procedures, an evaluation for dental materials retained in or near the sinuses may be warranted to rule out an additional source of infection.
Removal of these retained dental materials may require an external approach with drilling, which can be facilitated by endoscopic visualization through the Caldwell-Luc procedure. A review of the otolaryngology literature did not provide any additional case reports on retained gutta-percha; in fact, only two case reports were found in the oral and maxillofacial surgery literature that described retained gutta-percha in the paranasal sinuses. In the oral surgery and endodontic literature, there are multiple reports of aspergillosis occurring in the maxillary sinus as a result of overextension of root canals of maxillary teeth, especially when using materials containing zinc oxide or formaldehyde 3,11. One case series of aspergillosis of the maxillary sinus was found in the otolaryngology literature. In this series, 85 cases of aspergillosis of the maxillary sinus in non-immunosuppressed patients were reviewed. Of these, 94% presented evidence of a radio-opaque foreign body in the maxillary sinus, with 85% of the cases related to endodontic dental paste 12. Our case report highlights the importance of investigating alternative sources of infection in cases of recalcitrant sinusitis. Dental sources of infection, as well as retained or overextended endodontic materials, should be investigated in patients with unexplained, chronic sinusitis.

Consent
Written informed consent for publication of clinical details and clinical images was obtained from the patient.
Fermion RG blocking transformations and conformal windows

We explore fermion RG blocking transformations in LGT with the aim of studying IR behavior. In the case of light fermions the main concern is ensuring locality of any adopted blocking scheme. We study how an exact fermion blocking transformation may be constructed which is manifestly local in a general gauge field background. We also present an approximate, easily implementable scheme for combined fermion-gauge field blocking, which, as a first modest goal, may allow quick, inexpensive estimates of the location of conformal windows for various groups and fermion representations. We apply such a scheme to SU(2) and SU(3).

Introduction
In view of the LHC, a lot of attention is currently being given to the phase structure of non-Abelian gauge theories with varying fermion content. For a sufficiently large number of fermions, asymptotic freedom (AF) and confinement are lost. It is generally believed that there is an intermediate range with a UV AF fixed point but no confinement; instead there should be an IR fixed point resulting in conformal IR dynamics. Exploring the dependence of these 'conformal windows' on the gauge group, the number of fermion multiplets and their representation, and the structure of the associated IR fixed points, i.e. their relevant directions and anomalous dimensions, is of central importance for informed model building "beyond the standard model".

Several approaches have been employed over the last few years in order to investigate these issues [1]. One popular method is to adopt some definition of a running 'coupling', e.g. the 'Schroedinger functional' coupling definition. Since any definition of a 'coupling' outside the perturbative regime is necessarily arbitrary, any such choice is scheme-dependent. Caution should then be exercised in interpreting results. Another approach is to compute the spectrum and/or some judiciously chosen physical observable and track their deformation as a function of the number of fermions and their masses. This should, in principle, yield an unambiguous outcome, but it is clearly very expensive to carry out in practice to the degree of accuracy required for unambiguous results. Furthermore, it is not always clear what the expected behavior should be.

The above may be classified as indirect methods. A direct method for getting at the phase structure would be implementation of the Wilsonian Renormalization Group (RG), i.e. RG blocking transformations for direct determination of the effective action RG flow and its fixed points. Conceptually, this is the most straightforward and unambiguous approach, but it is challenging to set up in the presence of light fermions. Studies using the two-lattice matching MCRG method have recently appeared in [2].

Fermion RG blocking
In devising RG blocking transformations for light fermions the main concern is maintaining reasonable locality of the blocked action after each blocking step. This is a question of interest in its own right, apart from issues of practical implementation. For free fermions a blocked action can be defined along the lines below (eqs. (2.3)-(2.4)) that can be shown to maintain locality [3]. This has been used to argue [4] the locality of determinant rooting for free fermions. This locality demonstration, however, cannot be extended in the presence of an arbitrary gauge field background.
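Before turning to the blocking construction, it is useful to recall the standard perturbative point of reference for these windows (a two-loop estimate, not part of the scheme developed below): for SU(N) with Nf fundamental Dirac flavors, asymptotic freedom is lost where the one-loop coefficient b0 vanishes, while a perturbative (Banks-Zaks) IR fixed point first appears where the two-loop coefficient b1 changes sign. A minimal sketch:

```python
# Standard two-loop estimate (illustrative; not the decimation scheme of this
# paper): bracket the conformal window for SU(N) with Nf fundamental Dirac
# flavors from the zeros of the one- and two-loop beta function coefficients.
def conformal_window_bounds(N):
    C2G = N                      # adjoint Casimir C2(G)
    TR = 0.5                     # fundamental trace normalization T(R)
    C2R = (N * N - 1) / (2 * N)  # fundamental Casimir C2(R)
    nf_af = (11.0 / 3.0 * C2G) / (4.0 / 3.0 * TR)                          # b0 = 0
    nf_bz = (34.0 / 3.0 * C2G**2) / ((20.0 / 3.0 * C2G + 4.0 * C2R) * TR)  # b1 = 0
    return nf_bz, nf_af

for N in (2, 3):
    lo, hi = conformal_window_bounds(N)
    print(f"SU({N}) fundamental: two-loop IR fixed point for {lo:.2f} < Nf < {hi:.2f}")
```

This gives Nf between about 5.55 and 11 for SU(2) and between about 8.05 and 16.5 for SU(3); the direct RG methods discussed below aim at the true, nonperturbative lower edge, which need not coincide with the two-loop estimate.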
To begin exploring this issue, consider the Wilson operator in a background gauge field U:

D_0 = 1 − κ M , (2.1)

where the only non-vanishing elements of the matrix M are

M_{n, n+μ̂} = (1 − γ_μ) U_μ(n) ,  M_{n, n−μ̂} = (1 + γ_μ) U_μ†(n − μ̂) . (2.2)

A fermion RG blocking step specifies a transition from the fermion degrees of freedom on the original lattice Λ to 'thinned-out' degrees of freedom on a lattice Λ′. With Λ a lattice of spacing a, let Λ^(1) denote the lattice of spacing 2a. We take Λ′ = Λ^(1) ∪ S, where S is a subset of Λ to be specified later. We introduce variables Ψ^(1) on Λ′ = Λ^(1) ∪ S through a Gaussian blocking kernel, of width parameter α, inserted under the original fermion integral ∫ Dψ̄ Dψ exp(−ψ̄ D_0 ψ) (2.3). The rectangular matrices Q: Λ′ → Λ entering the kernel are given in (2.4). Note that α → ∞ gives a δ-function definition of the Ψ^(1)'s, i.e. Ψ^(1)_n = (Qψ)_n. Integrating out the original variables, one obtains on the new lattice Λ′ a blocked action (2.6) involving the propagator G of the original fields. In general, with light fermions, G is a very non-local propagator. We now choose the set S and the decimation parameter ζ in (2.4) so that in the action (2.3) one cancels the nontrivial part of D_0. This is accomplished by setting ζ = κ/α, and taking the set S as shown in Fig. 1(a), i.e. S, viewed as a subset of Λ, is the set of the interior sites of the boundary plaquettes of the 2×2 hypercubes that are the elementary d-cells of Λ^(1). The propagator G for the original variables ψ in (2.3) is then given by (2.9), and the action in (2.6) from the integration over the original ψ becomes (2.10), which is manifestly local on Λ′ = Λ^(1) ∪ S. In particular, note that from (2.9), taking α large, interactions induced by G may be expanded in a hopping expansion in κ²α and resummed. In fact, in the limit α → ∞, i.e. for the δ-function definition of the blocked variables (cf. above), G becomes ultralocal, and in fact U-independent, and the action goes over to its ultralocal limiting form. After this blocking step we end up with fermions on the boundary of the blocked cube, i.e. a form of exact 'potential moving' fermion blocking, albeit on a non-hypercubic lattice. The fermion action is still bilinear in the blocked fermion fields, which interact via 'fat' gauge field bonds (Fig. 1(b)). The gauge field bond variables are yet to be blocked. It is important to note that if one starts integrating out gauge field bonds inside each blocked hypercube, four- and higher-fermion interactions will in general be induced (but of course at the scale of the blocked cube). Ultralocality of G in this connection completely avoids potential difficulties from the Det G^{−1} factor in (2.6) in performing any such accompanying gauge field blocking.

Figure 1: (a) The set S, as a subset of Λ, is the set of sites reached from the interior site of each 2×2 hypercube by displacement by a in each lattice direction; (b) fermions at sites in S interact with fermions on Λ^(1) and among themselves by 'fat links' consisting of all possible 'staples' of length 2a, a subset of which is indicated by the broken lines.
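The locality property at the heart of this construction can be illustrated numerically in a stripped-down setting (a free, one-dimensional, single-component toy with trivial gauge field and no γ-matrix structure; these simplifications go well beyond the text): the propagator G = (1 − κM)^(−1) of a hopping operator decays exponentially with distance for small κ, which is what makes hopping expansions of the kind used above controllable.

```python
# Toy numerical illustration (free 1D case, not the full construction of the
# text): |G(x,y)| of a Wilson-type hopping operator decays exponentially in
# |x - y| for small hopping parameter kappa.
import numpy as np

L, kappa = 64, 0.12
M = np.zeros((L, L))
for n in range(L):                  # nearest-neighbour hopping, periodic chain
    M[n, (n + 1) % L] = 1.0
    M[n, (n - 1) % L] = 1.0

G = np.linalg.inv(np.eye(L) - kappa * M)
for r in (1, 2, 4, 8, 16):
    print(f"|G(0,{r:2d})| = {abs(G[0, r]):.3e}")
```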
We have devised a fermion blocking step in a general gauge field background that results in a manifestly local action. How to perform successive fermion blocking steps in a fixed gauge field background, under the requirement that the action be manifestly local at the scale of each step, is an interesting question in its own right. The following is a sufficient condition: for the n-th step, implement the above blocking procedure integrating out the fermions on the sites satisfying the conditions (2.12). Then the n-th step blocked fermions interact again via bilinear interactions through gauge field fat links of maximum length 2^n (in units of a). One can actually do better if one allows maximum length 2^{n+1} and performs a two-stage blocking of fermions interior to each blocked hypercube. The general lesson, though, is that one cannot achieve a manifestly local blocked fermion action residing on a purely hypercubic lattice, i.e. without some non-empty analog of the set S above (cf. Fig. 1). This seems to accord with previous studies of fermion actions [5].

Full RG blocking and approximation schemes
For the physical questions we are here interested in, i.e. determination of the RG flow and its fixed points, RG blockings of both fermions and gauge fields must of course be implemented. A local blocking step for fermions such as the above must then be accompanied by a gauge field blocking step. It is of course possible to define a variety of such blocking transformations, familiar examples of which occur in the literature, in particular in connection with MCRG. Actually implementing the gauge blocking, however, can generally be done only numerically. In the case of pure gauge theories, MCRG provides a tried method. In the presence of fermions, however, it is not clear exactly how MCRG should be applied. One may, of course, proceed to integrate out the fermions completely, but then the resulting action is very non-local, and application of the ideas underlying the MCRG method is problematic. We will return to these issues elsewhere. Here, in order to get some insight into what to expect, we opt for the modest goal of devising and trying out an approximate simple decimation scheme that can be easily carried out.

To obtain such a scheme we proceed as follows. Eliminate first the fermions on S and compensate by adjusting the hopping parameter of the remaining fermions residing on the hypercubic lattice; this may be viewed as a typical fermion potential moving approximation to get a purely hypercubic action with 'renormalized' gauge interactions (Fig. 2). The 'renormalized' hopping parameter is determined by requiring that some long-distance gauge-invariant fermion correlation function remains invariant under the fermion blocking step. Thus, taking the two-point function ψ̄(x)U[Γ_{xy}]ψ(y) (other choices, such as a meson-meson correlator, will do equally well), this requirement translates into the exact relation (3.1), with the blocked correlator defined through (3.2). (3.1)-(3.2) provide a recursion relation relating the hopping parameters at lattice spacings a and 2a, which may, in particular, be solved to obtain the critical hopping parameter. One may then proceed to carry out the gauge field blocking step by whatever method is chosen.
The above approximation allows one to maintain bilinear fermionic interactions at each successive blocking step. This suggests the following considerations. Assume one implements some RG blocking scheme. Denoting by U^(n), ψ̄^(n), ψ^(n) the blocked variables at the n-th step, one has, quite generally, a relation (3.3) between the effective actions at successive steps. If, furthermore, as after the above approximation, S_F^(n) and S_F^(n+1) maintain the same bilinear form on Λ^(n) and Λ^(n+1), respectively, then the blocked gauge field action takes the form (3.4), involving the ratio of the fermion determinants Det S_F^(n+1)/Det S_F^(n), with F(U^(n), U^(n+1)) specifying the blocking relation between the U^(n) and U^(n+1)'s. So if the fermion decimation is sufficiently local, the ratio of determinants in (3.4) can be expected to be reasonably local. Indeed, the major part of the non-locality is manifestly cancelled in the ratio, as seen from the formal expansion of the determinants in terms of gauge field loops: the loops in the expansion of ln Det S_F^(n+1) are contained among those of ln Det S_F^(n). The main contribution to the ratio comes in fact from loops inside each blocked hypercube. This suggests that one may approximate it by just such a set.

Simplest RG implementation model
These considerations lead us to an approximate implementation of RG decimations of the full system of fermions and gauge fields given by the following recipe. After a fermion blocking step, apply the approximation depicted in Fig. 2(a) combined with (3.1)-(3.2). Then the gauge field action on the spacing-2a lattice is given by (3.4). As just explained, after the cancellation of the denominator by the numerator in the determinant ratio expressed in terms of gauge field loops, we approximate the remainder by the contributions of loops interior to each blocked hypercube, in particular by just the single-plaquette loop weighted by a parameter η. (One may of course consider approximations involving more interior loops, e.g. 2 × 2 loops, and more decimation parameters.) It remains to specify and perform a gauge field blocking over each hypercube. This we do by standard potential moving decimation: every interior plaquette is moved to the hypercube boundary with the appropriate weights 2^{d−2} (Fig. 2(b)). Expanding all quantities in group characters, one thus obtains the recursion relations relating the coefficients in the character expansion of the l.h.s. of (3.4) to the coefficients of the character expansions of the determinant and action factors in the integrand on the r.h.s. The procedure may then be iterated.

This scheme depends on only two quantities: the parameter η and, at each blocking step, the critical κ(2^n a). We have applied it to SU(2) and SU(3). The results are shown in Table 1, where N_L and N_U denote the number of flavors at which an IR fixed point first appears and at which asymptotic freedom is lost, respectively. The parameter η was fixed by the value N_U for the upper bound of the conformal window for SU(2) fundamental representation fermions. This decimation parameter could in principle be tuned independently in each case, but it turns out that once set by the top entry in the second column, no further adjustment need be made to produce the rest of the entries with the correct N_U values. Having fixed the parameter η, the only uncertainties in practice are in obtaining the critical κ values. They were obtained from estimates from (3.2) and/or values from simulations reported in the literature [6]. The resulting recursion relations for the character coefficients can then be run, very cheaply, to essentially arbitrary accuracy. It is noteworthy that such an approximate decimation scheme can already produce results such as those shown in Table 1.
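To give a concrete flavor of a potential-moving decimation acting on character coefficients, here is a toy sketch for pure U(1) gauge theory (illustrative only: it is not the SU(N)-with-fermions recipe above, there is no fermion determinant or η parameter, and the U(1) Fourier modes stand in for group characters). Interior plaquettes are stacked onto one (the weight is raised to the power 2^{d−2}), and the 2×2 tiling of a blocked plaquette multiplies the normalized coefficients:

```python
# Toy Migdal-Kadanoff-style potential-moving step for pure U(1) LGT in d
# dimensions (not the scheme of the text). The single-plaquette weight
# w(theta) = exp(beta*cos(theta)) is represented by its Fourier (character)
# coefficients c_n; for an abelian group, plaquettes composed in series
# multiply the normalized coefficients.
import numpy as np

def decimation_step(w, d=4, b=2):
    """One blocking step acting on w(theta) sampled on a uniform angle grid."""
    w = w ** (b ** (d - 2))            # potential moving: stack 2^(d-2) plaquettes
    c = np.fft.rfft(w)                 # character (Fourier) coefficients
    c = (c / c[0]) ** (b * b)          # b x b tiling in series: coefficients multiply
    return np.fft.irfft(c, n=w.size)

N = 4096
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
beta = 1.2
w = np.exp(beta * np.cos(theta))       # Wilson single-plaquette weight

for step in range(6):
    c = np.fft.rfft(w)
    print(f"step {step}: c1/c0 = {abs(c[1] / c[0]):.4f}")   # effective 'coupling'
    w = decimation_step(w, d=4)
```

The ratio c1/c0 plays the role of an effective coupling tracked under iteration, mimicking how the character-coefficient recursion relations of the text are run to probe the IR.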
Conclusions - Outlook
We have explored locality-preserving RG blocking schemes for light fermions. We saw that blockings maintaining locality in an arbitrary gauge field background can be devised, but they generically require non-hypercubic blocked fermion actions. Such schemes may prove useful in investigating a variety of interesting issues (e.g. the 'rooting' of fermion determinants). Incorporation in MCRG procedures to perform full (fermion plus gauge field) RG blocking would seem to provide the most direct and reliable way of elucidating IR phase structure with varying fermion content. Relatively crude but inexpensive schemes already provide, as we saw, rather encouraging results.

This research was partially supported by NSF PHYS-0852438.

Figure 2: Approximations: (a) Merge the fermions on S from the exact blocking with the fermions on the hypercube edges while appropriately 'renormalizing' the hopping parameter of the latter, cf. text; (b) standard potential moving approximation for gauge field blocking: move all interior plaquettes symmetrically to the boundary with appropriate weights.

Table 1: Conformal window lower and upper bounds as obtained from the simple blocking recursion relations described in the text. N_L and N_U denote the number of flavors at which an IR fixed point first appears and that at which asymptotic freedom is lost, respectively.
Donkey Gelatin and Keratin Nanofibers Loaded with Antioxidant Agents for Wound Healing Dressings

Acute and chronic wounds present a significant healthcare challenge, requiring innovative solutions for effective treatment. The exploitation of natural by-products with advanced cell regeneration potential and plant-based materials, which possess bioactive properties, is an innovative topic in wound management. This study investigates the potential of donkey gelatin and keratin for blending with natural bioactive extracts such as sumac, curcumin, and oak acorn to fabricate antioxidant and antimicrobial nanofibers with accelerated wound healing processes. The fabricated nanofibers possess good in vitro biocompatibility, except for the sumac-based donkey nanofibers, where cell viability significantly dropped to 56.25% (p < 0.05 compared to non-treated cells). The nanofiber dimensions showed structural similarities to human extracellular matrix components, providing an ideal microenvironment for tissue regeneration. The sumac- and curcumin-loaded donkey nanofibers presented a higher dissolution in the first 10 min (74% and 72%). Curcumin extract showed similar antimicrobial and antifungal performance to rivanol, while acorn and sumac extracts demonstrated similar values to each other. In vitro tests performed on murine fibroblast cells demonstrated high migration rates of 89% and 85% after 24 h in the case of acorn and curcumin nanofibers, respectively, underscoring the potential of these nanofibers as versatile platforms for advanced wound care applications.

Introduction
The increasing prevalence of wounds caused by blows, traffic accidents, cuts, scalds, and burns, of chronic diseases, and of an aging population brings into focus the considerable burden of wound care. Successful wound healing entails the following steps: fast hemostasis; adequate inflammation; mesenchymal cell differentiation, growth, and migration to the site of injury; formation of new blood vessels; rapid re-epithelialization; and appropriate synthesis, proper cross-linking, and orientation of collagen to strengthen the healing tissue [1]. The ineffectiveness of current antibiotics against infections with resistant pathogens (multi-drug-resistant bacteria) constitutes the greatest threat to global health [2][3][4]. The World Health Organization (WHO) has highlighted that the inefficiency of antibiotics used to treat bacterial infections may result in the deaths of almost 10 million people annually by 2050 [5], as well as prolong patients' hospital stays.
Modern dressings are created from multifunctional materials with the aim of improving the rate of wound recovery by speeding up the healing process, offering physical and antimicrobial protection, and maintaining the moisture balance of the wound microclimate [6]. The restoration of dermal tissue through the exploitation of natural by-products with advanced cell regeneration potential, compared to the existing products on the market, is an innovative topic in wound management. Collagen, a biopolymer known for its regenerative and tissue reconstruction properties, has been extensively investigated for the design of natural wound dressings. Collagen, found in bones, muscles, skin, and tendons, is produced by fibroblasts. Biomaterials containing collagen promote certain cells, such as macrophages and fibroblasts, thereby improving wound healing [7]. Keratin is a group of proteins that form cysteine-rich filaments, constituting the main component of hair, hooves, wool, nails, horns, and feathers [8]. The roles of keratins are to encourage the growth of keratinocytes and maintain epithelial integrity within medical dressing materials [9]. Donkey hide (Equus asinus L.) is a basic raw material for the preparation of a gelatin (Colla corii asini) used as a food and drug to treat anemia in traditional Chinese medicine for over 2000 years [10]. More than 58 compounds have been isolated from donkey gelatin, among them amino acids (hydroxyproline, the fingerprint amino acid for collagen), proteins (collagen α1 (I), collagen α2 (I), and albumin), polysaccharides (dermatan sulfate), volatiles, and inorganic substances (calcium oxide and sodium oxide) [10]. It was reported that the low-molecular-weight peptides obtained from the gelatin hydrolysates of donkey hide are responsible for their high antioxidant properties [11]. These peptides are an effective anti-photoaging agent against UVB radiation and increase the synthesis of type I procollagen [12].

In innovative wound management, antioxidant and antimicrobial dressings can prevent wound infection and promote wound healing through the active release of antimicrobial agents, or passively through their antiseptic surfaces [13,14]. For instance, curcumin (diferuloylmethane), a naturally derived polyphenol found in turmeric root, has demonstrated anti-inflammatory and antioxidant characteristics, promoting keratinocyte migration and proliferation and showing potential benefits during the maturation phase of wound treatment [15][16][17][18]. To overcome the challenges associated with the low solubility of curcumin in aqueous solutions and its limited bioavailability, researchers have investigated its incorporation into different carriers, such as chitosan/hyaluronic acid (HA)/poly(vinyl alcohol) (PVA)-magnetic montmorillonite (mMMt2) [15] and PCL-chitosan [19]. These prepared nanofibers demonstrated effectiveness in inhibiting the growth of E. coli and S. aureus bacteria [15] and had a positive impact on the viability and proliferation of human dermal fibroblasts (HDFs) [19]. Additionally, nanofibers containing curcumin, γ-polyglutamic acid, and gum arabic exhibited therapeutic potential in wound healing by accelerating the re-epithelialization process, enhancing wound contraction, and promoting the regeneration of new blood vessels and hair follicles [20]. Literature data highlight the use of sumac fruits (Rhus coriaria L.) and acorns (Quercus brantii Lindl.)
in various domains, such as industry, pharmaceuticals, and nutrition [21][22][23]. Sumac fruit extract at concentrations of 5 mg mL−1 and 10 mg mL−1 was found to accelerate the healing of experimentally induced wounds in male Wistar rats [24]. Acorns are rich in polyphenols like gallic and ellagic acids [25], depending on the oak species, and display anti-inflammatory, antibacterial, hypoglycemic, or antifungal activities [26]. One paper reported the use of acorn extract at contents of 0.5%, 1%, and 1.5% (w/v) as a natural crosslinker and antibacterial agent for chitosan/gelatin/poly(vinyl alcohol) (PVA) nanofibers for wound-healing applications [23]. In this context, our research strategy involved the use of plant-derived extracts, such as curcumin, acorn, and sumac, incorporating them into a mixture of gelatin and keratin extracted from donkey hide to develop antimicrobial and antioxidant nanofibers with potential application in wound healing.

Electrospinning is a relatively straightforward procedure capable of rapidly producing nanofibrous structures with a high surface-area-to-volume ratio and tunable fiber properties [27]. Nanofibers for innovative medical dressings manufactured using the electrospinning technique must fulfill several requirements: to absorb excess exudates, provide and maintain a moist environment or an adequate water vapor transmission rate, possess smaller pores compared to fibers produced using traditional methods, exhibit good cellular adhesion to support cell proliferation, and enhance the healing process. Nonetheless, there is evidence that the use of inflammable liquids with high shear strength and voltage can potentially generate permanent denaturation of the collagen fibrous structure [28]. Our previous publications reported the fabrication of nanofibrous wound dressings by the electrospinning of different protein extracts, such as collagen derived from cattle hides [29,30], rabbit skins [31,32], fish scales [33,34], or donkey hides [34], or keratin extracted from sheep wool [35], loaded with various non-active antimicrobial agents and having advanced regenerative properties for acute and chronic wound healing.

The aim of this paper is to combine the gelatin and keratin extracted from donkey hide with natural bioactive extracts such as curcumin, sumac, and acorn to obtain nanofibers with a potential application in accelerating the healing of dermal wounds, leveraging their anti-inflammatory, antioxidant, and antimicrobial properties. Our hypothesis was that the use of bioactive nanofibers with small fiber diameters and fine pores would quickly prevent bacterial penetration into the wound area and stimulate cell proliferation and skin regeneration. This study presents a novel strategy for the fabrication of bioactive nanofibers, expanding the potential uses of readily available natural resources such as curcumin, acorn, and sumac, as well as valuing animal-derived by-products like donkey hide and hair, contributing to the advancement of the Sustainable Development Goals (SDGs) set by the United Nations for achievement by 2030.

Physical-Chemical Properties of Gelatin and Gelatin/Keratin Loaded with Bioactive Agents
The extraction yields of gelatin and keratin from the donkey hide and hair were 30 ± 5% and 80 ± 10%, respectively, relative to the raw material weight.
Table 1 displays the main properties of the gelatin extracted from donkey hide and of the gelatin mixed with keratin hydrolysate extracted from donkey hair. Gel strength is the main parameter for assessing the quality of a gelatin [36], being induced by the attraction between hydrogen bonds from water and the carboxyl ends of the amino acids [37]. These interactions lead to the formation of more aggregated macromolecules. The composition of the amino acid residues in gelatin consists of glycine, proline, and hydroxyproline, which are connected through peptide bonds [38]. Hydrogen bonds between the inter-amino acid residues ensure the triple-helix structure of gelatin, providing strength and stability to the gelatin network. Measuring the gel strength of gelatin is vital both for control purposes and for determining the amount of gelatin needed for a specific application [39]. Donkey hide gelatin is traded as a traditional Chinese remedy and has 250 g of gel strength [40]. Gelatins of 225-325 g gel strength are high-bloom gelatins used for preparing desserts, meat-based food, soft capsules, and ballistic items [41]. Compared to the literature data showing gel strengths of calf skin gelatin and pork skin gelatin of 336.87 g and 308.07 g, respectively [42], our donkey hide gelatin is a high-bloom gelatin with superior values. The combination of donkey gelatin with keratin has a higher bloom test value due to the higher pH value, which influences molecular associations between collagen and keratin peptides compared to donkey gelatin alone [43].

The physical-chemical characteristics of the bioactive formulations before the electrospinning process are displayed in Table 2. The high electrical conductivity for DKGS, DKGA, and DKGC nanofibers is due to the presence of minerals, vitamins, and unsaturated fatty acids in the composition of sumac [21,44,45], acorn [21,46], and curcumin [47], as well as donkey hide [10]. DKG, DKGC, and DKGR solutions exhibited a pH around 9, which was explained by the formation of OH− ions with a buffering effect. The pH value of around 7 for DKGS and DKGA could be explained by the acidic pH reported for sumac fruit extracted in an aqueous solution [48] and the total fatty acids found in acorn products [46]. Salinity, expressed by the dissolved salts in the solutions of DKG loaded with bioactive extracts, was directly related to the electrical conductivity values. The ionic compositions of the bioactive formulations, rich in Ca, Mg, and Cl ions (Table 3), led to increased salinity values compared with those for DKG. Rheological data (Table 2) showed an increase in the viscosity and shear stress of DKG solutions loaded with bioactive extracts compared with the original solution. The physical-chemical parameters for the investigated donkey gelatin-keratin loaded with different plant extracts depend on the geographical and environmental conditions where the plants were collected. Figure 1 shows the size distribution of particles measured for the solutions prepared for the electrospinning process, after the centrifugation step. Large polydispersity indices (Pdi) for keratin (0.558), curcumin (0.477), and acorn extracts (0.399) indicate a very broad size distribution, overlapping with that of DKG (0.553). For DKGA and DKGC, three distinct peaks were observed, suggesting a potential cross-linking of gelatin and keratin, resulting in the generation of larger molecules. This behavior is related to the increased viscosity as the bioactive agents were loaded into DKG (Table 2). For DKGA, the peak
diameters were recorded in the range of 20.79 ± 6.654 nm to 157.1 ± 74.6 nm. The small peak in intensity occurring at 5386 nm could be due to the sample preparation. For DKGC, the two main peaks, with diameters of 29.87 ± 80.14 nm and 155.7 ± 80.14 nm, respectively, were observed. The smallest peak diameter was encountered for DKGA (20.79 ± 6.654 nm), while the highest peak diameter, of 2951 ± 1460 nm, was detected for DKG. The Z-average shows values of 1942 nm for DKG, 119.5 nm for DKGA, and 78.69 nm for DKGC solutions. The zeta potential indicated negative values due to the abundance of anionic amino acid residues [37]. A slight increase in the stability of particles for DKG loaded with bioactive extracts was recorded, at around −16.5 mV, compared with DKG and DKGR, for which the zeta potential was −15.9 ± 3.19 mV and −13.1 ± 3.44 mV, respectively.

SEM/EDS Analysis
The morphology and average diameter of the bioactive nanofibers were examined via SEM (Figure 2a-e).
The morphology of the bioactive nanofibers is influenced by the composition of the formulation, the physical-chemical properties of the solution, and the electrospinning parameters. As depicted in Figure 2, the DKG and DKGS formulations show nanofibers without beads and defects, in contrast to the morphology of the DKGC and DKGA nanofibers. The electrical conductivity values for DKG and DKGS ranged between 0.5 mS/cm and 9.45 mS/cm, suggesting the generation of a stable electrospinning jet. This is associated with nanofiber dimensions ranging from 142 ± 1 nm to 157 ± 1 nm (Figure 2a,b). In the case of the nanofibers based on curcumin and acorn extracts, beads were observed (Figure 2d,e). These formulations exhibited high electrical conductivity values, suggesting possible inter- and intramolecular interactions, which may lead to the formation of beads. Interestingly, the DKGA and DKGC nanofibers show the smallest nanofiber dimensions, around 101 nm, as well as the occurrence of beads. The increase in the electrical conductivity of the solutions, due to the side components in the extracts of bioactive compounds, leads to a decrease in the diameter of the bioactive nanofibers, attributed to jet elongation. This observation can also be explained by the increased viscosity of DKG loaded with bioactive extracts (Table 2). This behavior, whereby the nanofiber diameter decreases with the increase in conductivity of the electrospinning solution, was also observed for wool keratin blended with polyvinyl alcohol (PVA) [49]. The decrease in nanofiber diameters is expected to positively affect cell adhesion and growth. Al-Sudani et al. [50] reported a similar trend, noting an increase from 405.2 ± 107.8 nm to 571.7 ± 171.8 nm in the fiber diameter of polymethyl-methacrylate (PMMA)/gelatin impregnated with a propolis content of 10% to 50%.

The results obtained from the DLS and SEM analyses showed that there is a difference between the fiber diameters. This discordance in the diameter sizes of the nanofibers may result from the nanoparticles being in a compressed condition during the SEM investigation, while they were swollen when DLS was conducted [51].
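The average diameters quoted above are obtained by measuring many individual fibers on the SEM micrographs and reporting the mean ± standard deviation; a minimal sketch of that bookkeeping (with made-up measurements, not the study's raw data) is:

```python
# Minimal sketch (hypothetical per-fiber SEM measurements in nm): reporting
# mean +/- SD fiber diameters, as done for the values quoted above.
import numpy as np

diam_nm = {
    "DKG":  [141, 143, 142, 144, 140, 142],
    "DKGS": [156, 158, 157, 155, 159, 157],
    "DKGC": [100, 103, 99, 102, 101, 101],
}
for sample, values in diam_nm.items():
    v = np.asarray(values, dtype=float)
    print(f"{sample}: {v.mean():.0f} +/- {v.std(ddof=1):.0f} nm (n={v.size})")
```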
However, the fabricated bioactive nanofibers showed an average diameter very similar to the diameter of ECM collagen fibers found in the skin, typically ranging from 50 nm to 500 nm. In an extensive study examining the average diameter of gelatin nanofibers prepared by electrospinning gelatins derived from different sources (bovine, donkey, rabbit, and fish scale), it was observed that the donkey gelatin exhibited the smallest nanofiber diameter (73.15 ± 3.37 nm) [34]. The authors concluded that the origin of the gelatin and optimized electrospinning conditions are essential for achieving nanofibers with dimensions closely resembling those of the extracellular matrix (ECM). Another study also reported a size dimension between 120 and 215 nm for nanofibers based on acorn/chitosan/gelatin [23].

The chemical elements of the electrospun nanofibers were determined using EDS analysis (Figure 3). According to the data shown in Table 3, the main chemical elements found in DKG nanofibers are carbon, nitrogen, and oxygen. They appeared in all the fabricated nanofibers. In addition, calcium, aluminum, sulfur, and other trace elements appeared in the compositions of the bioactive extracts. The gold presence originated from the processing of the samples; therefore, the peaks not related to the samples (Al Kα and Au M peaks) are not marked in the spectra. All DKG nanofibers loaded with bioactive extracts contained a decreased C/O ratio compared with DKG nanofibers. This can be explained by the content of polyphenols in the composition of the bioactive extracts.
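Since EDS reports weight percent, the C/O comparison above implies a conversion to atomic amounts; a minimal sketch (the wt% values below are hypothetical placeholders, not the Table 3 data) is:

```python
# Minimal sketch (hypothetical wt% values): converting EDS weight percent to
# the C/O atomic ratio discussed above.
ATOMIC_WEIGHT = {"C": 12.011, "N": 14.007, "O": 15.999}

def c_to_o_atomic_ratio(wt_percent: dict) -> float:
    """C/O atomic ratio = (wt%C / AW_C) / (wt%O / AW_O)."""
    return (wt_percent["C"] / ATOMIC_WEIGHT["C"]) / (wt_percent["O"] / ATOMIC_WEIGHT["O"])

for name, wt in {"DKG": {"C": 55.0, "N": 18.0, "O": 24.0},
                 "DKGS": {"C": 50.0, "N": 16.0, "O": 30.0}}.items():
    print(f"{name}: C/O (atomic) = {c_to_o_atomic_ratio(wt):.2f}")
```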
ABTS Free Radical Cation Scavenging Assay
The free radical scavenging activity (IC50 values) of the sumac, curcumin, and keratin extracts is shown in Figure 4. The IC50 determined based on the ABTS•+ assay was 5.6 µg mL−1 for the sumac extract, 475 µg mL−1 for the curcumin extract, and 28 µg mL−1 for keratin. Acorn extracts from the fruits of Quercus coccifera L.
(kermes oak) are frequently consumed as herbal coffee in some regions and have demonstrated high antioxidant activity (91.09 ± 1.71%) [52]. Quercus cerris seeds, another coffee substitute, were recognized for their high IC50 values of 271.61 µg mL−1 [53]. The results of the radical scavenging activity (RSA) assay of the donkey gelatin-based nanofibers are presented in Table 4. As was expected from the obtained IC50 data, the bioactive nanofibers showed high RSA values (Table 4). The antioxidant properties of keratin can be attributed to the cysteine amino acids, which can be converted into sulfoxide compounds through alkaline hydrolysis, as was previously reported [54]. Also, the results indicate that the polyphenolic compounds of sumac [55], curcumin [56], and acorn [57] are responsible for the most efficient antioxidant properties. The antioxidant activity of acorn extracts by the ABTS•+ assay was reported in the range of 17.20-35.21 µmol Trolox (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) equivalents (TE) per g of dry matter [58]. A similarly high radical scavenging activity (RSA%) of 88.58 ± 0.15% was reported using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenger for acorn shell extracted with ethanol [59].

Controlled Release of Sumac and Curcumin
The DKGS and DKGC nanofibers showed a higher release in the first 10 min (74% and 72%). The higher controlled release of sumac at 10 min and 60 min, compared with that of curcumin, can be related to the larger diameter of the DKGS nanofibers. The high dissolution of curcumin at 10 min can be explained by the transformation of curcumin during the electrospinning process from a crystalline to an amorphous state, leading to an increase in its free energy [52]. In other studies, due to the low solubility of curcumin in aqueous media, in vitro curcumin release investigations were conducted in PBS (pH 7.4) containing Tween 80 (0.4-0.5% (wt/v)) and ethanol (10% v/v) [19,59]. The release of antimicrobial and antioxidant agents was also reported in 90:10 water:ethanol at different temperatures for polyelectrolyte multilayer (PEM) thin films loaded with 5% (wt/v) curcumin for transdermal drug delivery applications [60], and in 70% ethanol in the case of clotrimazole-loaded fabric testing [61]. The sumac release from nanosheets was assessed at neutral (7.4) and acidic (4.5) pH values, simulating different microenvironment conditions in intact and injured skin [62]. The differences in the release of the bioactive agents over time can be associated with the composition and specific surface area of the nanofibers, as well as the concentration of bioactive compounds.

In Vitro Cytotoxicity Evaluation
All samples based on keratin and gelatin from donkey hide did not show a cytotoxic effect (values of cell viability > 80%), except for the DKGS sample (Figure 6). After 24 h of cell treatment, cell viability ranged between 111.04% (Support sample) and 86.74% (DKG sample), while the DKGS sample showed a slightly cytotoxic effect (cell viability of 72.58%). The same pattern was observed after 72 h, with all samples showing a good degree of cytocompatibility, with cell viability ranging between 100.88% (Support sample) and 80.16% (DKGA), except for the DKGS sample, where cell viability dropped significantly to 56.25% (Figure 6).
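The viability percentages reported above are, in the usual absorbance-based formulation of such assays, computed relative to the untreated control after blank subtraction; a minimal sketch under that assumption (hypothetical absorbance values, not the study's raw data) is:

```python
# Minimal sketch (hypothetical absorbances): cell viability relative to an
# untreated control, as used for the percentages reported above.
import numpy as np

def viability_percent(a_sample, a_blank, a_control):
    """Viability (%) = 100 * (mean A_sample - A_blank) / (mean A_control - A_blank)."""
    return 100.0 * (np.mean(a_sample) - a_blank) / (np.mean(a_control) - a_blank)

a_blank = 0.05
a_control = [0.82, 0.85, 0.80]
for name, a in {"Support": [0.90, 0.92, 0.88], "DKGS": [0.48, 0.50, 0.47]}.items():
    print(f"{name}: {viability_percent(a, a_blank, a_control):.1f}% viability")
```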
In Vitro Cytotoxicity Evaluation

All samples based on keratin and gelatin from donkey hide showed no cytotoxic effect (cell viability values > 80%), except the DKGS sample (Figure 6). After 24 h of cell treatment, cell viability ranged between 111.04% (Support sample) and 86.74% (DKG sample), while the DKGS sample showed a slightly cytotoxic effect (cell viability of 72.58%). The same pattern was observed after 72 h, with all samples showing a good degree of cytocompatibility and cell viability ranging between 100.88% (Support sample) and 80.16% (DKGA), except for the DKGS sample, where cell viability dropped significantly to 56.25% (Figure 6).

The viability and morphology of NCTC clone L929 cells treated with the different nanofibers were also evaluated by fluorescence microscopy after concomitant staining of live cells with calcein (green) and of dead cells with ethidium homodimer (red) (Figure 7). After 72 h of treatment with the Support, DKGC, DKGA, and DKGR samples, the NCTC clone L929 cells maintained their viability, morphological appearance, and cell density, being similar to the control sample (Figure 7).
Additionally, the insignificant proportion of dead cells indicated the cytocompatibility of the nanofibers. Statistical analysis indicated 9% and 2% increases in cell viability for the Support and DKGA samples compared to the control, while the DKGC and DKGR samples showed 5% and 8% decreases in cell viability compared to the control (Figure 8). Outstanding cell viability and cell attachment capacity were also reported for nanofibers containing acorn extract tested on mouse fibroblast (L929) cells [24]. Similar outcomes were found in the case of gelatin nanofibers enriched with propolis [50]. On the other hand, the DKG sample showed a slightly cytotoxic effect, and the DKGS sample had a moderate cytotoxic effect. Although the cells maintained their viability, the cell density decreased compared to the control sample. The cytotoxicity of the DKGS nanofibers could be explained by the quantity of sumac extract (2% (wt/v)) added to the formulations. Previous studies have reported cytotoxic effects of sumac extract at lower concentrations than the one used in the present study. Thus, the IC50 values of sumac methanolic extract tested on human umbilical vein endothelial cells (HUVEC) and retinoblastoma Y79 cells were 43 µg mL−1 and 9.1 µg mL−1, respectively [22,63].
In another study, the sumac extract showed no cytotoxic effect on HeLa cells at concentrations between 31.25 µg mL−1 and 125 µg mL−1 after 48 h of treatment, whereas higher concentrations (250-2000 µg mL−1) decreased cell viability by around 30-65% compared to non-treated cells [64]. Batiha et al. [65] tested an acetone extract of sumac on three cell lines, namely mouse embryonic fibroblasts (NIH/3T3), Madin-Darby bovine kidney (MDBK) cells, and human foreskin fibroblasts (HFF); the extract inhibited the MDBK cells with a half-maximal effective concentration (EC50) of 737.7 µg mL−1 but did not reduce HFF and NIH/3T3 cell viability at 1500 µg mL−1 [65]. Other authors also reported toxic effects for curcumin at concentrations above 150 µg mL−1 in methacrylated gelatin (GelMA) and methacrylated pectin (PeMA) hydrogels [66].
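The viability percentages discussed above are read against fixed bands, such as the >80% threshold used here for "non-cytotoxic". A minimal sketch of such a grading helper follows; only the 80% cutoff is stated in the text, so the finer bands below are a commonly used convention and an assumption on our part, not taken from this paper.

```python
# Sketch: grade MTT viability (% of untreated control) into cytotoxicity bands.
# Only the 80% "non-cytotoxic" cutoff is stated in the text; the finer bands
# below are a common convention and an assumption here.
def grade(viability_pct: float) -> str:
    if viability_pct > 80:
        return "non-cytotoxic"
    if viability_pct > 60:
        return "slightly cytotoxic"
    if viability_pct > 40:
        return "moderately cytotoxic"
    return "severely cytotoxic"

for name, v in {"DKG (24 h)": 86.74, "DKGS (24 h)": 72.58, "DKGS (72 h)": 56.25}.items():
    print(f"{name}: {v:.2f}% -> {grade(v)}")
```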
In Vitro Skin Wound Healing

An in vitro model of skin injury (scratch assay) was implemented to assess the ability of the samples based on keratin and gelatin from donkey hide to accelerate the proliferation and migration of cells, to cover the injured area, and therefore to induce the healing of a "wound". The DKGA sample presented the highest migration rate (89%) after 24 h, 4% more than the control (85%), followed by the DKGC (85%) and DKGR (84%) samples, which had migration rates similar to that of the control sample, and by the DKG sample, with a migration rate of 80% (Figures 9 and 10). In conclusion, the DKGA sample was the most efficient in repairing the injured cell monolayer after 24 h of treatment, promoting cell proliferation and migration.
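The migration rates quoted above follow the usual scratch-assay definition, the fraction of the initially injured area re-covered after 24 h. A minimal sketch, assuming the wound areas have already been segmented (for example in ImageJ) and are expressed in the same units; the areas below are placeholders.

```python
# Sketch: scratch-assay migration rate from wound areas at t0 and t24.
# Areas are illustrative placeholders in arbitrary units.
def migration_rate(area_t0: float, area_t24: float) -> float:
    return (area_t0 - area_t24) / area_t0 * 100.0

wounds = {"Control": (1.00, 0.15), "DKGA": (1.00, 0.11), "DKG": (1.00, 0.20)}
for name, (a0, a24) in wounds.items():
    print(f"{name}: {migration_rate(a0, a24):.0f}% of the injured area recovered")
```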
Assessment of the Antimicrobial Activity of Donkey Gelatin-Based Nanofibers Loaded with Plant Extracts

Tables 5 and 6 show that the plant-based extracts reduced the bacterial load of the donkey gelatin and keratin nanofibers to a level acceptable for topical and pharmaceutical products. TAMC (total aerobic microbial count) and TYMC (total yeast and mold count) quantify the microorganisms that develop naturally on the nanofiber surfaces under conditions (nutrient medium, temperature) favorable for bacteria or fungi, respectively.

Nanofibers with curcumin extract showed antimicrobial and antifungal performance very similar to that of the nanofibers with rivanol (the reference sample), while nanofibers with acorn and sumac extracts demonstrated similar values for TAMC (10-11 CFU/g) and TYMC (3.33-4.33 CFU/g). According to the Pharmacopoeia criteria, these results allow the use of donkey gelatin-based nanofibers with plant extracts or rivanol as topical or pharmaceutical products [67]. The plant extracts and rivanol clearly improved the antimicrobial and antifungal properties of the donkey gelatin and keratin nanofibers.

In a similar paper, nanofibers loaded with acorn extract exhibited 90% antibacterial activity against the bacterium Staphylococcus aureus, as determined by the quantitative standard test method [23].

Conclusions

To the authors' knowledge, gelatin and keratin extracted from donkey hide were exploited for the first time to fabricate nanofibrous wound dressings. The bioactive extracts of sumac, acorn, and curcumin added to the gelatin/keratin nanofibers enhanced their antioxidant activity and yielded nanofibers that approach the requirements of ideal wound dressings. Biocompatibility and healing properties depend on the concentration of the bioactive extracts. Further studies are needed to establish the correlation between the concentration of the natural extracts and the in vitro biocompatibility of donkey keratin/gelatin nanofibers loaded with antioxidant agents.

Preparation of Gelatin and Keratin Loaded with Bioactive Agents

Donkey hide was processed to remove impurities, interfibrillar substances, and hair, up to the delimed hide stage; 100 g of pelt was then successively washed to remove soluble salts, shredded in a mincer, immersed in water at a ratio of 350 wt%, and heated in a water bath at 90 °C for 5-16 h. The resulting extract was separated from the residue using a stainless-steel sieve with a pore size < 150 µm, then cooled and dried in an oven at 60 °C, yielding gelatin granules (Figure 11a). Keratin was obtained from 100 g of donkey hair by heating to 80 °C in a 1.5% (wt/v) NaOH solution for 5 h, followed by filtration and drying. Donkey gelatin-keratin (DKG) was obtained by mixing the original solutions (Table 1) in equal proportions (1:1 v/v) and drying in an oven at 60 °C, resulting in a solid composite with an estimated composition of 72% gelatin and 28% keratin (Figure 11b).
The plant extracts were prepared by heating the plants in water at 90 °C for 4 h at a concentration of 4% (wt/v), higher than their minimum inhibitory concentration values (Table 7). Equal volumes of each plant extract were added to the gelatin/keratin solution (yielding DKGS, DKGC, and DKGA), followed by drying at 60 °C (Figure 11c,d). A 10% (v/v) solution of rivanol was introduced into the gelatin/keratin dispersion (DKGR) and used as a control (Figure 11f). The extracted gelatin and gelatin/keratin were investigated by physical-chemical methods as follows: determination of dry matter in accordance with EN ISO 4684 [68], evaluation of pH following the guidelines outlined in STAS 8619/3 [69], and examination of electrical conductivity based on EN ISO 27883 [70]. Total dissolved solids and salinity (salt content) were estimated indirectly from the electrical conductivity. These physical-chemical parameters were evaluated using a conductivity meter (C1010, Consort, Turnhout, Belgium) and a pH meter (Consort C831 multiparameter analyzer, Turnhout, Belgium). The gel strength and relaxation of donkey gelatin and of donkey gelatin mixed with donkey keratin (DKG) were determined using a TEX'AN TOUCH 50 N texture analyzer (LAMY RHEOLOGY, Champagne au Mont d'Or, France) on a 6.67% solution after cooling for 16-18 h at 10.0 ± 0.1 °C, according to the Gelatin Manufacturers Institute of America standard [71]. Rheological parameters such as viscosity, shear stress, and shear rate were measured using a Brookfield AMETEK DV2T viscometer (Middleboro, MA, USA) with spindle No. 21. Furthermore, the diameter and polydispersity index of the gelatin/keratin loaded with bioactive agents were assessed by the dynamic light scattering (DLS) method using a Zetasizer Nano-ZSP instrument operating at λ = 633 nm with a He-Ne laser light source (Malvern Instruments Limited, Worcestershire, UK). An amount of 0.1 g of each granule type was immersed in 5 mL of ultrapure water and sonicated for 5 min. Subsequently, three drops of the resulting suspension were introduced into 10 mL of 1 mM NaCl solution. This mixture was then thoroughly homogenized and analyzed using a 12 mm cell (DTS 0012). The zeta potential was determined by the electrophoretic technique (cell DTS 1070).
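The indirect TDS/salinity estimate mentioned above typically multiplies the measured conductivity by an empirical factor. A minimal sketch, assuming the common 0.5-0.7 conversion-factor range; the specific factor used in this work is not stated, so the value below is an assumption.

```python
# Sketch: estimate total dissolved solids (TDS) from electrical conductivity (EC).
# TDS [mg/L] ~ k * EC [uS/cm]; k = 0.5-0.7 is a common empirical range
# (an assumption here; the factor actually used in this work is not stated).
def tds_from_ec(ec_us_cm: float, k: float = 0.64) -> float:
    return k * ec_us_cm

ec = 850.0  # hypothetical conductivity reading, uS/cm
print(f"EC = {ec} uS/cm -> TDS ~ {tds_from_ec(ec):.0f} mg/L (k = 0.64)")
```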
Preparation of Bioactive Nanofibers and Analysis of Their Structure

Twenty grams of gelatin-keratin granules extracted from donkey hide and hair and loaded with an antioxidant extract (sumac, curcumin, or acorn shell), with a gelatin:keratin:bioactive extract ratio of 71:27:2 wt%, were dissolved in 100 mL of distilled water by stirring on a magnetic plate at 50 °C and 400 rpm. The homogeneous solutions were then centrifuged for 3 min at 6000 rpm, and 10 mL of supernatant was mixed with 10 mL of a 10% (wt/v) PEO solution. This solution was loaded into a 20 mL Teflon syringe fitted with a tube connected to a G21-gauge metal needle within the electrospinning equipment (TL Pro-BM, Tong Li Tech Co., Ltd., Bao An, Shenzhen, China). Electrospinning took place at an ambient temperature of 22.6 °C and a relative humidity of 40%. The resulting nanofibers were collected on a drum covered with medical-grade polypropylene mesh, denoted as the support. Gelatin/keratin with rivanol was processed into nanofibers in the same way as the bioactive agents and used as control nanofibers with recognized antimicrobial activity. The electrospinning parameters are presented in Table 8. This process for obtaining bioactive nanofibers is simple, versatile, and reproducible, and it occurs at room temperature without high energy consumption or the use of potentially toxic solvents. Additionally, environmental sustainability is ensured by valorizing existing protein resources and natural biocompounds.

The morphology, fiber diameters, and elemental compositions of the fabricated bioactive nanofibers were investigated using scanning electron microscopy (SEM)/energy-dispersive X-ray spectrometry (EDS) analysis (FEI QUANTA 450 FEG, Eindhoven, The Netherlands). SEM images were captured with an FEI Inspect S50 scanning electron microscope. To mitigate charging effects, a thin gold layer was applied to all samples using a Cressington 108 auto sputter coater equipped with a Cressington mtm 20 thickness controller. Secondary electron images were obtained at a working distance of 10 mm, an acceleration voltage of 10 kV, and magnifications of 50× and 10,000×. The mean fiber diameter was determined by measuring at least 50 bead-free nanofibers and calculating the average using OriginPro 21 and ImageJ software version 1.54d.
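Looking back at the dope formulation at the start of this subsection, the following sketch back-calculates the component masses implied by the 71:27:2 wt% ratio and the 1:1 mixing with 10% (wt/v) PEO. It is a bookkeeping illustration under the stated quantities, not an additional protocol from the paper.

```python
# Sketch: mass bookkeeping for the electrospinning dope.
# 20 g of granules at gelatin:keratin:extract = 71:27:2 wt%,
# dissolved in 100 mL water; 10 mL of supernatant then mixed with
# 10 mL of 10% (wt/v) PEO solution.
granules_g = 20.0
ratio = {"gelatin": 71, "keratin": 27, "extract": 2}
total = sum(ratio.values())
masses = {k: granules_g * v / total for k, v in ratio.items()}
print("granule composition:", {k: f"{m:.2f} g" for k, m in masses.items()})

# Per 10 mL aliquot of the 20 g / 100 mL solution (assuming full dissolution):
aliquot_solids_g = granules_g * 10.0 / 100.0
peo_g = 10.0 * 0.10  # 10 mL of 10% (wt/v) PEO -> 1 g PEO
print(f"per syringe load: ~{aliquot_solids_g:.1f} g protein solids + {peo_g:.1f} g PEO in ~20 mL")
```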
Antioxidant Activity of Bioactive Nanofibers

Antioxidant activity was determined both for the plant extracts and for the nanofibers containing protein and bioactive extracts by keeping them in contact with the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) radical cation (ABTS•+) and reading the absorbance at 738 nm, according to the method described in [72]. The sumac and curcumin plant powders were dissolved in ethanol, while keratin was dissolved in distilled water, all at a concentration of 4000 µg mL−1. The standards were prepared at concentrations of 3.2-16 µg mL−1 for the sumac extract, 12.8-160 µg mL−1 for the curcumin extract, and 16-160 µg mL−1 for the keratin extract. For the radical scavenging activity (RSA, %) assay, 20 µL of each known concentration of plant extract was mixed with 4 mL of ABTS•+ solution and incubated in the dark for 6 min, after which the absorbance was measured at 738 nm against a blank using an ultraviolet-visible spectrometer (Orion UV-Vis AQUAMATE 8000, Thermo Fisher Scientific, Waltham, MA, USA). A 2 × 1.5 cm² piece of bioactive nanofibers deposited on the PP support, containing 6.9-11.1 mg of nanofibers, was combined with 4 mL of ABTS•+ solution, and the absorbance was measured spectrophotometrically after 10 min.

%RSA = [(Abs_blank − Abs_sample) / Abs_blank] × 100 (1)

IC50 values represent the amount of active compounds from the bioactive agents needed to deactivate 50% of a given amount of ABTS•+; thus, low IC50 values indicate a higher antiradical efficiency. The analyses were performed in triplicate, and the results are reported as the mean value ± standard deviation.

Controlled Release of Sumac and Curcumin

About 0.2 g of nanofibers containing sumac or curcumin (DKGS, DKGC) was immersed in 10 mL of 70 wt% ethanol. At 10, 30, and 60 min after sonication, an aliquot (4 mL) was taken and centrifuged at 4000 rpm for 3 min. The absorbance of the supernatant was read at 270 nm for DKGS and at 425 nm for DKGC, characteristic of the π-π* electronic transition, using a UV-Vis spectrophotometer. The released amounts of sumac and curcumin were calculated from calibration curves obtained for 10, 50, 100, 200, and 300 ppm sumac solutions and 50, 100, and 500 ppm curcumin solutions in ethanol. The percentage of sumac and curcumin extracts released from the nanofibers was determined using Equation (2):

% Release = (amount of extract released at a specific time / calculated amount of extract in the nanofibers) × 100 (2)
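A minimal sketch of the two calculations defined by Equations (1) and (2): RSA% from blank/sample absorbances with a simple linear-interpolation IC50 estimate, and % release from a calibration line. All numeric inputs are placeholders, not measured values from this work.

```python
import numpy as np

# Eq. (1): radical scavenging activity from ABTS absorbances.
def rsa_percent(abs_blank: float, abs_sample: float) -> float:
    return (abs_blank - abs_sample) / abs_blank * 100.0

# IC50 by linear interpolation of RSA% vs concentration (placeholder data).
conc = np.array([3.2, 6.4, 9.6, 12.8, 16.0])        # ug/mL standards
rsa = np.array([35.0, 52.0, 64.0, 73.0, 80.0])       # hypothetical RSA% (increasing)
ic50 = np.interp(50.0, rsa, conc)
print(f"IC50 ~ {ic50:.1f} ug/mL")

# Eq. (2): % release via a linear calibration A ~ m * c; intercept ignored
# for simplicity when converting a sample absorbance back to concentration.
cal_c = np.array([10.0, 50.0, 100.0, 200.0, 300.0])  # ppm standards
cal_a = np.array([0.05, 0.24, 0.49, 1.01, 1.50])     # hypothetical absorbances
m = np.polyfit(cal_c, cal_a, 1)[0]                   # slope of the calibration line
released_ppm = 0.90 / m                              # sample absorbance 0.90 (placeholder)
loaded_ppm = 400.0                                   # calculated extract content (placeholder)
print(f"release ~ {released_ppm / loaded_ppm * 100:.0f}%")
```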
In Vitro Cytotoxicity Assessment

NCTC clone L929 murine fibroblasts were used to assess the in vitro cytotoxicity of the nanofibers according to [73]. All samples were cut into (5 × 5) mm² discs and sterilized with UV light for 4 h. Cell viability and morphology were assessed by the quantitative MTT assay and by fluorescence microscopy using the Live/Dead assay. The NCTC fibroblasts were seeded in MEM culture medium at a density of 5 × 10⁴ cells/mL in 24-well culture plates and incubated overnight at 37 °C in a humid atmosphere with 5% CO₂ to allow cell adhesion. After 24 h, the nanofibers were added (1 disc/well) in fresh culture medium, and the cells were incubated for a further 24 or 72 h. After this period, the cells were incubated for 3 h at 37 °C with MTT solution (0.25 mg mL−1), after which the insoluble formazan crystals were dissolved in isopropanol. The plates were then incubated for 15 min at room temperature with gentle shaking for color uniformity, after which the absorbance was measured at 570 nm using a Tecan Sunrise plate reader (Tecan, Austria). The obtained values are directly proportional to the number of living cells present at the end of the incubation. The results were reported as percentages of viability relative to the control sample (non-treated cells), considered to have 100% viability. All samples were evaluated in triplicate.

A Live/Dead assay kit (Molecular Probes, Invitrogen, Eugene, OR, USA) was used to evaluate cell morphology and viability according to the manufacturer's instructions. The assay is based on the concomitant staining of live cells (green) and dead cells (red) with two specific reagents, calcein AM and ethidium homodimer-1, respectively. After treatment with the nanofibers for 72 h, the cells were stained with calcein-AM (2 µM) and ethidium homodimer-1 (4 µM) for 30 min at room temperature. A Zeiss Axio Observer D1 microscope was used to acquire the fluorescence images, which were further processed with ImageJ 1.51 software.

In Vitro Wound Healing Assay (Scratch Assay)

This method was used to investigate the capacity of the tested samples to induce cell proliferation and migration into an injured cell monolayer. Only the samples with cell viability values higher than 80% in the quantitative MTT test were selected for this assay. NCTC clone L929 murine fibroblasts were cultivated at a density of 3 × 10⁵ cells/mL and maintained at 37 °C in a humid atmosphere with 5% CO₂ until a cell monolayer was obtained. Subsequently, a linear wound was created in the cell monolayer with a sterile pipette tip, the sample extracts (obtained by incubating the samples in MEM medium for 24 h at 37 °C) were added, and the cells were incubated for another 24 h. Photographs were taken with an Axio Observer D1 microscope (Carl Zeiss, Oberkochen, Germany) at the beginning of the experiment (T0) and after 24 h of incubation in order to assess the cell migration rate. ImageJ 1.51 software was used to quantify the percentage of cell migration into the injured area. Samples were run in triplicate. The data are presented as the mean value ± standard deviation of the recovery rates of the injured area. Statistical analyses were performed using Student's t-test, with differences considered statistically significant at p ≤ 0.05.
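A minimal sketch of the MTT readout pipeline described above: absorbances are converted to viability percentages against the untreated control, and treated-versus-control triplicates are compared with Student's t-test at p ≤ 0.05. The absorbance values are placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder 570 nm absorbances for triplicate wells.
control = np.array([0.82, 0.80, 0.84])   # untreated cells = 100% viability
treated = np.array([0.70, 0.66, 0.69])   # e.g., a nanofiber-treated group

viability = treated / control.mean() * 100.0
print(f"viability = {viability.mean():.1f} +/- {viability.std(ddof=1):.1f} %")

t_stat, p_value = ttest_ind(treated, control)  # Student's t-test (equal variances)
print(f"p = {p_value:.4f} -> {'significant' if p_value <= 0.05 else 'not significant'} at p <= 0.05")
```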
...inter- and intramolecular interactions, which may lead to the formation of beads. Interestingly, the DKGA and DKGC nanofibers show the smallest nanofiber dimensions, around 101 nm, as well as the occurrence of beads. The increase in the electrical conductivity of the solutions due to the side components in the bioactive extracts leads to a decrease in the diameter of the bioactive nanofibers, attributed to jet elongation. This observation can also be explained by the increased viscosity of DKG loaded with bioactive extracts (Table …).

Figure 5 shows the release of curcumin and sumac extracts from the DKG-based nanofibers.

Figure 5. Release of sumac and curcumin extracts from the bioactive DKGS and DKGC nanofibers.

Figure 7. Fluorescence images of NCTC clone L929 fibroblasts, untreated (Control) and treated with the Support, DKG, DKGS, DKGC, DKGA, and DKGR samples for 72 h; Live/Dead test with live cells labeled in green and dead cells labeled in red; scale bar = 100 µm.

Figure 9. Optical microscopy photographs of the NCTC clone L929 fibroblast monolayer after in vitro induction of a skin injury and application of the extraction medium of the Support, DKG, DKGC, DKGA, and DKGR samples for 24 h. Cell migration into the injured area can be observed. Scale bar = 100 µm.

Table 1. Characteristics of gelatin and gelatin/keratin extracted from donkey hide and donkey hair.

Table 2. Characteristics of gelatin/keratin extracted from donkey hide loaded with bioactive extracts.

Table 3. Elemental compositions of the fabricated bioactive nanofibers.

Figure 1 shows the size distribution of particles measured for the solutions prepared for the electrospinning process after the centrifugation step.

Table 5. Total aerobic microbial count (TAMC) and total yeast and mold count (TYMC) of the nanofibers.

Table 6. Sterility test against S. aureus, E. coli, and C. albicans.
2024-06-13T15:35:38.592Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "4b3f89628379582671b3085b458c041990a80362", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2310-2861/10/6/391/pdf?version=1717837040", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f0b64b4faa7328ee044a5190e93ea667e29c8c25", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
119212965
pes2o/s2orc
v3-fos-license
Probing the transverse dynamics and polarization of gluons inside the proton at the LHC

Transverse momentum dependent gluon distributions encode fundamental information on the structure of the proton. Here we show how they can be accessed in heavy quarkonium production in proton-proton collisions at the LHC. In particular, their first determination could come from the study of an isolated J/psi or Upsilon particle, produced back to back with a photon.

Formalism

Transverse momentum dependent (TMD) gluon distributions inside an unpolarized proton are defined by the hadron matrix element of a correlator of the gluon field strengths F^{\mu\rho}(0) and F^{\nu\sigma}(\xi). Expanding the gluon four-momentum as p = x P + p_T + p^- n, with n being a lightlike vector conjugate to the momentum P of the proton, such a correlator can be written as [1]

    \Phi_g^{\mu\nu}(x, \bm{p}_T) = \frac{n_\rho\, n_\sigma}{(p \cdot n)^2} \int \frac{d(\xi \cdot P)\, d^2\xi_T}{(2\pi)^3}\; e^{i p \cdot \xi}\, \langle P |\, \mathrm{Tr}\, F^{\mu\rho}(0)\, F^{\nu\sigma}(\xi)\, | P \rangle \big|_{\xi \cdot n = 0},    (1)

where gauge links have been omitted. The transverse projector is defined as g_T^{\mu\nu} = g^{\mu\nu} - P^\mu n^\nu / (P \cdot n) - n^\mu P^\nu / (P \cdot n). Moreover, p_T^2 = -\bm{p}_T^2 and M_p is the proton mass. The gluon correlator of an unpolarized proton can therefore be expressed in terms of two independent TMD distribution functions: f_1^g(x, \bm{p}_T^2) is the unpolarized one, while h_1^{\perp g}(x, \bm{p}_T^2) denotes the T-even, helicity-flip distribution of linearly polarized gluons, which satisfies the model-independent positivity bound [1]

    \frac{\bm{p}_T^2}{2 M_p^2}\, |h_1^{\perp g}(x, \bm{p}_T^2)| \le f_1^g(x, \bm{p}_T^2).    (2)

Like any TMD distribution, h_1^{\perp g} might receive contributions from initial and final state interactions that can render it nonuniversal and even hamper its extraction in processes for which TMD factorization does not apply.

Phenomenology

Several processes have been suggested to measure the experimentally unknown distributions f_1^g and h_1^{\perp g}. Although it has been discussed how to isolate the contribution from h_1^{\perp g} by means of an azimuthal-angle-dependent weighting of the cross section for dijet production in hadronic collisions [2], TMD factorization is expected to be broken in this case due to the presence of both initial and final state interactions [3]. A theoretically cleaner and safer way would be to study dijet or heavy quark pair production in electron-proton collisions, for instance at a future Electron-Ion Collider [4,5]. Another process where the problem of factorization breaking is absent is pp → γγX [6], which however suffers from a huge background from π⁰ decays and contaminations from quark-induced channels. In the following we show how TMD gluon distributions can be probed in heavy quarkonium production at the LHC. TMD factorization should hold in this case, provided that the two quarks that form the bound state are produced in a colorless state already at short distances.

Transverse momentum distributions of C = + quarkonia

We consider first the process p(P_A) + p(P_B) → Q(q) + X, where Q is a heavy quark-antiquark bound state with C = +, and the four-momenta of the particles are given between brackets. Assuming TMD factorization, the corresponding cross section can be written as

    dσ = \frac{1}{2s}\, [\,\cdots\,],

with s = (P_A + P_B)^2 being the total energy squared in the hadronic center-of-mass frame and A denoting the hard scattering amplitude of the dominant subprocess g(p_a) + g(p_b) → Q(q). The amplitude A is evaluated at order \alpha_s^2 within the framework of the color-singlet model. Color-octet contributions should be negligible, according to nonrelativistic QCD arguments [7].
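Before turning to the explicit spectra, here is a small numerical illustration of the positivity bound in Eq. (2): for a Gaussian model of f_1^g, it caps the allowed magnitude of h_1^{⊥g} at each transverse momentum. The Gaussian width and normalization below are placeholder choices, not fitted values.

```python
import numpy as np

# Illustrative check of the positivity bound (Eq. 2):
#   pT^2/(2 Mp^2) * |h1perp(x, pT^2)| <= f1(x, pT^2).
# Gaussian model for f1; the width <pT^2> is an assumed placeholder.
MP = 0.938          # proton mass, GeV
PT2_WIDTH = 1.0     # <pT^2> in GeV^2 (assumed)

def f1(pt2: np.ndarray, f1_collinear: float = 1.0) -> np.ndarray:
    return f1_collinear / (np.pi * PT2_WIDTH) * np.exp(-pt2 / PT2_WIDTH)

pt = np.linspace(0.1, 3.0, 6)
h1_max = 2.0 * MP**2 / pt**2 * f1(pt**2)   # value saturating Eq. (2)
for p, h in zip(pt, h1_max):
    print(f"pT = {p:.2f} GeV: |h1perp| <= {h:.3f} (same units as f1)")
```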
For small transverse momentum, q_T^2 \ll M_Q^2, with M_Q being the quarkonium mass, the resulting transverse momentum distributions for \eta_Q and \chi_{Q0,2} (Q = c, b) are given by

    [\,\cdots\,]

where \sigma \equiv \int dq_T^2\, d\sigma and y is the rapidity of the quarkonium along the direction of the incoming protons. Furthermore,

    [\,\cdots\,]

and we have used the following definition of the convolution of two TMD distributions f and g:

    \mathcal{C}[w\, f\, g] = \int d^2\bm{p}_{aT} \int d^2\bm{p}_{bT}\; \delta^2(\bm{p}_{aT} + \bm{p}_{bT} - \bm{q}_T)\; w(\bm{p}_{aT}, \bm{p}_{bT})\, f(x_a, \bm{p}_{aT}^2)\, g(x_b, \bm{p}_{bT}^2).

Our numerical estimates are shown in Fig. 1, where we have assumed that the gluon distributions have a simple Gaussian dependence on transverse momentum, namely

    f_1^g(x, \bm{p}_T^2) = \frac{f_1^g(x)}{\pi \langle p_T^2 \rangle}\, \exp\!\left(-\frac{\bm{p}_T^2}{\langle p_T^2 \rangle}\right),

where f_1^g(x) is the collinear gluon distribution and the width \langle p_T^2 \rangle is taken to be independent of x and of the energy scale, set by M_Q. The bound in Eq. (2) is satisfied, although not everywhere saturated, by the form

    [\,\cdots\,]

with 0 < r < 1. The distributions for \eta_{c,b} and \chi_{c,b\,0} are similar to the ones for a pseudoscalar and a scalar Higgs boson [8,9], and can be used to extract h_1^{\perp g}, while f_1^g can be accessed by looking at \chi_{c,b\,2}. A comparison among the different spectra could help to cancel out uncertainties. This measurement requires forward detectors like LHCb, which hopefully will be able to provide such data in the near future.

C = − quarkonium production in association with a photon

Along the lines of the previous section, we study the process p(P_A) + p(P_B) → Q(P_Q) + γ(P_γ) + X, where now Q is a C = − quarkonium (J/ψ or Υ) produced almost back to back with the photon. Hence the imbalance q_T = P_{QT} + P_{γT} will be small, but not the individual transverse momenta of the two particles. No forward detector is therefore needed in this case. The cross section has the following structure:

    \frac{d\sigma}{dQ\, dY\, d^2\bm{q}_T\, d\Omega} = [\,\cdots\,],

where Q and Y are the invariant mass and the rapidity of the pair, to be measured, like q_T, in the hadronic center-of-mass frame. The solid angle Ω = (θ, φ), on the other hand, is measured in the Collins-Soper frame, where the final pair is at rest and the \hat{x}\hat{z}-plane is spanned by P_A and P_B, with the \hat{x}-axis set by their bisector. The transverse weights are given by

    [\,\cdots\,]

and the light-cone momentum fractions are x_{a,b} = e^{\pm Y}\, Q/\sqrt{s}. Explicit expressions for F_{1,3,4} can be found elsewhere [10]. We propose the measurement of the following three observables:

    S_{q_T}^{(n)} \equiv \frac{\int d\phi\, \cos(n\phi)\, \dfrac{d\sigma}{dQ\, dY\, d^2\bm{q}_T\, d\Omega}}{\int dq_T^2 \int d\phi\, \dfrac{d\sigma}{dQ\, dY\, d^2\bm{q}_T\, d\Omega}},

with n = 0, 2, 4, where the q_T^2 integration in the denominator runs up to (Q/2)^2. In this way we are able to single out the three terms in Eq. (9).

[Fig. 2 caption: Estimates of S_{q_T}^{(0)}, S_{q_T}^{(2)}, and S_{q_T}^{(4)} for the process p(P_A) + p(P_B) → Q(P_Q) + γ(P_γ) + X at √s = 14 TeV in the kinematic region defined by Q = 20 GeV, Y = 0, θ = π/2, and x_a = x_b ≈ 1.4 × 10^{-3}.]

Our model predictions are presented in Fig. 2 for Υ + γ production, in a kinematic region where color-octet contributions are suppressed [10]. The size of S_{q_T}^{(0)} should be sufficient to allow for a determination of the shape of f_1^g as a function of q_T. Since S_{q_T}^{(2)} and S_{q_T}^{(4)} are considerably smaller, one would need to integrate them over q_T^2 [up to (Q/2)^2] to obtain at least experimental evidence of a nonzero h_1^{\perp g}.

Conclusions

The distribution of linearly polarized gluons inside an unpolarized proton, h_1^{\perp g}, leads to a modulation of the transverse momentum distribution of scalar (\chi_{c0}, \chi_{b0}) and pseudoscalar (\eta_c, \eta_b) quarkonia that depends on their parity. It does not contribute to the transverse spectra of \chi_{c2} and \chi_{b2}, which can therefore be used to probe the unpolarized gluon distribution f_1^g. No angular analysis is needed for such measurements, and experimental opportunities are offered by LHCb and the proposed fixed-target experiment AFTER at the LHC [11].
Furthermore, a first determination of h_1^{\perp g} and f_1^g could come from J/ψ(Υ) + γ production at the running experiments at the LHC, where yields are large enough to perform these analyses with existing data at √s = 7 and 8 TeV. We have shown that, together with similar studies in the Higgs sector, quarkonium production can be used to extract gluon TMDs and to investigate their process and energy-scale dependences.
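As a closing illustration of how the weighted observables S_{q_T}^{(n)} of the previous section would be estimated from data, the sketch below computes cos(nφ) moments of a toy event sample whose azimuthal distribution carries small cos 2φ and cos 4φ modulations. The modulation amplitudes are arbitrary placeholders, not model predictions.

```python
import numpy as np

# Toy estimate of azimuthal moments ~ S^(n): average of cos(n*phi) over events.
rng = np.random.default_rng(0)

a2, a4 = 0.05, 0.01          # placeholder modulation amplitudes
def sample_phi(n_events: int) -> np.ndarray:
    # Accept-reject sampling from 1 + a2*cos(2 phi) + a4*cos(4 phi).
    out = []
    while len(out) < n_events:
        phi = rng.uniform(0.0, 2.0 * np.pi, n_events)
        w = 1.0 + a2 * np.cos(2 * phi) + a4 * np.cos(4 * phi)
        keep = rng.uniform(0.0, 1.0 + abs(a2) + abs(a4), n_events) < w
        out.extend(phi[keep])
    return np.array(out[:n_events])

phi = sample_phi(200_000)
for n in (0, 2, 4):
    moment = np.mean(np.cos(n * phi))   # <cos n phi>; equals a_n/2 for n = 2, 4
    print(f"<cos {n}phi> = {moment:+.4f}")
```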
2014-06-02T20:16:54.000Z
2014-06-02T00:00:00.000
{ "year": 2014, "sha1": "f26d932263926325d0e7e3fc8290e7c7eb5f28b8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f26d932263926325d0e7e3fc8290e7c7eb5f28b8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1356726
pes2o/s2orc
v3-fos-license
On-chip low loss heralded source of pure single photons

A key obstacle to the experimental realization of many photonic quantum-enhanced technologies is the lack of low-loss sources of single photons in pure quantum states. We demonstrate a promising solution: generation of heralded single photons in a silica photonic chip by spontaneous four-wave mixing. A heralding efficiency of 40%, corresponding to a preparation efficiency of 80% accounting for detector performance, is achieved due to efficient coupling of the low-loss source to optical fibers. A single photon purity of 0.86 is measured from the source number statistics without filtering, and confirmed by direct measurement of the joint spectral intensity. We calculate that similarly high-heralded-purity output can be obtained from the visible to telecom spectral regions using this approach. On-chip silica sources can have immediate application in a wide range of single-photon quantum optics applications which employ silica photonics.

Introduction

Photonic quantum-enhanced technologies aim to employ nonclassical states of light to surpass classical performance limits in diverse fields including computation [1,2], metrology [3], and communication [4]. An impediment to faster progress is the quality of available single photon sources. Building the low-loss sources of high purity single photons necessary for quantum-enhanced performance has proven challenging [5,6,7,8,9].

Heralding spontaneous emission from a nonlinear-optical material has to date been the most common method of generating single photons [10,11,12,13]. Nonlinear processes such as spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM) can be used to create pairs of photons. Due to this pairwise emission, detection of one photon, the heralding photon, indicates the creation of its partner, the heralded single photon. A key source metric is the preparation efficiency η_P, the conditional probability that a heralded photon is delivered to its application given detection of the heralding photon. For example, the rate at which multiphoton states are generated from single-photon sources scales exponentially with η_P, even with multiplexing strategies which ideally achieve near-deterministic emission [14]. In numerous applications, η_P is a crucial parameter for a quantum method to demonstrate true advantage over a classical approach [6,7,8,9].

Four-wave mixing in a silica waveguide is a promising route to achieving exceptionally high η_P [12,15]. In heralded photon sources, reduction in η_P results from loss of the heralded photon, due to scattering or imperfect mode-matching at interfaces. Waveguides in commonly available silica glasses minimize these effects due to their exceedingly low optical loss and excellent mode-matching to optical fiber, a ubiquitous component in quantum photonics. While silica's relatively small χ^(3) nonlinearity affects the emitted photon flux, in many applications it is loss that is fundamental to demonstrating quantum enhancement, not flux. In fact, for heralded single photon sources one must deliberately keep emission rates low to avoid unwanted heralding of more than one photon. The transverse confinement of the waveguide allows these desired photon production rates to be reached at readily available pump powers.

Typical quantum applications require heralded photons in pure quantum states in addition to high source η_P. In general, however, the heralded pair source generates mixed quantum states.
Energy and momentum conservation can lead to entanglement between multiple spatial and spectral modes of the emitted photon pairs [16,17]. If the heralding photon is detected but its mode is not resolved, then the heralded photon is left in a mixed quantum state of all possible modes. One approach to achieve a high-purity heralded photon is to use spectral and spatial filters on the heralding field, which ensures the detector responds to only a single spatiotemporal mode. Such filters reduce the rate at which heralded photons are emitted. On-chip SPDC sources are capable of high photon flux [18,19,20,21], which allows acceptable count rates even when filters are used to achieve good photon purity. For SFWM in silica, on the other hand, the reduced count rate from this approach could lead to unacceptably long data collection times.

An alternate approach we employ here is to engineer the source to emit photons into a single pair of modes. As a consequence, the heralding and heralded photon are not entangled. The resulting factorable output allows heralded photons of high purity without filtering. Guided-wave photonics allows the precise control of optical modes needed to construct such a source. Previous silica sources have demonstrated this strategy using optical fibers. In these structures, however, fabrication inhomogeneity [11,22] or sensitivity to the local environment [12,23,24,25] prevented a robust scalable solution. While on-chip SFWM sources have been demonstrated in chalcogenide glass [26] and silicon [27], these devices exhibited relatively high loss and did not pursue the factorable source design strategy described here.

Here we report the first photon source on a silica photonic chip. Our SFWM source generates heralded single photons of purity P = 0.86 and η_P = 80% without any filtering. Heralded single photons are emitted at a rate of 3.1 × 10⁵ photons per second with a pump field power of 150 mW. We use a birefringent waveguide to phasematch SFWM at frequencies where the red-detuned photon lies far beyond silica's Raman gain peak. This minimizes spontaneous Raman scattering [28,12,15], which is the principal noise source in many silica sources [29,30]. A solid-clad silica waveguide provides a convenient and robust platform for our source, which we foresee finding immediate application in integrated quantum optics experiments that frequently rely on similar integrated silica architectures [2,13,31,32,33,34,35].

Experimental overview

Our source uses a 4 cm long waveguide fabricated by femtosecond laser writing in an undoped silica chip (Lithosil Q1) [36,37]. We use adaptive optics to shape the writing beam [38] and produce an elliptical transverse mode yielding a birefringence of ∆n = 10⁻⁴. By measuring the insertion loss and imaging the spatial mode, we determine the propagation loss in our waveguide to be less than 0.4 dB/cm [38]. A single pump field with central wavelength λ_p = 729 nm is generated by an 80 MHz Ti-sapphire oscillator whose spectral bandwidth ∆λ_p is adjusted with a mechanical slit in a 4-f line, as shown in Fig. 1(a). We describe the high (low) frequency photon in the emitted pair as the signal (idler). A dichroic beam splitter (Semrock FF740) separates these signal and idler
photons, which have central wavelengths of λ_s = 676 nm and λ_i = 790 nm.

[Fig. 1 caption: (a) After the chip, the pump is filtered out while the signal and idler fields are separated by a dichroic mirror (DM) and coupled into separate single-mode fibers (SMFs). (b) Source count rate and cross-correlation g^{(2)}_{si}(0) are measured with the SMFs coupled directly into avalanche photodiodes (APDs). (c) The autocorrelation g^{(2)}_{ii}(0) is measured by connecting the idler fiber to a fiber beam splitter (BS) with a reflectivity of 50% and ignoring the signal field, while the signal field is used as a herald when determining g^{(2)}_H(0). We measure g^{(2)}_{ss}(0) by inserting the signal fiber into the BS and ignoring the idler field (not shown). (d) To measure joint spectral intensities, the source SMFs are connected to separate monochromators with multi-mode fibers (MMFs) at the output performing a raster scan over the joint spectral range while APDs count in coincidence.]

A combination of spectral (Semrock 684/24 and 800/12) and polarization filters extinguishes the pump field before the signal and idler modes are coupled into single-mode fibers. Silicon avalanche photodiodes (APDs) (Perkin Elmer SPCM-AQ4C) and an FPGA-based coincidence counter (Xilinx SP-605) are used to measure marginal and joint photon statistics, as shown in Fig. 1(b)-(d) and discussed below.

Nonclassical emission and heralding of single photons

We first confirm nonclassical operation of our source by measuring the cross- and autocorrelations of the signal and idler modes with the setup shown in Fig. 1(b)-(c). Classical fields must satisfy the Cauchy-Schwarz inequality

    [g^{(2)}_{si}(0)]^2 \le g^{(2)}_{ss}(0)\, g^{(2)}_{ii}(0),

where g^{(2)}_{xy}(τ) is the second-order coherence between modes x and y at relative time delay τ [39]. For our pulsed source, we calculate

    g^{(2)}_{xy}(0) = \frac{N_p\, N_{xy}}{N_x\, N_y},

where N_x (N_{xy}) corresponds to single (temporally coincident) detection events on mode x (x and y). The number of trials is given by N_p, the number of pulses from the Ti-sapphire pump. Autocorrelations are described by g^{(2)}_{x_1 x_2}(0), where x_{1,2} refer to the two output ports of a fiber beamsplitter with a reflectivity of 50%, as shown in Fig. 1(c). With the pump bandwidth ∆λ_p = 3.1 nm and 100 mW average power, we measure g^{(2)}_{si}(0) = 73.5 ± 1.1, g^{(2)}_{ss}(0) = 1.82 ± 0.03, and g^{(2)}_{ii}(0) = 1.26 ± 0.02, with no background subtraction.

[Fig. 2 caption: The count rate for the signal mode alone (N_s) and in temporal coincidence with the idler mode (N_si), shown as a function of pump power (blue circles). The heralding efficiency, which includes APD detector losses, is calculated as η_H = N_si/N_s (green). At low pump powers, η_H decreases due to the heightened importance of detector dark counts, which cause false herald events. The heralded autocorrelation of the idler photon, with the signal field as herald, g^{(2)}_H(0), increases linearly due to spontaneous Raman scattering (red squares). Quadratic (blue) and linear (red) fits to the data are shown with dotted lines. Error bars are smaller than the markers for the count data and g^{(2)}_H(0).]

The Cauchy-Schwarz inequality is violated here by 49 standard deviations, which demonstrates a nonclassical correlation between the signal and idler modes in the photon number basis. It is this correlation in photon number that makes such sources suitable for heralding single photons. Using the signal photon as the heralding photon, we measure the heralding efficiency η_H = N_si/N_s. We calculate η_H > 40% for pump powers of 75 to 150 mW, as seen in Fig. 2. Since η_H is limited by the detector efficiency, which is not our concern here, we estimate η_P, which corresponds to the probability that the idler photon arrives at its detector given detection of the heralding signal photon.
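A minimal sketch of this count-rate bookkeeping follows. The count numbers and the detector-efficiency value are placeholders, not the measured data or the manufacturer's figure.

```python
# Sketch: second-order coherences and heralding efficiency from raw counts.
# All numbers below are illustrative placeholders, not measured data.
N_p = 8.0e7 * 600          # pump pulses: 80 MHz repetition rate x 600 s (assumed)
N_s, N_i, N_si = 2.1e8, 1.9e8, 8.6e7   # singles and coincidences (placeholders)

g2_si = N_p * N_si / (N_s * N_i)       # cross-correlation estimator
print(f"g2_si(0) = {g2_si:.1f}")

# Cauchy-Schwarz check against measured autocorrelations (placeholders).
g2_ss, g2_ii = 1.82, 1.26
print("classical bound violated:", g2_si**2 > g2_ss * g2_ii)

eta_H = N_si / N_s                      # heralding efficiency (includes detector loss)
eta_det = 0.5                           # assumed APD efficiency at the idler wavelength
eta_P = eta_H / eta_det                 # preparation efficiency
print(f"eta_H = {eta_H:.2f}, eta_P = {eta_P:.2f}")
```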
Using the manufacturer's specified detector efficiency, we estimate η_P = 80%. The remaining inefficiency is primarily due to loss from coupling to single-mode fiber, reflection at interfaces, and scattering. Neither the chip nor the fibers were AR-coated. We quantify the suppression of undesired higher-order photon emission via the heralded second-order correlation at zero relative time delay, for which all counts are conditional on detecting a heralding signal photon, as shown in Fig. 1(c). An ideal single-photon source would give g^{(2)}_H(0) = 0. We measure g^{(2)}_H(0) = 0.0092 ± 0.0004 for a pump power of 25 mW. Spontaneous Raman scattering is the primary noise source, which explains the linear increase in g^{(2)}_H(0) as the source is pumped harder. Our low g^{(2)}_H(0) values are comparable to other SFWM sources [15,25] that take advantage of birefringent phase matching [28], which allows significant suppression of the Raman noise that has inhibited other SFWM sources [29,30].

Controlling the heralded state purity

The pairwise emission that allows heralding of single photons is not sufficient for many applications; these photons must also be in pure states. Undesired correlations between the signal and idler fields that are not resolved by the heralding detector instead result in heralding of mixed states. Such correlations imply multimode emission in the frequency, polarization, or spatial degrees of freedom. Phasematching constraints generally force the signal and idler fields to be emitted into single polarizations. Furthermore, our waveguide constrains each of the three fields (pump, signal, idler) to a single spatial mode. This is more readily achieved for a SFWM source, in contrast to SPDC, due to the reduced disparity in field frequencies. We focus on the remaining possibility that correlations are generated in the spectral degree of freedom. The effective SFWM Hamiltonian can thus be approximated as [16]

    \hat{H}_{SFWM} \propto \zeta \int d\omega_s\, d\omega_i\; f(\omega_s, \omega_i)\, \hat{a}^\dagger(\omega_s)\, \hat{a}^\dagger(\omega_i) + \mathrm{h.c.},    (3)

where the joint spectral amplitude f(\omega_s, \omega_i) = \int d\omega'\, \alpha(\omega')\, \alpha(\omega_s + \omega_i - \omega')\, \phi(\omega_s, \omega_i) is a function of the pump envelope \alpha(\omega) and the phasematching function \phi(\omega_s, \omega_i), while \zeta depends on both the pump intensity and the magnitude of the χ^(3) nonlinearity. We approximate \alpha(\omega) as a Gaussian function and specify ∆λ_p as the full width at half maximum of |\alpha(\omega)|^2. The phasematching function results from integrating over the length of the guide, \phi(\omega_s, \omega_i) = \int_0^L e^{i\Delta k z}\, dz \propto e^{i\Delta k L/2}\, \mathrm{sinc}(\Delta k L/2), where the wavevector mismatch is \Delta k = 2k_p - k_s - k_i. We neglect the phase mismatch arising from the pump pulse intensity, since this is small for our source. In the normal dispersion regime, SFWM is phase-matched (\Delta k = 0) when the pump field is polarized along the slow axis and both signal and idler fields are polarized along the fast axis of the birefringent waveguide.

The spectral entanglement generated by the Hamiltonian in Eq. (3) can be viewed as a consequence of energy and momentum conservation. To quantify these correlations, we rewrite the Hamiltonian in terms of a minimal set of broadband modes via the Schmidt decomposition [40],

    \hat{H}_{SFWM} \propto \zeta \sum_{m=1}^{N} c_m\, \hat{A}^\dagger_m\, \hat{B}^\dagger_m + \mathrm{h.c.},    (4)

where \hat{A}^\dagger_m = \int d\omega_s\, \xi_m(\omega_s)\, \hat{a}^\dagger(\omega_s) and \hat{B}^\dagger_m = \int d\omega_i\, \psi_m(\omega_i)\, \hat{a}^\dagger(\omega_i). The sets of functions \{\xi_m\} and \{\psi_m\} are called Schmidt modes, which define orthonormal bases for the signal and idler Hilbert spaces, respectively. This decomposition shows that the evolution due to SFWM is equivalent to an ensemble of two-mode squeezing operators, e^{i\hat{H}_{SFWM}} = \hat{S}_{A_1,B_1} \otimes \hat{S}_{A_2,B_2} \otimes \cdots,
where \hat{S}_{A_m,B_m} is a two-mode squeezing operator on modes A_m and B_m [21]. In general, the Schmidt decomposition in Eq. (4) includes significant contributions from multiple modes, so that N > 1. As previously discussed, a detector that cannot resolve these different frequency modes leads to heralding of mixed quantum states. To restore high purity, one can employ spectral filters to remove the higher modes, but at the cost of reduced count rate. In contrast, the approach we adopt is to design the source to emit only into a single pair of Schmidt modes [16,17,12,41]. This allows heralding of high-purity states without filtering. We use the singular value decomposition to numerically perform the Schmidt decomposition in Eq. (4). This allows one to predict the heralded photon purity given the source parameters \alpha(\omega) and \phi(\omega_s, \omega_i) [41].

To investigate the frequency-mode structure of our source, we measure the marginal photon number distribution. An ideal single-mode source would exhibit a thermal distribution, while a highly multi-mode emitter gives Poissonian statistics [42].

[Fig. 3 caption: g^{(2)}_{ss}(0) of the signal field shows control over the amount of spectral entanglement between the signal and idler fields as ∆λ_p is adjusted (red). Theoretical curve (dashed) and data (points) are shown. Filtering out the peripheral lobes in the joint spectral intensity with a 4.5 nm filter on the signal field is calculated to give g^{(2)}_{ss} = 1.98 at ∆λ_p = 3 nm (blue). Insets: joint spectral intensity (JSI) measurements demonstrate spectral entanglement control. The FWHM of the pump envelope |α|⁴ (white) and the phase-matching function |φ|² (purple) accurately predict the JSI orientation.]

In our source, the transition from single-mode to increasingly multi-mode behavior can be readily adjusted via the pump bandwidth ∆λ_p [15]. To demonstrate this, we measure the autocorrelation g^{(2)}_{ss}(0) as ∆λ_p is varied, as shown in Fig. 3. For ∆λ_p = 3.1 nm we measure our optimal g^{(2)}_{ss}(0) = 1.86 ± 0.02, which is close to the ideal thermal result g^{(2)}(0) = 2. Only APD dark counts are subtracted from the data used to calculate g^{(2)}_{ss}(0) in Fig. 3. One can relate the number of excited Schmidt modes to these statistics using g^{(2)}_{ss}(0) = 1 + \sum_m |c_m|^4 = 1 + P, where P is the heralded purity and the Schmidt coefficients are normalized such that \sum_m |c_m|^2 = 1. Thus, we have demonstrated P = 0.86 without any filtering of the signal and idler modes. To our knowledge, no previous on-chip source has simultaneously demonstrated purities and efficiencies as high as P = 0.86 and η_P = 80%.

The near single-mode emission of our source is further supported by joint spectral intensity measurement. A spectrally uncorrelated state has a factorable joint spectral amplitude, and thus a factorable joint spectral intensity. In the middle inset of Fig. 3, we find that the major and minor axes of the central ellipse are parallel to the λ_{i,s} axes, which shows a factorable intensity I(λ_i, λ_s) = I_i(λ_i) I_s(λ_s). As ∆λ_p is adjusted away from this optimal value, the joint spectral intensities in the left and right insets of Fig. 3 indicate entanglement, with an increasingly tilted central lobe. These spectral correlations are in agreement with the measured decrease in g^{(2)}_{ss}(0). At the optimal ∆λ_p, the slight deviation from ideal thermal statistics, and the corresponding reduction in heralded state purity, is principally due to peripheral lobes in the phasematching function. These arise from the hard-edge boundaries of the waveguide and are faintly observed in the joint spectral intensity plots.
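A minimal numerical sketch of the SVD-based Schmidt analysis mentioned above: build a joint spectral amplitude as (Gaussian pump envelope) × (sinc phasematching) on a grid, take its SVD, and read off the heralded purity P = Σ c_m⁴ and the predicted g^{(2)}_{ss}(0) = 1 + P. The widths and phasematching orientation are placeholder parameters, not the fitted values for this source.

```python
import numpy as np

# Sketch: heralded-state purity from an SVD of a model joint spectral amplitude.
# Pump width and phasematching width below are placeholders.
w = np.linspace(-3.0, 3.0, 300)                  # detuning grid (arb. units)
ws, wi = np.meshgrid(w, w, indexing="ij")

pump = np.exp(-((ws + wi) ** 2) / (2 * 1.0**2))  # Gaussian pump envelope
phi = np.sinc((ws - wi) / 2.0)                   # sinc phasematching (np.sinc = sin(pi x)/(pi x))
jsa = pump * phi

c = np.linalg.svd(jsa, compute_uv=False)         # Schmidt coefficients (unnormalized)
c = c / np.sqrt(np.sum(c**2))                    # normalize: sum c_m^2 = 1
purity = np.sum(c**4)
print(f"P = {purity:.3f}, predicted g2_ss(0) = {1 + purity:.3f}")
```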
A 4.5 nm filter on λ_s would suppress the small lobes, which would yield g^{(2)}_{ss}(0) = 1.98, and a corresponding P = 0.98, while still passing 90% of photons. Such a filter leaves η_P unchanged, as it would only be applied to the heralding signal photon.

Our source offers large spectral tunability of the signal and idler fields. Fig. 4 shows that SFWM is phase-matched over a wide range of pump wavelengths. The inset figures illustrate that over this entire range ∆λ_p can be adjusted to achieve factorable output.

[Fig. 4 caption: Theoretical phasematching curves show the signal (blue) and idler (red) wavelengths that satisfy the condition ∆k = 0 for a range of pump wavelengths. Insets show predicted SFWM joint spectral intensities. A factorable output suitable for heralded production of pure photons can be produced over a wide spectral range, from visible to telecom wavelengths, by adjusting λ_p and ∆λ_p (left to right: ∆λ_p = 3.1, 7.5, 20 nm). These calculations assume L = 4 cm and ∆n = 10⁻⁴.]

For larger λ_p the group velocities of the pump and idler photons become comparable, which results in a large spectral bandwidth for the idler. This relatively simple route to broadband emission could be useful, for example, in quantum optical coherence tomography [43]. In contrast, many applications, such as quantum memories, instead require narrowband emission. A theory for cavity-enhanced SFWM has been developed that allows emission down to MHz bandwidths [44]. Cavities can be implemented using the refined Bragg grating technology available in silica [45,46].

Birefringence homogeneity

Our analysis so far assumes that the waveguide, and thus the wavevectors k_{p,s,i}, are constant throughout the source. Imperfections in this regard can diminish the purity of heralded photons. Due to our operation of a solid-clad guide far from its zero-dispersion wavelength, along with the excellent material uniformity of commercial silica [47,15], the dominant source of inhomogeneity is the birefringence. The spectral output is extremely sensitive to variation in the birefringence, since the phase-matched wavelengths depend critically on this parameter [12]. We consider two models of birefringence inhomogeneity and their effect on heralded state purity in Fig. 5. The effects of gradual changes during fabrication, for example variation in the writing laser power or local environment, are modeled as a birefringence that varies linearly along the length of the guide. Rapid fluctuations, on the other hand, are described as a random variation in the birefringence of a specified mean and standard deviation. For both models, the resulting phase-matching function is found by numerically integrating \phi(\omega_s, \omega_i) = \int_0^L e^{i z \Delta k(z)}\, dz, where \Delta k(z) has an explicit spatial dependence. The corresponding joint spectral amplitude in Eq. (3) is then used to determine both the heralded purity, via the Schmidt decomposition, and the joint spectral intensities in the insets of Fig. 5. These simple models suggest that our measured purity, to two standard deviations (gray band, Fig. 5), corresponds to a birefringence inhomogeneity of δ_max(∆n) ≤ 3 × 10⁻⁶.

Undesirable birefringence variations in sources can reduce the quality of quantum interference. Hong-Ou-Mandel interference of two single photons from identical sources produces a visibility equal to the photon purities. In the ideal case of δ(∆n) = 0 (left inset of Fig. 5),
In the ideal case of δ(∆n) = 0 (left inset of Fig. 5), we calculated a heralded purity of 0.98 when using a 4.5 nm filter on the heralding signal photon arm, which still transmits 90% of photons. For an inhomogeneity δ(∆n) = 3 · 10^{-6}, the heralded purity, and thus the interference visibility, remains high at 0.95, while the filter transmission is now 84%. Accounting for source inhomogeneity, current fabrication methods thus appear sufficient to obtain high-visibility interference from heralded sources.

Conclusion

We have demonstrated an on-chip source of heralded single photons that achieves extremely low loss (η_P = 80%) and high output state purity (P = 0.86) without any single-photon filtering. To our knowledge, no previous on-chip source has simultaneously demonstrated such low-loss heralding of high-purity states. Achieving this performance relied on spectrally factorable photon emission, which was enabled by the mode control allowed by source integration. An estimate of birefringence inhomogeneity suggests that current fabrication methods are sufficient for high-visibility interference between multiple sources on the same chip. Our source meets several loss thresholds for quantum-enhanced applications. Interferometric phase estimation, using single-photon sources with η_P = 80%, can achieve a precision better than any classical probe field with the same number of photons [7]. For linear optics quantum computing, the high η_P and low $g^{(2)}_H(0)$ demonstrated here enable entangling gates to violate Bell inequalities without postselection [8]. Exploiting this performance in future work is facilitated by the silica-chip architecture shared by our source and many recent integrated quantum optics experiments [2,13,31,32,33,34,35]. In the longer term, multiplexing of sources, either spatially with a switching network [48,49] or temporally with quantum memories [50], may provide a route to constructing many-photon quantum states [8,14]. Even with multiplexing, η_P and P still bound the composite source performance. Therefore, optimizing individual source performance remains critical. Our choice of SFWM in silica was motivated by the desire to minimize source loss. One direction for future work is to build similar SFWM sources at telecom wavelengths, where silica loss is even lower. As we have shown, the spectral flexibility of SFWM allows factorable states to be generated at these wavelengths with proper adjustment of ∆λ_p. Quantum optics experiments in this spectral region, supplied by silica SFWM sources, could investigate a variety of fundamental scientific questions that can feasibly be tested with larger numbers of single photons.
Predictive value of C‐reactive protein and the Pediatric Risk of Mortality III Score for occurrence of postoperative ventilator‐associated pneumonia in pediatric patients with congenital heart disease

Abstract

Importance Ventilator‐associated pneumonia (VAP) is one of the most common complications after cardiac surgery in children with congenital heart disease (CHD). Early prediction of the incidence of VAP is important for clinical prevention and treatment. Objective To determine the value of serum C‐reactive protein (CRP) levels and the Pediatric Risk of Mortality III (PRISM III) score in predicting the risk of postoperative VAP in pediatric patients with CHD. Methods We performed a retrospective review of clinical data of 481 pediatric patients with CHD who were admitted to our pediatric intensive care unit. These patients received mechanical ventilation for 48 hours or longer after corrective surgery. On the basis of their clinical manifestations and laboratory results, patients were separated into two groups of those with VAP and those without VAP. CRP levels were measured and PRISM III scores were collected within 12 hours of admission to the pediatric intensive care unit. The Pearson correlation coefficient was used to evaluate the association of CRP levels and the PRISM score with the occurrence of postoperative VAP. A linear regression model was constructed to obtain a joint function, and receiver operating characteristic curves were used to assess the predictive value. Results CRP levels and the PRISM III score in the VAP group were significantly higher than those in the non‐VAP group (P < 0.05). Receiver operating characteristic curves suggested that using CRP + the PRISM III score to predict the incidence of VAP after congenital heart surgery was more accurate than using either of them alone (CRP + the PRISM III score: sensitivity: 53.2%, specificity: 85.7%). When CRP + the PRISM III score was greater than 45.460, patients were more likely to have VAP. Interpretation Although using CRP levels plus the PRISM III score to predict the incidence of VAP after congenital heart surgery is more accurate than using either of them alone, its predictive value is still limited.

INTRODUCTION

Congenital heart disease (CHD) is one of the most common birth defects in children, and it seriously endangers the physical and mental health of children. However, with technical progress in cardiopulmonary bypass (CPB) and mechanical ventilation (MV), the postoperative survival rate of children with CHD has been greatly improved. MV is an important auxiliary breathing support technique; it uses a ventilator to maintain the pressure difference between the respiratory tract and the alveoli. MV can effectively maintain gas exchange and improve the internal environment by reducing the work of the respiratory muscles and ensuring appropriate ventilation. This is effectively used to treat multiple organ dysfunction syndrome. 1 In recent years, operations on pediatric patients with CHD have become increasingly difficult. Furthermore, use of a ventilator after surgery for patients with CHD is significantly prolonged, resulting in a significant increase in complications, which seriously affects the recovery of patients. 2 Among them, ventilator-associated pneumonia (VAP) is one of the most common complications after cardiac surgery. Most children with CHD need ventilator-assisted ventilation after surgery until spontaneous breathing is restored.
However, CHD can be severe and can be associated with malnutrition, lung infection, low immunity, and other factors, as well as intraoperative intubation, establishment of cardiopulmonary bypass, and improper use of antibiotics. Therefore, children with CHD are more likely to develop VAP during MV, resulting in prolonged use of MV and endangering children's lives. 3 Therefore, early prediction of the incidence of VAP is important for clinical prevention and treatment. This study retrospectively analyzed children with CHD who underwent cardiac surgery in our hospital and received mechanical ventilation for more than 48 hours. This study assessed the predictive value of a method combining serum CRP levels and the Pediatric Risk of Mortality III (PRISM III) score for predicting the incidence of postoperative VAP in pediatric patients.

Study subjects

From January 1, 2012 to December 31, 2015, 496 pediatric patients with CHD received cardiac surgery in our hospital. All of the children were transferred to the pediatric intensive care unit (PICU), where they received mechanical ventilation for different durations. Patients who received mechanical ventilation for 48 hours or longer were included in our study. We collected clinical data, such as age, gender, weight, surgical records, and physiological variables.

Standard for diagnosis of VAP

The diagnosis of VAP strictly followed the "Guidelines for the Diagnosis, Treatment and Prevention of Nosocomial Pneumonia" 4 issued by the Chinese Medical Association in 2013. VAP was diagnosed if the patient met criteria (1), (2), and (3) or criteria (1), (2), and (4): (1) mechanical ventilation for ≥ 48 hours or within 48 hours after weaning from mechanical ventilation; (2) new or progressive radiographic pulmonary infiltrate; (3) a pathogenic test with cultures of respiratory secretions suggesting new pathogens; and (4) other signs of infection. Signs of infection (at least one) were as follows: high body temperature (> 38°C); increased respiratory secretions that were purulent; new-onset rales; and a routine blood test that indicated an abnormal inflammation index, with a white blood cell count > 10.0 × 10^9/L or < 4.0 × 10^9/L and an increased proportion of neutrophils.

Detection of CRP levels

Each patient had serum CRP levels measured within 12 hours of admission to the PICU (enzyme-linked immunosorbent assay; Roche). The test procedure was in strict accordance with the manufacturer's instructions. If the CRP level was < 8 mg/L, this was below the level of detection and no value was assigned.

Calculation of the PRISM III score

Pollack et al presented the PRISM III scoring system on the basis of the Physiologic Stability Index. The PRISM III has 17 physiological variables, including systolic blood pressure, diastolic blood pressure, heart rate, respiratory rate, oxygenation index (PaO2/FiO2), partial pressure of carbon dioxide, pupillary reactions, prothrombin time/partial thromboplastin time, and serum potassium, sodium, and glucose levels. 5 In this study, two researchers independently analyzed and recorded the worst values of the relevant physiological variables obtained within 12 hours of admission to the PICU. These values were used to calculate the PRISM III score. If the PRISM III scores of one patient differed between the two researchers, a third researcher was introduced to determine the appropriate score.
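As a concrete illustration of the "worst value within 12 hours" bookkeeping, the sketch below shows one way to extract per-patient extremes from time-stamped measurements. The table layout, the variable names, and the min/max directions are hypothetical, chosen only for illustration; the actual PRISM III scoring uses its own published variable list and cut-points.

```python
import pandas as pd

# Hypothetical long-format table: one row per measurement, with columns
# ["patient_id", "hours_since_picu_admission", "variable", "value"].
WORST_SIDE = {           # illustrative only; not the PRISM III specification
    "systolic_bp": "min",
    "heart_rate": "max",
    "pao2_fio2": "min",
    "glucose": "max",
}

def worst_values_first_12h(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per patient with each variable's worst value in the first 12 h."""
    early = df[df["hours_since_picu_admission"] <= 12]
    out = {}
    for var, side in WORST_SIDE.items():
        grp = early.loc[early["variable"] == var].groupby("patient_id")["value"]
        out[var] = grp.min() if side == "min" else grp.max()
    return pd.DataFrame(out)
```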
Statistical analysis

Statistical analysis of the collected data was performed using the statistical software SPSS 18.0 (SPSS Inc., Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation (x̄ ± s), and the t test was used for comparison of inter-group data. Count data are expressed as a percentage or composition ratio (%), and the χ2 test was used for inter-group comparison. Correlations were assessed using the Pearson correlation coefficient, and a joint function was obtained from the linear regression model. SPSS software was used to draw the receiver operating characteristic (ROC) curve, and the areas under the curve were compared. P < 0.05 was considered to be statistically significant.

General data

Among the 496 pediatric patients, 481 who received mechanical ventilation for 48 hours or longer were included in our study. The patients were separated into two groups of those with VAP (VAP group) and those without VAP (non-VAP group). The VAP group comprised 47 patients, of whom 38 (80.85%) recovered and 9 (19.15%) died. The non-VAP group comprised 434 patients, of whom 380 (87.56%) recovered and 54 (12.44%) died. The clinical data of the two groups were collected within 12 hours of admission to the PICU and the PRISM III score was calculated. There were no significant differences in gender, age, weight, CPB time, aortic cross-clamping time, operation duration, duration of urethral catheter placement, and blood loss in surgery between the two groups (Table 1).

Inter-group differences in CRP levels and PRISM III scores

Patients were divided into two groups by CRP levels as follows: CRP levels < 8 mg/L and CRP levels ≥ 8 mg/L. Patients were also divided into two groups by the PRISM III score as follows: moderately ill with a score < 10 and critically ill with a score ≥ 10. 7 We found significant differences in the serum CRP level and the PRISM III score between the VAP and non-VAP groups (P < 0.05) (Table 2). Notably, CRP + PRISM III showed better consistency than did CRP levels or the PRISM III score alone (both P < 0.05) (Table 3).

Predictive value of CRP levels, the PRISM III score, and CRP + PRISM III for occurrence of VAP

On the basis of the gold standard for diagnosis of VAP, the ROC curves showed that the area under the curve for serum CRP levels was 0.684 and that for the PRISM III score was 0.677. The optimal points for CRP levels and the PRISM III score were 9.500 mg/L and 9.500, respectively. Diagnostic accuracy was higher when serum CRP levels were > 9.500 mg/L or the PRISM III score was > 9.500. Additionally, when the value of CRP + PRISM III was greater than 45.460, it had better specificity and sensitivity (Table 4 and Figure 1).

Table 1 notes: Data are presented as n or mean ± standard deviation. *χ2 value. CPB, cardiopulmonary bypass.
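The construction of the joint predictor and its ROC evaluation can be sketched as below. This is a stand-in on synthetic data, not the study's analysis: the group sizes match the paper (434 non-VAP, 47 VAP), but the distributions, the fitted coefficients, and the resulting cutoff are invented; the paper's reported cutoff of 45.460 comes from its own fitted linear model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n0, n1 = 434, 47  # non-VAP and VAP group sizes from the study
crp   = np.concatenate([rng.gamma(2.0, 4.0, n0), rng.gamma(3.0, 5.0, n1)])     # synthetic CRP (mg/L)
prism = np.concatenate([rng.normal(8.0, 3.0, n0), rng.normal(11.0, 3.0, n1)])  # synthetic PRISM III
y = np.concatenate([np.zeros(n0), np.ones(n1)])

X = np.column_stack([crp, prism])
joint = LinearRegression().fit(X, y)  # linear "joint function" of CRP and PRISM III
score = joint.predict(X)

print("AUC(CRP):  ", roc_auc_score(y, crp))
print("AUC(PRISM):", roc_auc_score(y, prism))
print("AUC(joint):", roc_auc_score(y, score))

fpr, tpr, thr = roc_curve(y, score)
j = np.argmax(tpr - fpr)              # Youden's J selects the optimal operating point
print(f"cutoff {thr[j]:.3f}: sensitivity {tpr[j]:.3f}, specificity {1 - fpr[j]:.3f}")
```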
DISCUSSION

CHD is one of the most important causes of infant and child death. In recent years, there has been continuous improvement in the diagnosis and treatment of CHD and widespread application of mechanical ventilation. Therefore, early surgical treatment of CHD has become an inevitable trend in clinical practice, and has greatly improved the survival rate of children with CHD. However, abnormal hemodynamics in children with CHD can easily affect their respiratory function, leading to pulmonary infection. Together with other unfavorable conditions, such as malnutrition and low immunity, as well as tracheal intubation, establishment of CPB, and use of broad-spectrum antibiotics, the incidence of postoperative VAP has significantly increased in children with CHD. 6 VAP is one of the most common nosocomial infections in patients in the PICU who have received mechanical ventilation for 48 hours or longer. Previous studies have shown that the incidence of VAP in the PICU ranges from 3% to 13.3%. 7-10 According to reports by Melsen et al, the mortality rate attributable to VAP in children is 13%. 11 The mortality rate of patients with VAP and high risk factors, such as those after CHD surgery, can be up to 49.2%. 12,13 In clinical work, CRP is often used as a marker of inflammation to indicate the severity of infection. Serum CRP levels of healthy people are low, but CRP levels are significantly elevated in patients with autoimmune disease, tissue necrosis, and infectious disease. 14,15 When the immune response subsides, serum CRP levels rapidly drop. In congenital heart surgery, long periods of CPB and aortic cross-clamping may affect the immune response, leading to elevated levels of inflammatory markers. 16 Therefore, whether CRP can be used to predict the incidence of VAP in the early stage is controversial, and the conclusions from relevant clinical studies (e.g., those of Póvoa and colleagues) are also inconsistent.

In the present study, we measured serum CRP levels and performed PRISM III scoring in children with or without VAP after cardiac surgery. We found that the CRP value at the optimal point for predicting the incidence of VAP was > 9.500 mg/L, but its sensitivity (46.8%) and specificity (10%) were low. The PRISM III score at the optimal point for predicting the incidence of VAP was > 9.500, and its sensitivity was average (57.4%), but the specificity was relatively high (70.5%). However, the combined CRP + PRISM III was highly specific (85.7%) and moderately sensitive (53.2%) for predicting the incidence of VAP. Additionally, the mean CRP level and PRISM III score were significantly higher in the VAP group than in the non-VAP group. However, correlation analysis showed that the PRISM III score did not increase correspondingly with an increasing CRP level, while CRP + PRISM III had better consistency than CRP or PRISM III alone. ROC analysis showed that when CRP + PRISM III > 45.460 was used as the prediction threshold for VAP, the area under the ROC curve was 0.736, the sensitivity was 53.20%, and the specificity was 85.70%, which was more accurate than either indicator alone. Our results from the current retrospective study suggest that the sensitivity and specificity of serum CRP levels are poor, so CRP alone is not suitable as a predictor of VAP. The PRISM III score alone is also not suitable for predicting the incidence of VAP because of its low sensitivity. Although using CRP + PRISM III to predict the incidence of VAP after congenital heart surgery is more accurate than using either of these indicators alone, its predictive value is still limited. Therefore, whether CRP + PRISM III can be used for predicting VAP requires further study. In this study, a relatively small sample size may have affected the accuracy of the diagnosis. The next step is to expand the sample size and use daily monitoring of serum CRP levels and the PRISM III score to further assess the clinical utility in predicting VAP. Additionally, multicenter clinical research on this issue should be performed in the future. The level of sensitivity and specificity found in our study was not satisfactory. Further prospective, large-sample studies need to be performed to obtain more effective prediction results.
Achiral Zeolites as Reaction Media for Chiral Photochemistry

Obtaining enantiomerically-enriched photoproducts from achiral reactants has been a long-sought goal. The various methods developed to achieve chiral induction in photoproducts during the last fifty years still suffer from a lack of predictability, generality, and simplicity. With the current emphasis on green chemistry, obtaining enantiomerically enriched products via photochemistry is a likely viable alternative for the future. Of the various approaches developed during the last three decades, the one pioneered in the author's laboratory involved the use of commercially-available and inexpensive achiral zeolites as the media. This approach does not use any solvent for the reaction. Examples from these studies are highlighted in this article. Since no chiral zeolites were available when the work was initiated in the author's laboratory, commercially-available zeolites X and Y were modified with chiral inductors so that the reaction space becomes chiral. The results obtained established the value of chirally-modified, commercial zeolites as media for achieving chiral induction in photochemical reactions. A recent report of the synthesis of a chiral zeolite is likely to stimulate zeolite-based chiral photochemistry for synthesizing enantiomerically-pure organic molecules. The availability of chiral zeolites in the future is likely to energize research in this area. Our earlier observations on this topic, we believe, would be valuable for the progress of the field. Keeping this in mind, I have summarized the work carried out in our laboratory on chiral photochemistry within chirally-modified zeolites. This review does not include examples where high chiral induction has been obtained via a strategy that examines molecules appended with a chiral auxiliary within achiral and chirally-modified zeolites. The latter approach yields products with diastereomeric excess >80%.

Introduction

The origin and continued existence of life on earth depends on light absorption by molecules [1]. The science of light, especially in the context of organic chemistry, has a history exceeding a century. Several photochemical reactions employed today were discovered in the early 1900s [2]. Interestingly, early photoreactions were performed in the solid state [3,4]. Between 1950 and 1970, several important photoreactions in solution, as well as in the solid state, were discovered, and their mechanistic details elucidated [4]. In conjunction with discoveries of new photoreactions, the development of theoretical and physical concepts related to the triplet state, radiative and radiationless transitions, and energy- and electron-transfer has contributed to the vigorous growth of the field of organic photochemistry [1]. These concepts, verified by several pioneering studies from 1960 to 1990 with time-resolved techniques, have placed the field of photochemistry on a firm footing and have allowed it to permeate other disciplines of chemistry. In recent times, photochemistry has advanced from a purely basic science to a more applied discipline. In this progression, well-established concepts are finding use under new nomenclatures. Unfortunately, this results in the younger generation being unaware of the fundamental work done in various fields of photochemistry by the pioneers. Two such recent examples are 'visible light photocatalysis' [5] and 'up-conversion' [6], which have their origin in well-investigated energy- and electron-transfer processes.
In the near future, similar to the aforementioned two processes, asymmetric photochemistry is likely to play an important role in the construction of complex chiral molecules. Believing that a brief summary of the early contributions to this topic would be valuable and appropriate, I have outlined our work with zeolites as the media for the generation of optically-enriched products from achiral reactants. Due to space constraints, this article is limited to asymmetric photochemistry in zeolites, i.e., the work carried out in the author's laboratory [7,8]. A number of reviews are available on this process in solution [9-21]. Several supramolecular assemblies have been successfully used as reaction media to achieve chiral induction [22,23]. The monograph on asymmetric photochemistry in various media by Inoue and Ramamurthy provides detailed coverage of this topic [24].

The Beginnings of Asymmetric Photochemistry

To our knowledge, the first report on chiral photochemistry in solution came from the laboratory of Hammond [25]. Hammond and Cole employed optically-active triplet sensitizers to bring about the geometric isomerization of achiral cis-diphenylcyclopropane. Although the enantiomeric excess obtained was small, this report served to seed interest in this research problem. Since that first report, several groups have performed enantio- and diastereo-selective phototransformations, both in solution [9,10] and in the solid state [3]. Although several photoreactions in the crystalline state were reported in the early 1900s, only in the 1960s was a systematic study on this topic performed, by Schmidt and coworkers [3]. Since knowledge of both crystallography and photochemistry is essential to making a significant contribution to this topic, the growth of solid-state photochemistry was slow. During the early days, progress in asymmetric solid-state photochemistry was dependent on serendipity. Often, the fortuitous crystallization of achiral molecules in chiral space groups prompted a project [26-29]. Because of this, unfortunately, relatively few examples of chiral induction during the photolysis of achiral molecules in the crystalline state [30,31] were reported during this period [32]. Thus, the direct transformation of achiral molecules to optically-enriched molecules by light-induced reactions in the crystalline state has proved to be a challenge. Slow progress, as well as the lack of a clear technique to pack pure achiral molecules as chiral crystals, led to exploration of the use of chiral hosts to form chiral clathrates in the crystalline state. In this strategy, an achiral reactant molecule was enclosed within a chiral host molecule to produce a crystalline host-guest complex. The report by Natta, in which he used optically-active perhydrotriphenylene [33] as the chiral host and 1,3-dienes as the achiral guests, paved the way for future research on this topic. This approach has been elaborated upon successfully by Toda and coworkers, who employed organic chiral diol hosts [34-36]. Importantly, Toda and coworkers were able to achieve quantitative chiral induction in a few examples. In spite of this, no clear understanding of the rules that govern the types of molecules that would complex with the host has been reached. Therefore, this approach, while being highly promising, continues to be unpredictable.
Thus, the two approaches mentioned above that are guaranteed to yield chiral products from achiral reactants are not universal, and predictability is poor. Recognizing the need for a more general methodology, Scheffer introduced a technique known as the "ionic chiral auxiliary approach" [37-41]. In this method, achiral molecules are prompted to crystallize in chiral space groups by forming a chiral ionic salt from an achiral reactant (acid or base) and a chiral auxiliary (base or acid). By this approach, near-quantitative chiral induction has been achieved in solid-state photochemistry. Although the salt yields the products as diastereomers with respectable diastereomeric excess (de), it can be readily hydrolyzed to give the products as enantiomers (enantiomeric excess; ee). While this approach is more general, predictability is still not 100%, although there are exceptions. The main problem with photoreactions in the crystalline state is that they are driven by molecular packing. However, the rules of molecular packing controlling the reactivity of molecules in the crystalline state and in solid host-guest assemblies are yet to be fully developed [42,43]. This led us to explore zeolites as media for chiral photoreactions. As outlined below, zeolites are microcrystalline solids with well-defined void spaces inside. In these spaces, organic molecules could be packed as in organic clathrates. Research from our laboratory that is briefly summarized in this review reveals that chiral induction in photoproducts could be achieved if the zeolite that is used as the reaction medium is modified with photoinert chiral inductors.

Zeolites as Media for Asymmetric Photoreactions: Chiral and Achiral Zeolites

Zeolites are inorganic microporous and microcrystalline materials that have the ability to adsorb small- and medium-sized organic molecules [44,45]. Zeolites, being porous, tend to adsorb organic molecules on both the external and internal surfaces. For adsorption within the zeolites, the guest molecule should be able to diffuse into the interior through channels and cages that open onto the external surface. Since the area of a zeolite's internal surface largely exceeds that of its external surface, most adsorption takes place on the interior surface. However, the feasibility of intracrystalline adsorption depends on the kinetic diameter of the guest molecule; it has to be smaller than the diameter of the passage to the intracrystalline cages and channels. The internal structure of zeolites is porous and contains well-defined cages and channels. Of the various zeolites, ZSM and faujasite types have attracted the attention of photochemists as reaction media. ZSM zeolites have narrow channels (dia: 5.5 Å), while faujasites contain cages (dia: ~13 Å). Also, while the faujasites contain a large number of cations, ZSM zeolites have very few. The number of cations in zeolites is controlled by the framework Si/Al ratio. For chiral photochemistry, faujasites, which possess larger cages and a large number of cations, are valuable. Synthetic faujasite zeolites, known as X and Y, have the unit cell composition M_x(AlO2)_x(SiO2)_{192−x} (x = 86 for X and 56 for Y, consistent with the cation counts given below), where M is a monovalent cation. The ratio of AlO2 to SiO2 varies; this controls the number of cations that are associated with the framework. The internal structure of faujasites consists of two types of cages, i.e., sodalite cages and supercages. Smaller sodalite cages assemble to form large supercages wherein the guest organic molecules could be accommodated.
Sodalite cages are too small to accommodate organic molecules. The supercages, which are nearly spherical, have a diameter of ~13 Å. Entry to the cage is controlled by smaller windows that are 7.5 Å in diameter. There are four windows to a cage, and they are tetrahedrally distributed about the center of the supercage. The internal porous structure of faujasites is formed by a three-dimensional network of sodalite cages and supercages. Since all cages are interconnected, a molecule that enters the faujasite supercage has access to all the interior cages, and can spread between multiple cages. The cations present within zeolite cages also play an important role during the photoreactions of the included guests. Since the Al that replaces Si carries a negative charge, a positively-charged cation neutralizes the structure. Therefore, the number of cations present in a zeolite is equivalent to the number of Al ions present in the framework. In the context of reactions within zeolites, the cations present in the supercage play an important role. Because sodalite cages are too small, the Type-I cations (16 cations per unit cell in both X and Y) present in sodalite cages are not important in the context of photochemistry. Organic molecules would be able to interact with the Type-II cations (32 per unit cell in both X and Y) and Type-III cations (38 per unit cell in the X type and only eight per unit cell in the Y type) located within the supercage. Figure 1 identifies the locations of cations within X and Y zeolites. In addition to the dimensions and the cations, the nature of the wall also influences the reaction that occurs within a supercage. In general, the cavity in which a molecule is accommodated could be classified as 'hard' or 'soft' and 'active' or 'passive' [46,47]. Since the walls of zeolite cages are inflexible, these cages are considered hard. Because of their hardness, the walls of zeolites do not change their shape over the course of a reaction. Also, since the cations could interact with the guest molecules through weak interactions, the cages are considered to be active. Such a property makes it possible to manipulate the behavior of reactants through cations. Since the walls are inflexible, the shape and the free volume of the supercage will determine, to some extent, the nature of the product obtained from a guest molecule. The volume available for an organic molecule within a supercage depends on the number and nature of the cations. For example, it varies from 873 Å3 in NaX to 732 Å3 in CsX. The structural characteristics of the zeolites described above make them ideal hosts in which to carry out photochemical reactions. Ready commercial availability at low cost and well-established methods to quantitatively exchange cations (both organic and inorganic) make them attractive as media with which to conduct chiral photoreactions. In order for a system to be an ideal host, it should satisfy certain criteria, namely: (i) there should be a size match between the host and the guest; (ii) the guest should possess a distinctly different electronic absorption from that of the host; (iii) the host emission, if any, should not interfere with the emission from the included guest; and (iv) included guests, upon excitation, should not undergo reaction with the host. Zeolites satisfy all the above criteria. Zeolites, like silica, scatter light and do not show emission.
In spite of scattering, light can penetrate the zeolite particles and reach the molecules adsorbed on the interior surface. Thus, included guest molecules can be excited without any complications. For chiral induction to occur during the course of a reaction, the reaction must take place in a chiral environment. In the case of zeolites X and Y, the reaction medium is not chiral. The importance of chiral zeolites in commercial applications has motivated several groups to attempt the synthesis of such zeolites. Success on this front has been very limited. Until a few years ago, no stable chiral zeolites had been reported. Theoretically, many can exist in chiral forms (e.g., ZSM-5 and ZSM-11). The synthesis of a few unstable chiral polymorphs of zeolite beta and of the titanosilicate ETS-10 has been reported [48,49]. Because of their poor stability, none of these is useful as a medium for chiral photoreactions. The most notable exception on this front is a report by Davis and coworkers in 1992. They were able to isolate zeolite beta enriched with polymorph A, which is chiral [50]. Using this as the medium, they were able to obtain the (R,R) diol with an enantiomeric excess of 5% in the ring-opening reaction of trans-stilbene oxide. A breakthrough in the synthesis of an enantiomerically-pure zeolite came almost thirty years after these initial experiments [51]. A few years ago, the Davis group reported the synthesis of a stable chiral zeolite. This publication provides a new thrust to this topic and is likely to change the value of zeolites as media for chiral photochemistry [52]. Keeping this in mind, our early work on chiral photochemistry within achiral zeolites is described in this mini-review. In 1995, the author's laboratory created an asymmetric environment within achiral zeolites by adsorbing chiral organic molecules within the supercages of X and Y zeolites [53,54]. The work described here uses achiral zeolites as the reaction media, but their internal characteristics are modified by the inclusion of a chiral inductor along with the reactant.

Achiral Zeolites Rendered Chiral

In our studies, we employed three approaches to realize chiral induction in photoreactions within zeolites. The first, known as the chiral inductor methodology, involved the adsorption of a pure chiral isomer of a spectator molecule (co-guest) within zeolites. By this process, the nonchiral interior space of the zeolite is rendered chiral [55]. This spectator molecule, as the name implies, does not undergo transformation during irradiation. It only provides a chiral space around the reactant molecule. The nature of the chiral inductor used to modify the zeolite surface and the extent of its interaction with the achiral reactant will determine the magnitude of enantioselectivity observed in the photoproduct. This strategy requires the adsorption of two different molecules, a reactant (R) and a chiral inductor (I), within the zeolite supercage. The possible distribution of organic molecules within the zeolite is represented in Figure 2: cages having two reactant molecules (R) (cages B and D), cages containing two chiral inductors (I) (cage C), cages containing just a reactant molecule (R) (cage E) or a chiral inductor (I) (cage F), and cages containing both a reactant molecule (R) and a chiral inductor (I) (cage A). A similar distribution repeats throughout the zeolite.
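The dilution of enantioselectivity implied by this statistical distribution can be made quantitative with a simple occupancy model. The sketch below assumes, purely for illustration, Poisson-distributed loading of reactant and inductor at 0.5 molecules per supercage and a hypothetical local ee of 90% in cages that contain both; only the averaging logic, not the numbers, reflects the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cages = 100_000
mean_R, mean_I = 0.5, 0.5  # assumed average loading per supercage

R = rng.poisson(mean_R, n_cages)  # reactant molecules per cage
I = rng.poisson(mean_I, n_cages)  # chiral inductor molecules per cage

product_forming = R > 0                  # cages that can yield product at all
cage_A_like = product_forming & (I > 0)  # reactant and inductor co-included
frac_A = cage_A_like.sum() / product_forming.sum()

ee_local = 0.90                          # hypothetical ee in an R + I cage
ee_observed = frac_A * ee_local          # reactant-only cages give racemic product
print(f"{frac_A:.1%} of product-forming cages hold an inductor -> expected ee {ee_observed:.1%}")
```

In this toy model roughly 40% of the product-forming cages contain an inductor, so even a high local ee is averaged down to a modest observed value, which is the averaging argument developed next.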
Since, for chiral induction to occur, a reactant has to be close to the chiral inductor, only cages containing both the reactant molecule (R) and the chiral inductor (I) will yield appreciable enantioselectivity. Cages containing only the reactant molecule (cages B, D, and E) will give a product with no enantioselectivity. Hence, the observed enantioselectivity will be an average of the enantioselectivity observed from cages A, B, D, and E. The extent of enantioselectivity in cage A, as stated above, will depend on the nature of the chiral inductor and its interaction with the reactant molecule, which, in turn, depends on the size of the charge-compensating cations. Thus, the choice of chiral inductor and cations is critical in achieving respectable chiral induction in a photochemical reaction.

Chiral Inductor as an Active Co-guest: Photoreduction Prompted by Electron Transfer

In the approach described above, there is very little control over the distribution of the reactant and the chiral inductor. Obviously, without a clear understanding of the factors that control the distribution (Figure 2), it is hard to predict the outcome of chiral induction in this medium. Since the goal in the initial stages of this study was to establish the feasibility of using achiral commercial zeolites as media for chiral induction, a reaction that would occur only if the reactant is next to a chiral inductor was chosen as the probe reaction. This choice restricts the reaction to cage A in Figure 2. To fulfill this criterion, we chose the amine-induced photoreduction of carbonyl compounds (Scheme 1) [56-58]. This reaction is prompted by an electron transfer from an amine to an excited ketone. If the amine is chiral, it can also serve as a chiral inductor, in addition to being an electron donor. Carbonyl radical anions generated by electron transfer would abstract a proton and yield an optically-active alcohol [59]. Normally, in the absence of a chiral bias, a racemic mixture of the alcohol would be produced. However, when the generated carbonyl radical anion intermediate is adjacent to a chiral inductor, there is a good possibility of a preference for proton abstraction at one of the two prochiral faces of the ketone. To test this hypothesis, we examined the photoreduction of phenyl cyclohexyl ketone 1, which normally undergoes an intramolecular Type II reaction in solution. However, in the presence of an electron transfer agent, it could undergo photoreduction to give 3. In contrast to isotropic solution, within a zeolite in the presence of chiral amines such as ephedrine, pseudoephedrine, or norephedrine, the intermolecular reduction product, α-cyclohexyl benzyl alcohol 3, was the main product (Scheme 1). The chiral amine is indeed the electron transfer agent, as revealed by the dependence of the ratio of intermolecular reduction product vs. Type II product (3/2) on the nature of the amine (primary, secondary, or tertiary). Amongst the various amines used, tertiary amines functioned better as electron donors, and yielded larger amounts of the intermolecular reduction product. The ee values obtained with various amines are listed in Scheme 2. Amongst the various chiral inductors listed, norephedrine, possessing a primary amine functionality, worked best as a chiral inductor (ee 68%). Norephedrine is the source of the chiral induction, as confirmed by using the antipode of norephedrine (SR vs. RS).
It is important to note that under similar conditions in solution, no chiral induction was obtained in the reduction product. The generality of this chiral induction within the zeolite was established with a number of aryl alkyl and diaryl ketones (15 in total; Scheme 3), as well as chiral inductors (Scheme 2). This example served as the proof of principle of the viability of chiral induction within achiral zeolites. However, the % ee was not high, and the overall process was not synthetically useful. Yet, this is the first example of chiral induction in a photoreaction within a zeolite, and also possibly the first photoreaction with high chiral induction.

Scheme 1. Products of electron-transfer-mediated reduction and γ-hydrogen abstraction of phenyl cyclohexyl ketone.

Scheme 2. Enantiomeric excess (ee) obtained in the case of product 3 from 1 with various chiral inductors within NaY zeolite.

Scheme 3. Enantiomeric excess (ee) obtained in the case of various aryl alkyl ketones within NaY zeolite with the chiral inductor giving the best ee's.

Chiral Inductor as a Passive Co-guest: Chiral Induction on Photoproducts within Zeolites

In the method described in the preceding section, the chiral inductor initiated the reduction process by getting involved in electron transfer. This required the chiral inductor and the reactant to interact closely. In this section, we provide examples where the chiral inductor acts only as a chiral inductor and does not get involved in the reaction at any stage. Thus, there is only a physical interaction between the chiral inductor and the guest; there is no chemical interaction. In these examples, the confinement provided by the zeolite medium and the weak interaction between the charge-compensating cations and the guest force an interaction between the achiral reactant and the chiral inductor. The moderate optical induction obtained in these examples shows that even achiral zeolites modified with chiral inductors could serve as a valuable reaction medium for photoreactions that involve the transformation of achiral reactants to chiral products. The examples provided below not only illustrate that this approach works well with selected systems, but also provide encouragement for future studies on this topic.

Photocyclization of Tropolones

Tropolone alkyl ether (4), upon excitation, undergoes a 4π-electron disrotatory ring closure. As shown in Scheme 4, opposite optical isomers are obtained upon disrotation, which could occur in two different ways (5a and 5b). When the molecule is adsorbed on the surface of a zeolite, one mode of rotation will be restricted, leaving the other to dominate; in principle, this could lead to chiral induction. However, since tropolone could adsorb through either enantiotopic face of the molecule, even if one mode of rotation is restricted, equal amounts of the two enantiomers of the products would be obtained by rotation through one mode from both enantiotopic faces (Scheme 4). Thus, adsorption on a surface alone is not expected to bring about enantioselectivity. Most important is to encourage a tropolone molecule to adsorb on the zeolite surface through only one prochiral face. This is likely if the surface is artificially altered to be chiral. This was achieved by adsorbing optically-pure chiral molecules within achiral zeolites. Scheme 4 illustrates the means by which a chiral inductor present on a surface may direct the mode of adsorption of tropolone alkyl ether.
A surface that can hold the chiral inductor as well as the reactant tropolone alkyl ether firmly, in only one fashion, is needed to achieve the desired goal. In this context, zeolitic cations are expected to interact strongly with chiral inductors, and thus present them in certain geometries to the reactant molecule [60,61]. The results presented in Schemes 5-7 show that the electrocyclization can proceed in a stereoselective fashion within zeolites [62-65]. The maximum ee in the case of 6 and 7 (Scheme 5) was obtained with bifunctional chiral agents, suggesting that multipoint interaction is essential for chiral induction to occur within zeolites. This suggested the need to identify bifunctional chiral inductors to modify the interior of zeolites. It is important to note that the extent of chiral induction depends on the nature of the cations (Scheme 7) [65]. Apparently, weak interactions between the cation, the chiral inductor, and the reactant play a crucial role in the chiral induction process within zeolites. This is also evident from the results observed with wet and dry zeolites [63]. The inability of hydrated cations to anchor the tropolone alkyl ether to the zeolite surface is reflected in the decreased ee. Figure 3 provides a model that should help one visualize the mechanism of chiral induction within zeolites [63]. The recognition points, in most cases, are likely the hydroxyl, amino, and aryl groups of the inductor, the cations of the zeolite, and the carbonyl and methoxy groups of the tropolone alkyl ether. The fact that the extent of chiral induction (% ee) and its direction (i.e., which isomer is enhanced) depend on the nature of the zeolite, X vs. Y (X and Y differ only in the number of cations within a supercage), and on the cation suggests that the presence of smaller cations like Li+, Na+, and K+, and the absence of water molecules, are essential to the chiral induction process (Scheme 7).

Scheme 5. Enantiomeric excess obtained with various chiral inductors during photocyclization of tropolone alkyl ether within NaY zeolite [63].

Photocyclization of Pyridones

The photobehavior of N-alkylpyridones (8) within zeolites provides further support to the claim that a chirally-modified zeolite is a very useful medium for obtaining chirally-enriched products from achiral reactants (Scheme 8). Achiral pyridones, upon irradiation, undergo intramolecular 4π disrotatory photocyclization, similar to tropolones, to yield chiral β-lactams (9a and 9b). As in the case of tropolones, controlling the direction of the photochemical ring closure should result in asymmetric induction in the photoproduct. Various methodologies have been reported in the literature for conducting asymmetric photocyclization of pyridones. The maximum enantioselectivity in solution (~20% ee) was achieved by Bach et al. in the presence of a chiral host [66]. The author's group succeeded in employing a chirally-modified zeolite as a medium to obtain chirally-enriched products from pyridones via photocyclization [67]. Of the three examples listed in Scheme 9, two yield cyclized products with moderate enantioselectivity (ee > 50%). In isotropic media, the two modes of cyclization are equally probable for both tropolones and pyridones. To obtain stereoselectivity, it is necessary to exert control over the mode of cyclization. As in the case of tropolones, the chiral inductor within a zeolite helps the pyridone molecule to adsorb on the surface from one of the two enantiotopic faces (Schemes 4 and 8).
The fact that the reaction, when performed within a zeolite without a chiral inductor, gives a racemic mixture underscores the important role played by chiral inductors in achieving moderate enantioselectivity. Even though the cation and the zeolite framework help to control the observed stereoselectivity, the stereoselectivity is not quantitative, stressing that more needs to be understood about the zeolite-based chiral inductor strategy.

Photoisomerization of 1,2-Diphenylcyclopropanes

As outlined in Section 2, the photoisomerization of 1,2-diphenylcyclopropane is the first photoreaction that showed promise for the feasibility of chiral induction in solution [25]. cis-1,2-Diphenylcyclopropane is optically inactive due to the presence of a plane of symmetry in the molecule. However, since this plane of symmetry is absent in trans-1,2-diphenylcyclopropane, the latter is optically active. The photoisomerization of optically inactive cis-1,2-diphenylcyclopropane to its optically active trans form can be brought about by direct and triplet-sensitized irradiation as well as by electron-transfer sensitization [25,68,69]. In the examples discussed in this section, the chiral inductor that induces enantioselectivity is adsorbed on the zeolite interior surface to provide a 'local chiral environment', and is not linked to the reactant through either covalent or ionic bonds. Asymmetric induction ensues as a result of the close proximity between the reactant and the chiral inductor within the confined space of the zeolite supercage. For this method to work effectively, the chiral inductor must interact with the reactant and the cation via non-covalent bonding. In this context, the photoisomerization of 2,3-diphenylcyclopropane-1-carboxylic acid derivatives (10 and 11) within zeolites was investigated (Scheme 10). In the two reactions listed in Scheme 10, low but significant ee was obtained with several chiral inductors [70-72]. For example, the ethyl ester, whose photoisomerization in an isotropic medium yields a racemic mixture of the corresponding trans-isomer, gave the same isomer in 17% ee within NaY chirally modified with optically-pure cyclohexylethylamine. The use of the optical antipode of the chiral inductor gave the opposite enantiomer to the same extent, as expected, indicating that the system is well behaved inside the zeolite. It is encouraging that, unlike solution, the chirally-modified zeolite was able to bring about some degree of chiral discrimination. Further work is needed to understand the reasons for the low ee. It may be of interest to note that these systems show high diastereomeric excess by the chiral auxiliary strategy within zeolites. Within a chirally-modified zeolite, the isomerization could thus be stereoselective.

Oxa-di-π-Methane Rearrangement of Cyclohexadienones

As illustrated in Scheme 11, 6,6-dimethyl-2,4-cyclohexadienone (12) undergoes the oxa-di-π-methane rearrangement to give a bicyclic product. In this reaction, the chirality is induced into the product during the first step of the reaction. In the species formed from the triplet of the reactant, in which radicals are centered at the oxygen and the tertiary carbon, the cyclopropane ring can be either above or below the plane of the paper. Similar to the tropolones and pyridones discussed above, the cyclic dienones do not bind to the zeolite cavities preferentially through one particular face, and so in the absence of any external chiral reagent, the photoreaction yields racemic products. This is also the case in solution.
The irradiation of 12 included in zeolite NaY (hexane slurry) gave the oxa-di-π-methane rearranged isomer as the sole product, with equal amounts of both enantiomers (13a and 13b). In solution, even in the presence of a chiral inductor, ephedrine, a racemic product mixture resulted. However, the irradiation of a hexane slurry of the above compound included in dry (−)-ephedrine-modified NaY made the product enantiomerically enriched to the extent of 30 ± 3% [67,72,73]. As expected, the optical antipode (+)-ephedrine gave the opposite enantiomer in 28 ± 3% excess. Among the various chiral inductors examined, pseudoephedrine gave respectable ee ((+) isomer 26% and (−) isomer 24%), whereas all others (menthol, valinol, methylbenzylamine, norephedrine, and diethyl tartrate) yielded the product in less than 20% ee. Variation of the irradiation temperature had a distinct effect on the ee: with (−)-ephedrine as the chiral inductor, the ee at −55 °C was 49%, while at 100 °C it was 7%.

Scheme 11. Products of photorearrangements (geometric isomerization and oxa-di-π-methane rearrangement) and ee obtained within NaY in the presence of a chiral inductor.

Norrish-Yang Photocyclizations

The generality of chiral induction within chirally-modified zeolites was tested with the classic Norrish Type II γ-hydrogen abstraction reaction. In this reaction, the reactant is achiral and the Yang product (cyclobutanol) is chiral. Examples of this reaction are provided in Scheme 12. In all cases, the products in solution (even in the presence of chiral inductors) are racemic. Enantiomeric excess in the range of 30% is routine within chirally-modified NaY zeolites [60,74,75]. The examples presented in this section show that chiral inductors function better within a zeolite than in solution.

Summary

In this review, we have described chiral induction in a variety of photoreactions of achiral molecules within achiral zeolites with the help of chiral inductors. The approach outlined here has employed readily-available and inexpensive zeolites for this purpose. The achiral nature of the commercially-available zeolites was overcome by modifying them with chiral inductors. The examples demonstrate that such chirally-modified zeolites could serve as a chiral medium with which to achieve low-to-moderate enantiomeric excess in photochemical reactions [55]. In this review, we described, with examples, the effectiveness of zeolites in bringing about chiral induction in products from achiral reactants. In these systems, the chiral inductor is not linked to the reactant; the achiral reactant and the chiral inductor are two independent molecules. Cations within zeolites serve as connectors to bring the chiral inductor and the reactant closer, and thus to differentiate between the two prochiral faces of the reactant. In addition to these, we have also examined a number of systems where the chiral perturber is covalently linked to the reactant at a remote site. In such systems, the chiral inductor is known as a chiral auxiliary. The effectiveness of the chiral auxiliary was enhanced more within zeolites than in solution. One such example, shown in Figure 4, illustrates the power of a zeolite in chiral photochemistry. In this reaction, a diastereomeric excess (de) as high as 90% is achieved within zeolites, while in solution the de was zero. For additional examples, please consult the listed references [55,60,65,67,70,72-82].
Thus, the value of achiral zeolites in bringing about asymmetric induction in photoreactions of achiral molecules, and of molecules appended with chiral auxiliaries, has been established. The approach described here is likely to gain momentum with the ready availability of the exciting, recently reported synthesis of an enantiomerically-enriched zeolite.
The origins and early development of the ILAE/IBE/WHO global campaign against epilepsy: Out of the shadows

Abstract

The International League Against Epilepsy (ILAE)/International Bureau for Epilepsy (IBE)/World Health Organization (WHO) Global Campaign Against Epilepsy was launched in Geneva and Dublin in the summer of 1997. The second phase of the Campaign was launched by a major event in Geneva, led by WHO Director General Dr. Gro Harlem Brundtland, in February 2001. Since then, the Campaign has been gathering momentum around the world, culminating in the WHO General Assembly Resolution (WHA 68.20) on Epilepsy in May 2015, supported by 194 countries. Recently, the World Federation of Neurology and other neurological non‐governmental organizations (NGOs) have joined forces with the Epilepsy Campaign, leading to the WHO General Assembly Resolution (WHA 73.10) in May 2022 promoting a 10‐year Intersectoral Global Action Plan (IGAP) for Epilepsy and Other Neurological Disorders. I was privileged to serve as the first Chairperson of the Global Campaign Against Epilepsy, and this year all my documents and correspondence relating to the Campaign have been delivered to the Wellcome Collection in London. These are the basis for this detailed account of the origins and early development of the Campaign. I describe the events leading to the birth of the concept, planning for the Campaign, the launch, development, and the achievements of phase one. This first phase focused on awareness raising, education, and involvement, especially within WHO, ILAE, and IBE, including a series of five Regional Public Health meetings and Declarations on Epilepsy. In 1999, the WHO raised the status of the Campaign to the highest level, the first ever for a Non‐Communicable Disease, resulting in the high-profile launch of phase two in 2001, paving the way to the continuing global momentum and achievements, including the 2015 and 2022 WHO Resolutions.

| INTRODUCTION

The International League Against Epilepsy (ILAE)/International Bureau for Epilepsy (IBE)/World Health Organization (WHO) Global Campaign Against Epilepsy was launched in Geneva and Dublin in the summer of 1997. The strategic aims of the Campaign included:
1. To raise public, political, and professional awareness and understanding of epilepsy as a universal treatable brain disorder.
2. To identify the needs of people with epilepsy on a global, regional, and national basis and to encourage Departments of Health to develop their own national Campaigns to promote the prevention, diagnosis, treatment, services, and care of people with epilepsy. 1

In February 2001, the new Director General of WHO, Dr. Gro Harlem Brundtland, led the launch of the second phase in Geneva, having raised the status of the Campaign to the highest WHO level, the first ever for a Non-Communicable Disease. 2 Since then the Campaign has been gathering momentum around the world, including Demonstration Projects in developing countries, notably China. In May 2015, the General Assembly of WHO unanimously approved Resolution WHA 68.20, which urges all member states to develop national health care plans for epilepsy management, particularly in low- and middle-income countries.
3,4 More recently, the World Federation of Neurology (WFN) and other neurological non-governmental organizations (NGOs) have joined forces with the Epilepsy Campaign, and in May 2022 the General Assembly of WHO unanimously approved Resolution WHA 73.10 promoting a 10-year Intersectoral Global Action Plan (IGAP) for Epilepsy and Other Neurological Disorders. 5,6 What began 25 years ago as a global initiative for people with epilepsy has now evolved into a global action plan that includes people with other neurological disorders. I was privileged to serve as President of ILAE from 1993 to 1997 and as the first Chairperson of the ILAE/IBE/WHO Global Campaign Against Epilepsy from 1997 to 2001. Recently, all documents relating to my Presidency and Chairmanship have been gathered up for storage and future access at the Wellcome Collection in the United Kingdom (SA/ILE/Acc2676. Jan 2023). This has given me the opportunity to review the origins and early development of the Global Campaign based on these documents covering the period 1993-2001.

| THE BIRTH OF THE CONCEPT

The ILAE and IBE are two among several neurological/neuroscience NGOs affiliated to WHO. As President of ILAE, I attended my first two annual meetings of these NGOs with WHO in Geneva in December 1993 and 1994. It seemed to me that these were merely forums for the exchange of information and for WHO to ask for some help and advice. No one asked the WHO for help or advice. It occurred to me that the potential existed for a much more active and productive relationship with WHO, and that a partnership between the professional (ILAE), the public/patients (IBE), and the political (WHO) could be a powerful one for addressing the needs of people with the common, universal, hidden, neglected, and stigmatized brain disorder of epilepsy. Therefore, on May 27, 1994, I wrote to the new Director of the Division of Mental Health at WHO, Jorge Alberto Costa e Silva, asking if I could meet with him in Geneva to discuss ILAE/WHO relationships in more detail. Jorge Alberto Costa e Silva readily agreed, but the meeting was postponed three times in 1994/1995. I eventually went to Geneva to meet him on January 16, 1996. I brought with me Pierre Jallon, whom we had appointed Chairperson of the ILAE/IBE Commission on Developing Countries, not only because of his commitment to developing countries but also because he was based at the University of Geneva in close proximity to WHO. Jorge Alberto Costa e Silva brought with him, as expected, Leonid Prilipko, Head of the Section of Neuroscience within his Department, and also, unexpectedly, Shichuo Li, whom I did not know. I was delighted to learn not only that Shichuo Li had a personal interest in and commitment to people with epilepsy in China, but also that he was the current Chairman of the Executive Board of WHO. There was a considerable meeting of minds that day, on which the concept of the ILAE/IBE/WHO Global Campaign Against Epilepsy was born. I flew home, much encouraged, to sell the idea to the League and the Bureau, while Jorge Alberto Costa e Silva and Shichuo Li set about seeking WHO support.

| PLANNING THE CAMPAIGN

The need for a Global Campaign for the huge, hidden, and neglected global problem of epilepsy was readily understood by the Executives of ILAE and IBE, especially as recent studies had confirmed that the treatment gap in developing countries varied between 60% and 98%, especially in rural areas, and that a cheap and largely effective medication, that is, phenobarbitone, was theoretically available.
Some in IBE had reservations about a partnership with the unfathomable and bureaucratic WHO, especially if there were any financial demands, the IBE's finances being much less substantial than those of the ILAE. A few in both Executives questioned whether a Division of Mental Health was an appropriate vehicle for a Campaign for a brain disorder, albeit that many people with epilepsy had additional mental health issues. I pointed out that a joint fundraising campaign would be mounted and that epilepsy is a bridge between neurology and psychiatry, as well as a window on brain and psychological function. Furthermore, the Epilepsy Campaign could be a model for other neurological NGOs.

At a meeting of the Executive and Long Range Planning Committees of the ILAE in London in November 1994, I persuaded the ILAE to invest for the first time in a part-time public relations officer within a joint Public Relations Committee of ILAE and IBE. Don Whiting of Harrison Cowley UK was appointed. This Committee met for the first time in London on April 04, 1996, when ideas for a global awareness-raising Campaign, including key messages and promotional events, were first discussed.

On May 24, 1996, Pierre Jallon and I met again in Geneva with Jorge Alberto Costa e Silva, Leonid Prilipko, and Shichuo Li, this time with Don Whiting and Chris Powell from the Public Relations Department of WHO, when further details of a global awareness-raising campaign were discussed and developed.

On June 06-07, 1996, an International Workshop on Epilepsy in Developing Countries was organized in Geneva by Pierre Jallon with the financial support of ILAE and WHO. The 40 delegates included representatives from Africa (Ethiopia, Senegal, South Africa, Togo, Tunisia), South America (Brazil, Colombia, Uruguay, Ecuador, Venezuela), Asia (India, Indonesia, Pakistan, Sri Lanka, the Philippines), China, and Eastern Europe (Russia, Slovenia, Turkey). Others participating included the members of the Developing Countries Commission; the Chairpersons of the ILAE Commissions on Antiepileptic Drugs, Epidemiology, Education, Economics, and Tropical Diseases; the President of IBE, Hanneke de Boer; and three representatives of WHO (Leonid Prilipko, Aleksandar Janca, C. Liana Bolis). The Workshop focused on the epidemiology, diagnosis, treatment, care, and needs of people with epilepsy in different countries and regions of the developing world as a basis for global, regional, and national actions within the proposed Global Campaign. The proceedings were published in Epilepsia.8

Immediately following the Workshop, an International Advisory Committee (initially referred to as a "Task Force") met for the first time on June 10, 1996 in Geneva to begin planning the objectives, policies, messages, and organizational details of the Campaign prior to a formal launch in 1997. The Committee consisted of officers of WHO, ILAE, and IBE, including Leonid Prilipko for WHO, Hanneke de Boer for IBE and myself for ILAE, together with Pierre Jallon, several ILAE/IBE Commission chairpersons, Don Whiting and Igor Rozov, the latter replacing Chris Powell from WHO Public Relations. The Committee identified several events over the forthcoming 12 months which could serve as platforms for promoting and planning the Campaign (Table 1). The Committee met again for the second time on September 01, 1996, at the time of the first announcement, press conference and press release of the Campaign at the ILAE European Congress of Epileptology in The Hague, Netherlands. This was immediately followed by a further announcement and press conference at the first Asian and Oceanian Congress of ILAE/IBE in Seoul, South Korea on September 06, 1996.

TABLE 1 Major platforms for announcing and raising awareness of the ILAE/IBE/WHO global campaign against epilepsy, September 1996-July 1997.
September 01-05, 1996: ILAE European Regional Congress (The Hague)
September 06-07, 1996: ILAE/IBE Asian and Oceanian Regional Congress (Seoul)
December 06-09, 1996: 50th anniversary meeting of the American Epilepsy Society in association with the Epilepsy Foundation of America (San Francisco)

At the same time, all ILAE and IBE chapters were informed of the concept and planning for the Campaign and were asked to complete a questionnaire on their interest, experience, and willingness to participate. It was emphasized that the ultimate objective of the Campaign was to stimulate, encourage, and support their own Departments of Health to develop their own national Campaigns involving professionals, public, and politicians, according to local needs and resources.

The Advisory Committee met for the third and final time on December 06, 1996 in San Francisco at the time of the 50th anniversary meeting of the American Epilepsy Society in association with the Epilepsy Foundation of America. At this event, the IBE Executive formally agreed to be a partner in the Campaign, together with ILAE and WHO, and President Hanneke de Boer became a dedicated advocate. Thereafter, Hanneke de Boer, Leonid Prilipko, and I served as a Management Committee/Secretariat answerable to the ILAE and IBE Executives and to the Division of Mental Health at WHO as we finalized the details for an agreed launch and action plan for the Campaign in Geneva and in Dublin at the time of the 23rd ILAE/IBE International Congress in the summer of 1997. This involved regular day trips by Hanneke and myself to Geneva, both before and after the launch of the Campaign.

In the months prior to the Launch, much time and effort were spent in developing Campaign information, brochures, videos, articles for medical journals,9-11 a logo, and press releases in collaboration with public relations advisors, Igor Rozov (WHO) and Don Whiting (ILAE/IBE), all designed to bring epilepsy "Out of the Shadows." The Campaign information included a mission statement and objectives (Table 2), as well as summaries of the history, etiology, epidemiology, prognosis, social consequences, economics, and scientific and medical advances of epilepsy (WHO Fact Sheets 165-168).

TABLE 2 Mission and objectives of the ILAE/IBE/WHO global campaign against epilepsy.
MISSION: To improve acceptability, treatment, services, and prevention of epilepsy worldwide.
OBJECTIVES:
1. To increase public and professional awareness of epilepsy as a universal, treatable brain disorder.
2. To promote public and professional education about epilepsy.
3. To change attitudes, dispel myths, and raise epilepsy on to a new plane of acceptability in the public domain.
4. To identify the needs of people with epilepsy on a national, regional, and global basis.
5. To encourage governments and departments of health to develop their own national campaigns to improve prevention, diagnosis, treatment, care, services, and public attitudes.

The WHO alerted all its six Regional Offices, that is, AFRO in Harare, EMRO in Alexandria, EURO in Copenhagen, PAHO in Washington DC, SEARO in New Delhi and WPRO in Manila. The ILAE and IBE planned to develop their Regional structures, which at that time were only fully developed in Europe.

| THE LAUNCH OF THE GLOBAL CAMPAIGN
The "Out of the Shadows" Campaign formally launched in Geneva on June 19, 1997 at a press conference attended by officers of the League, the Bureau and the Division of Mental Health of WHO. Interestingly, it was not attended by the then Director-General of WHO, Dr. Hiroshi Nakajima, but it was supported by the Assistant Director-General, Dr. Fernando Antezena. Also supporting were John Bowis MP, OBE, a Minister of Health in the UK Government, and Congressman Tony Coelho, Chairman of the US Presidential Committee on the Employment of People with Disabilities, who earlier had initiated the Americans with Disabilities Act. In retrospect, this was a relatively low-key launch of phase 1 of the Campaign, in comparison with the much bigger launch of the second phase of the Campaign with the new Director-General of WHO, Dr. Gro Harlem Brundtland, in Geneva in February 2001.1,2 However, the Campaign was also launched with more style and visibility 3 weeks later, on July 07, 1997, at the 23rd ILAE/IBE International Congress in Dublin, now supported by Irish President Mary Robinson, all the officers of the League and Bureau, together with Jorge Alberto Costa e Silva and Leonid Prilipko of WHO, and many of the approximately 5000 delegates at the Congress. On this occasion a Symposium was held on "The Politics of Epilepsy" with contributions from John Bowis, US Congressman Tony Coelho, Irish Senator Joe Doyle and Irish MEP Mary Benotti.

| EARLY DEVELOPMENT: THE FIRST FOUR YEARS
The achievements of phase 1 are summarized in Table 3, but here I will describe these in more detail and how they came about.

Following the launch of the Campaign in 1997, major changes in the structure and function of WHO were initiated in 1998 by the new Director General, Dr. Gro Harlem Brundtland, which considerably facilitated the Epilepsy Campaign. For the first time ever Dr. Brundtland, a Norwegian physician, gave equal priority to Non-Communicable Diseases, including Mental Health, and to Communicable Diseases, hitherto WHO's highest priority. This led to new Sections or "Clusters" within the above two priorities. The Division of Mental Health, which was now headed by Benedetto Saraceno, who replaced Jorge Alberto Costa e Silva, was now a sub-division of a new Section entitled "Social Change and Mental Health," led by Y. Suzuki of Japan. On December 07, 1998, Leonid, Hanneke and I had a very encouraging meeting in Geneva with Y. Suzuki and Benedetto Saraceno, encouraged by Shichuo Li, who by then was an Assistant Director General. Y. Suzuki suggested that the Epilepsy Campaign could be greatly boosted by developing a "Cabinet Paper" which, if approved by the Cabinet, the new highest decision-making structure of WHO, would raise the status of the Campaign to the highest WHO level with Director-General support. Throughout 1999, Leonid, Hanneke, and I, with the support of the Division of Mental Health, together with input from the Executives of ILAE and IBE and the International Consultative Committee, worked tirelessly to develop the Cabinet Paper. With this in mind the Consultative Committee met in Geneva on April 26-27, 1999, together with the Regional Advisors for Mental Health in AFRO (Custodia Mandlhate) and PAHO (Itzhak Levav), to promote sustainable action plans for addressing the treatment gap and the needs of people with epilepsy in developing countries, including the concept and planning of Demonstration Projects in different Regions to guide and encourage local national initiatives in those Regions, including China (WHO/MSD/MBD/00.11). The Proceedings of that meeting were also published by WHO (WHO/MHH/ND/99.3) and fed into the Cabinet Paper.

Earlier, in the autumn of 1998, as a result of the initiative of ILAE Secretary General Peter Wolf, a Conference was held in Heidelberg on October 24-25, with the financial support of the Federal German Ministry of Health, on the theme of "Epilepsy as a Public Health Problem in Europe." Over 100 leading professional and lay delegates from chapters and organizations in almost every country in Europe, including Russia, supported by Wolfgang Rutz, Regional Mental Health Adviser for Europe, and John Bowis MEP, now a leading spokesperson for public health in the European Parliament, highlighted the needs of 6,000,000 people with epilepsy in Europe. This led to the European Declaration on Epilepsy, which called on the Governments of Europe, including the European Parliament and all healthcare providers, to take strong and decisive action to meet the objectives of the Global Campaign Against Epilepsy.12

The main thrust of phase one was in raising awareness, acceptance, and involvement, not least in the League, the Bureau and WHO. With this in mind, throughout 1998 and 1999 the Management Committee was in regular contact with the ILAE/IBE Chapters. By November 1999, 50 ILAE/IBE Chapters had informed the Committee that they had begun or were in the process of developing their own awareness-raising or political initiatives; this too fed into the Cabinet Paper. At the same time, the League and Bureau Executives were planning Regional structures, beginning with the ILAE Asian and Oceanian Commission. The business case for the Campaign was also being developed with the guidance of Walt Schaw, a US Business Consultant, temporarily employed by the ILAE. In order to ensure maximum support within WHO, the Management Committee had several meetings in Geneva in 1999 with the Heads of potentially important related Sections or Clusters, including "Child and Adolescent Health and Development," "Essential Drugs," "Resource Mobilization," "Non-Communicable Diseases," and "Public Relations." Earlier, in December 1998, the WFN and all neurological/neuroscience NGOs had given their support to the Campaign.

On December 03, 1999 the Cabinet Paper was approved by WHO, giving an enormous boost to the Campaign. This paved the way for a new major launch of the Campaign, led by the Director General, which took place on February 12, 2001 with a new, more ambitious goal to improve treatment, care, prevention, and social acceptance of epilepsy worldwide, including a 4-year Action Plan involving Demonstration Projects to promote national initiatives, including China, encouraged by the Regional Offices of WHO. The Cabinet approval also facilitated four further Regional Conferences on Public Health Aspects of Epilepsy, which led to Regional Declarations on Epilepsy in Africa (Senegal) on May 06, 2000, Latin America (Chile) on September 06, 2000, Asia and Oceania (India) on November 13, 2000 and North America (Washington) on December 01, 2000. In all, 1200 experts from more than 100 countries participated in these Regional Conferences and Declarations.12 On March 22, 2001, John Bowis MEP presented a "White Paper" on Epilepsy to the European Parliament based on the European Declaration.13

| THE LAUNCH OF PHASE 2 AND THEREAFTER
The launch of the second phase of the Campaign in Geneva on February 12, 2001 has been described in detail in a Supplement of Epilepsia and will not be amplified here.14 In summary, it was a much more high-profile event than the initial launch in 1997, now including speeches by Director-General Dr. Gro Harlem Brundtland; the Head of Non-Communicable Diseases, Derek Yach; the Head of Mental Health, Benedetto Saraceno; Presidents Jerome Engel Jr of ILAE and Philip Lee of IBE; Hanneke de Boer and myself; and John Bowis MEP2 (Figure 1). It was attended by all the WHO Regional Health Advisers and the Officers of the ILAE and IBE, together with representatives of their Regional Commissions and representatives of other neurological NGOs. Following the morning launch and press event, a Symposium on "Public Health Aspects of Epilepsy and the Role of the Global Campaign Against Epilepsy" was held in the afternoon.2 The launch, and especially the speech by the Director-General, was a milestone in the social history of epilepsy.2

I am not qualified to describe in any detail the further developments of the Campaign. I am aware, however, how all successive Executives of the League and the Bureau played their part in expanding the momentum and reach of the Campaign, together with Tarun Dua in WHO, who succeeded Leonid Prilipko, who sadly died in 2007. Sadly, in 2015, we also lost Hanneke de Boer, who had succeeded me as Chairperson of the Global Campaign and who had contributed so much to the Campaign.15 I am aware that the 2009-2013 ILAE Executive led by Solomon (Nico) L. Moshé initiated a major boost to the Campaign, with the Presidents of ILAE and IBE now chairing.16 Furthermore, on September 15, 2011, the European Parliament passed the "Written Declaration on Epilepsy," promoted by Irish MEP Gay Mitchell, which led to a considerable increase in European Union funding for epilepsy research in succeeding years.17 I also know how the 2013-2017 ILAE and IBE Executives, led by Emilio Perucca and Athanasios Covanis and supported by WFN President Raad Shakir and his Executive, successfully promoted the 2015 WHO General Assembly Resolution (WHA 68.20)3,4 and how, with the initiative of succeeding Executives, this evolved into the current 2022 WHO IGAP Resolution (WHA 73.10).5,6 I also know that any WHO General Assembly Resolution requires the unanimous or almost unanimous support of all the 194 member states. The 2015 and 2022 Resolutions therefore reflect the degree to which the Campaign has become truly global. Finally, I am aware of the leading role of China and Russia in promoting both the WHO General Assembly Resolutions, the former through Shichuo Li in both China and WHO, and the latter through Alla Guekht, who has had a significant influence, both as an officer of ILAE and WFN.
2023-10-24T06:18:12.354Z
2023-10-23T00:00:00.000
{ "year": 2023, "sha1": "42105c74d7fa1e40da787b1e850db5f3769e1928", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/epi4.12850", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9ffbda36aed79d8dd9ea55705d5919958ec92f38", "s2fieldsofstudy": [ "Medicine", "History" ], "extfieldsofstudy": [ "Medicine" ] }
207894353
pes2o/s2orc
v3-fos-license
Optimising pharmacotherapy in older cancer patients with polypharmacy

Abstract
Objective: Polypharmacy is frequent among older cancer patients and increases the risk of potential drug-related problems (DRPs). DRPs are associated with adverse drug events, drug-drug interactions and hospitalisations. Since no standardised polypharmacy assessment methods for oncology patients exist, we aimed to develop one that can be integrated into routine care.
Methods: Based on the Systematic Tool to Reduce Inappropriate Prescribing (STRIP), we developed OncoSTRIP, which includes a polypharmacy anamnesis, a concise geriatric assessment, a polypharmacy analysis taking life expectancy into account and an optimised treatment plan. Patients ≥65 years with ≥5 chronic drugs visiting our outpatient oncology clinic were eligible for the polypharmacy assessment.
Results: OncoSTRIP was integrated into routine care of our older cancer patients. In 47 of 60 patients (78%), potential DRPs (n = 101) were found. In total, 85 optimisations were recommended, with an acceptance rate of 41%. It was possible to reduce the number of potential DRPs by 41% and the number of patients with at least one potential DRP by 30%. Mean time spent per patient was 71 min.
Conclusions: Polypharmacy assessment of older cancer patients identifies many pharmacotherapeutic optimisations. With OncoSTRIP, polypharmacy assessments can be integrated into routine care.

There is an association between polypharmacy and the occurrence of PIMs (Reis, Santos, Jesus Souza, & Reis, 2017). Both polypharmacy and the occurrence of PIMs are frequently seen in this population, with polypharmacy in up to 84% of patients (Nightingale et al., 2015) and a reported PIM prevalence of around 50% (Nightingale et al., 2015; Reis et al., 2017). The risk of adverse effects may be even more relevant for cancer patients because of the exposure to highly active antitumor therapies and the risk of drug-drug interactions with cancer treatment. Polypharmacy in cancer patients is associated with more grade III-IV chemotherapy-related toxicity (Hamaker et al., 2014). In case of a reduced life expectancy, it is appropriate to consider new goals of treatment, including all co-morbidities. Medication intended for long-term prevention can often be safely discontinued, as was demonstrated for statins (Kutner et al., 2015).

The appropriateness for older cancer patients of generic medication screening tools that exist for older patients has been previously reviewed (Whitman, DeGregory, Morris, & Ramsdale, 2016), such as the Screening Tool of Older Peoples' Prescriptions (STOPP) and Screening Tool to Alert doctors to Right Treatment (START) criteria (Gallagher, Ryan, Byrne, Kennedy, & O'Mahony, 2008), the Beers criteria (American Geriatrics Society Beers Criteria Update Expert Panel, 2015) and the Medication Appropriateness Index (MAI) (Hanlon et al., 1992). While older cancer patients can benefit from applying any of these tools, none of these includes all aspects relevant for this specific population, such as potentially unnecessary medication, the patient's condition and the treatment goals (Whitman et al., 2016).

Medication screening tools specifically designed for cancer patients are sparse. One good example of a practical cancer-orientated tool is the "OncPal deprescribing guideline" (Lindsay et al., 2015), which can be applied in the terminal six months of a patient's life. However, incurable cancer patients on active treatment often have a life expectancy beyond six months, making the tool less applicable to these patients. Another valuable cancer-specific tool is the individualised medication assessment and planning (iMAP) for older outpatient cancer patients (Nightingale et al., 2017). iMAP is a structured assessment including a patient-involved medication assessment and an analysis of medication based on the identification of potential drug-related problems (DRPs). By looking for these potential DRPs, and not only PIMs, iMAP provides a more complete medication assessment, since DRPs include problems such as overtreatment, undertreatment and potential adverse drug events (Nightingale et al., 2017; Strand, Morley, Cipolle, Ramsey, & Lamsam, 1990).

iMAP has many similarities with the Systematic Tool to Reduce Inappropriate Prescribing (STRIP) method. This method is embedded in Dutch primary care guidelines (Dutch General Practitioners, 2012). While STRIP is already commonly used in Dutch primary care, the method can be specified for cancer patients, for example by adding a practical (de)prescribing guide suitable for cancer patients. Therefore, we developed "OncoSTRIP," a polypharmacy assessment method specifically optimised for cancer patients, with the aim to integrate it into routine care of the older cancer patient.

| Setting and study population
The study protocol was designed as an exploratory prospective study. While not yet systematically embedded in the routine care of cancer patients, a polypharmacy assessment using STRIP is considered to be part of regular care in the Netherlands. The institutional review board concluded that the Medical Research Involving Human Subjects Act (WMO) did not apply to the study protocol and that an official ethics approval was not required. Although written informed consent was therefore not necessary, patients were informed by their oncologist/haematologist using a protocol summary before any data were collected. OncoSTRIP was offered to patients ≥65 years with ≥5 chronic medications on active treatment who visited the outpatient oncology/haematology clinic of our community-based hospital between February 2016 and April 2017. Patients were free to decline participation. Patients who agreed to participate were scheduled for the OncoSTRIP method in alignment with their regular visits to the outpatient clinic, infusion centre or outpatient hospital pharmacy.

| OncoSTRIP method
With OncoSTRIP, the patients followed a structured stepwise polypharmacy assessment, which consecutively consisted of four individual components, described in detail in the following sections.

| Polypharmacy anamnesis
The goal of the polypharmacy anamnesis step was to collect all relevant information on the patients' medication use. Prior to the anamnesis visit with the patient, the pharmacist collected relevant background data, such as medical history and medication use according to the hospital and/or community pharmacy records. For the polypharmacy anamnesis visit, a structured questionnaire was used (Figure S1), in which the oncology drugs, supportive drugs, prescription drugs and possible over-the-counter drugs were discussed with the patient by a pharmacist. The following aspects were included: type of drug, dose, indication, date of start, initial prescriber, effect, adverse drug effects, practical problems (including compliance) and, if relevant, extra information on medical history.
To allow shared decision-making, patients were asked which drugs they were willing to discontinue and which they highly valued.

| Concise geriatric assessment
In parallel to the polypharmacy anamnesis, a nurse specialist or oncology nurse performed a concise geriatric assessment with the patient. The concise geriatric assessment consisted of the scoring systems Adult Comorbidity Evaluation-27 (ACE-27) (Piccirillo, Tierney, Costas, Grove, & Spitznagel, 2004), Eastern Cooperative Oncology Group performance status (ECOG-PS) (Oken et al., 1982) and Geriatric-8 (G8) (Bellera et al., 2012), to evaluate comorbidity, performance and frailty, respectively. Comorbidity, performance status and frailty are essential determinants of the treatment options, prognosis and goals of care, and therefore these were factors to consider when making the treatment plan.

| Polypharmacy analysis
The pharmaceutical analysis was structured by the evaluation of eight potential DRPs: requirement of additional drug therapy, unnecessary drug therapy, ineffective treatment, (potential) adverse effects, clinically relevant contraindications or interactions, underdosing, overdosing and practical drug use problems/optimisations. PIMs were identified by our newly developed "OncoSTRIP list of drugs suitable for deprescribing in older cancer patients" (Table S1) and categorised within the potential DRPs. This deprescribing checklist was based on the STOPP criteria (Gallagher et al., 2008), the Beers criteria (American Geriatrics Society Beers Criteria Update Expert Panel, 2015), the "OncPal deprescribing guideline" (Lindsay et al., 2015), the "Checklist for symptom stability after withdrawing medicines" (Potter, Flicker, Page, & Etherton-Beer, 2016) and available literature. Besides these explicit criteria, potential DRPs were also identified through the expertise of the clinical pharmacist and treating physician. If necessary, initial prescribers were contacted for further information.

| Polypharmacy treatment plan
After the analysis, the pharmacist's recommendations were reported in the patient's electronic medical record to the treating oncologist/haematologist for review. Upon agreeing with the recommendations, the treating physician discussed the intended medication adjustments with the patient.

| Outcomes
Outcomes were the prevalence of potential DRPs and the proportion of pharmacotherapeutic recommendations. Furthermore, the acceptance rate of the recommendations was evaluated by reviewing patients' electronic medical and/or pharmacy records directly after the patient's consultation with the treating physician, and after a median follow-up period of four months. Time invested in the different steps of the polypharmacy assessment was recorded as well. Finally, with univariate analyses (Fisher's or Fisher-Freeman-Halton exact test, statistical significance at p < .05), the outcomes of the concise geriatric assessment were tested for the prediction of the occurrence of recommendations, to identify patients most likely to benefit from polypharmacy assessment. For the statistical analyses, IBM SPSS version 21 was used.

| Patient characteristics
None of the patients declined to participate in this study. Characteristics of the 60 patients that underwent a polypharmacy assessment are summarised in Table 1; on average, nine of the drugs used per patient were other chronic drugs (range 4-20). The most commonly used chronic drugs were for the treatment of cardiovascular, lipid and/or gastrointestinal disorders (Table 2).

TABLE 2 Most commonly used chronic drugs according to their pharmacologic category, with the exception of the oncology drugs.
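As a concrete illustration of the univariate analysis described under Outcomes, the following minimal Python sketch runs a 2x2 Fisher exact test of a geriatric screening category against the occurrence of recommendations. The counts are invented for the example and are not taken from the study, which performed its analyses in IBM SPSS version 21.

```python
# Hypothetical 2x2 table: G8 "vulnerable" status vs >=1 recommendation.
# Counts are illustrative only, not the study's data.
from scipy.stats import fisher_exact

table = [
    [20, 5],  # vulnerable:     with recommendations / without
    [27, 8],  # not vulnerable: with recommendations / without
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```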
| Optimisation recommendations
In total, 101 potential DRPs were found among 47 of 60 patients (78%), resulting in a mean of 1.7 per patient. As shown in Table 4, the three most commonly found potential DRPs were unnecessary drug therapy (n = 39), (potential) adverse effects (n = 17) and practical problems/optimisations (n = 14). In total, these potential DRPs led to 85 pharmacotherapeutic recommendations, summarised in Table 5.

| Follow-up of recommendations
Of the 85 recommendations, 35 (41%) were implemented by the treating physician directly after reviewing and discussing them with the patient. After the median follow-up of 4 months, 32 of the 35 (91%) implemented recommendations were still maintained.

| Reduction in polypharmacy
For 17 of 60 patients (28%), it was possible to reduce the pill burden for the complete follow-up period. In 12 patients (20%), at least one drug could be discontinued. Reducing the dosing frequency could be accomplished in six patients (10%). An attempt to reduce the pill burden was made for an additional two patients (3.3%); however, due to symptom recurrence the recommended change had to be reversed. As shown in Table 4, the number of potential DRPs was reduced from 101 to 60 (41% reduction), and the number of patients with at least one potential DRP could be reduced from 47 to 33 patients (30% reduction).

| Geriatric assessment subpopulations
To identify patients most likely to benefit from polypharmacy assessment, the outcomes of the concise geriatric assessment were assessed for possible associations with the occurrence of recommendations. No such subpopulation could be identified at statistical significance, although a trend towards significance (p = .079) was seen for people classified as "vulnerable" with the G8 screening (Table S2).

| Time investment
The mean time spent per patient is summarised in Table 6. On average, collecting the relevant data took about 15 min, the concise geriatric assessment 10 min, the polypharmacy anamnesis 24 min and the polypharmacy analysis, including providing the treatment plan to the treating physician, 22 min. In total, the mean duration of a polypharmacy assessment was 71 min.

| DISCUSSION
In this study, a pharmacist-led polypharmacy assessment led to the identification and implementation of many possible pharmacotherapeutic optimisations among the majority of older cancer patients. Within this population, there was a high prevalence of patients with at least one potential DRP, which is comparable to previous studies with older cancer patients (around 90%-95%) (Nightingale et al., 2017; Yeoh, Si, & Chew, 2013; Yeoh, Tay, Si, & Chew, 2015). Due to the many pharmacotherapeutic recommendations, with OncoSTRIP it was possible to reduce the total number of potential DRPs and the number of patients with at least one potential DRP. In comparison, polypharmacy assessment through iMAP resulted in the identification of three potential DRPs per patient on average. Additionally, the total number of DRPs could be reduced by 45.5% and the number of patients with at least one potential DRP by 20.5%. The recommendation acceptance rate was 46% (Nightingale et al., 2017). Thus, despite the identification of a higher number of DRPs per patient, the reductions in DRPs were comparable between iMAP and OncoSTRIP.
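To make the headline figures above easy to check, here is a minimal Python sketch that recomputes them from the raw counts reported in this section; the helper function is ours and not part of OncoSTRIP.

```python
# Recompute the summary percentages reported for OncoSTRIP.
def pct(part: int, whole: int) -> int:
    """Percentage of `whole` represented by `part`, rounded to an integer."""
    return round(100 * part / whole)

drps_found, drps_left = 101, 60            # potential DRPs before/after
patients_with_drp, patients_left = 47, 33  # patients with >=1 DRP before/after
recommendations, implemented = 85, 35      # optimisations suggested/accepted
maintained = 32                            # still in place after 4 months

print(pct(drps_found - drps_left, drps_found), "% DRP reduction")        # 41
print(pct(patients_with_drp - patients_left, patients_with_drp), "%")    # 30
print(pct(implemented, recommendations), "% acceptance rate")            # 41
print(pct(maintained, implemented), "% maintained at follow-up")         # 91
```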
Cumulatively, OncoSTRIP and iMAP provide reproducible and encouraging results that support routine implementation of polypharmacy assessment in this population. It is anticipated that a recommendation suggested or discussed by a large team, as used in iMAP, is more likely to be adopted than a recommendation suggested by one clinical pharmacist. However, with respect to the comparable acceptance rates of OncoSTRIP and iMAP, no clear preference exists between the two methods. In our view, reporting recommendations directly in the patients' electronic medical records is efficient, especially since the majority of recommendations do not require immediate action. Discussing polypharmacy recommendations in a multidisciplinary team could be beneficial in selected patients.

In this study, we did not record the reasons why prescribers may have chosen not to follow a recommendation. However, the suboptimal acceptance rate in our study can be partly explained by the observation that approximately one third of all suggested recommendations were conditional ("if life expectancy is estimated below 2 years, then ..."), as the pharmacist generally did not know the estimated life expectancy at the time of providing recommendations. It is likely that for some patients the life expectancy was higher than the prerequisite for the recommendation, thereby making it irrelevant.

Pill reduction can decrease the risk of adverse drug events and medication errors, and positively influence compliance by simplifying intake regimens. Pill reduction was accomplished and maintained in a substantial part of the patients. Undoing a pill reduction due to symptom recurrence was minimal, suggesting it is feasible for patients to stop the discontinued drug(s) for a longer period.

In conclusion, the OncoSTRIP polypharmacy assessment resulted in the identification of a high number of possible pharmacotherapeutic optimisations among older cancer patients. An essential aspect for this specific population is to consider the changed goals of care with respect to a reduced life expectancy. OncoSTRIP made it possible to integrate polypharmacy assessments into routine care of this population. Future studies are needed to identify possible high-risk subpopulations and to assess the effects of OncoSTRIP polypharmacy assessments on (long-term) patient outcomes.

ACKNOWLEDGEMENTS
This work was supported by the Dutch Cancer Society (grant number MST 2015-8008).

CONFLICT OF INTEREST
None to declare.
2019-11-07T14:10:55.811Z
2019-11-06T00:00:00.000
{ "year": 2019, "sha1": "7f553bcc2f47d50fcaaa3927f6fe5e146dc4b9c4", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ecc.13185", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1f13f948629acf0c145ef1cdef3310df396e5338", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55993439
pes2o/s2orc
v3-fos-license
The Semigroup and the Inverse of the Laplacian on the Heisenberg Group

By decomposing the Laplacian on the Heisenberg group into a family of parametrized partial differential operators L_τ, τ ∈ R \ {0}, and using parametrized Fourier-Wigner transforms, we give formulas and estimates for the strongly continuous one-parameter semigroup generated by L_τ, and the inverse of L_τ. Using these formulas and estimates, we obtain Sobolev estimates for the one-parameter semigroup and the inverse of the Laplacian. (This research has been supported by the Natural Sciences and Engineering Research Council of Canada.)

1 The Laplacian on the Heisenberg Group

If we identify R^2 with the complex plane C, the Heisenberg group H can be realized as C × R. In fact, H is a unimodular Lie group on which the Haar measure is just the ordinary Lebesgue measure dz dt. Let h be the Lie algebra of left-invariant vector fields on H. A basis for h is then given by the vector fields X, Y and T. The Laplacian Δ_H on H is defined in terms of X, Y and T, and a simple computation gives its expression in the coordinates (x, y, t). Let g be the Riemannian metric on R^3 given by a matrix g(x, y, t) for all (x, y, t) ∈ R^3; then Δ_H is also expressible in terms of ∂_1 = ∂/∂x, ∂_2 = ∂/∂y and ∂_3 = ∂/∂t. From the symbol σ(Δ_H)(x, y, t; ξ, η, τ), defined for all (x, y, t) and (ξ, η, τ) in R^3, it is easy to see that Δ_H is an elliptic partial differential operator on R^3, but not globally elliptic in the sense of Shubin [11]. Let us recall that Δ_H is globally elliptic if there exist positive constants C and R such that the symbol obeys suitable two-sided polynomial bounds for |(x, y, t; ξ, η, τ)| ≥ R.

The aim of this paper is to give new estimates for the strongly continuous one-parameter semigroup e^{-uΔ_H}, u > 0, generated by Δ_H, and the inverse Δ_H^{-1} of Δ_H. More precisely, we use the Sobolev spaces L^2_s(H), s ∈ R, as in [1,2], to estimate ||e^{-uΔ_H} f||_{L^2_s(H)}, u > 0, in terms of ||f||_{L^2(H)} for all f in L^2(H), and to give an estimate for ||e^{-uΔ_H} f||_{L^2(H)} in terms of ||f||_{L^2_s(H)}. These Sobolev spaces are also used to estimate the inverse. A function F on H × (0, ∞) plays the role of the heat kernel for the Laplacian Δ_H.

Using the same techniques as in [1], we obtain, for all f ∈ L^2(H) and u > 0, a decomposition of e^{-uΔ_H} f in terms of the parametrized operators L_τ, τ ∈ R \ {0}, and the functions f^τ on C, where f^τ(z) is the inverse Fourier transform of f(z, t) with respect to t, evaluated at τ, provided that the integral exists. In this paper, the nonzero parameter τ can be looked at as Planck's constant.

To obtain the estimates in this paper, we use formulas for e^{-uL_τ} and L_τ^{-1} in terms of the τ-Weyl transforms and the τ-Fourier-Wigner transforms of Hermite functions, τ ∈ R \ {0}, which we recall in, respectively, Section 2 and Section 3. The L^2-boundedness and the Hilbert-Schmidt property of τ-Weyl transforms are instrumental in obtaining the estimates. Basic information on the classical Fourier-Wigner transforms, Wigner transforms and Weyl transforms can be found in [13] among others.
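The displayed formulas for the group law, the vector fields X, Y, T and the Laplacian did not survive extraction. For orientation, here is one standard set of conventions in LaTeX; the factors of 1/2 (sometimes 1/4) and the overall sign of Δ_H vary across the literature and may differ from the authors' choices.

```latex
% Standard conventions for the Heisenberg group (normalizations vary).
\[
  (z,t)\cdot(w,s)=\Bigl(z+w,\;t+s+\tfrac12\,\mathrm{Im}\,(z\bar w)\Bigr),
  \qquad z,w\in\mathbb{C},\ t,s\in\mathbb{R},
\]
\[
  X=\frac{\partial}{\partial x}+\frac{y}{2}\frac{\partial}{\partial t},\qquad
  Y=\frac{\partial}{\partial y}-\frac{x}{2}\frac{\partial}{\partial t},\qquad
  T=\frac{\partial}{\partial t},
\]
\[
  \Delta_{\mathbb H}=-\bigl(X^{2}+Y^{2}+T^{2}\bigr)
  =-\Delta_{x,y}
   -\Bigl(1+\tfrac{x^{2}+y^{2}}{4}\Bigr)\frac{\partial^{2}}{\partial t^{2}}
   -\frac{\partial}{\partial t}
    \Bigl(y\frac{\partial}{\partial x}-x\frac{\partial}{\partial y}\Bigr).
\]
```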
In Section 2, we introduce the τ-Weyl transforms and prove results on the L^2-boundedness and the Hilbert-Schmidt property of the τ-Weyl transforms. The τ-Fourier-Wigner transforms of Hermite functions are recalled in Section 3. A formula for e^{-uL_τ} f, u > 0, for every function f in L^2(C), and an estimate for its L^2 norm, are given in Section 4. This formula immediately gives a formula for e^{-uΔ_H}, u > 0, using the inverse Fourier transform as indicated by (1.1). In Section 5, we use the family L^2_s(H), s ∈ R, of Sobolev spaces with respect to the center of the Heisenberg group, as in [1,2], to obtain Sobolev estimates for e^{-uΔ_H} f, u > 0, in terms of ||f||_{L^2(H)}, and Sobolev estimates in the reverse direction. In Section 6, we obtain a formula for L_τ^{-1} and estimates for L_τ^{-1}, which are then used to estimate Δ_H^{-1}. In Section 7, estimates for Δ_H^{-1} are given.

We end this section by putting in perspective the results in this paper. While the semigroup and the inverse can be studied in the framework of functional analysis, as explained in [3,4,5,8,9,16], the results and methods in this paper are based on explicit formulas in hard analysis and are related to the works in [1,2,6,7,10,12,14,15].

2 τ-Weyl Transforms

Let f and g be functions in L^2(R). Then for τ in R \ {0}, the τ-Fourier-Wigner transform V_τ(f, g) is defined for all q and p in R; in fact, V_τ(f, g) is a rescaling of V(f, g), the classical Fourier-Wigner transform of f and g. A proof can be found in [1]. It can be proved that V_τ(f, g) is a function in L^2(C), and we have the Moyal identity. We define the τ-Wigner transform W_τ(f, g) of f and g analogously. Then we have the following connection of the τ-Wigner transform with the usual Wigner transform.

Theorem 2.1. Let τ ∈ R \ {0}. Then for all functions f and g in L^2(R), W_τ(f, g) is given by a rescaling of W(f, g), the classical Wigner transform of f and g.

For all τ in R \ {0} and all functions f in the Schwartz space S(R) on R, we define W^τ_σ f to be the tempered distribution on R determined by its pairings with all g in S(R), where the pairing (F, G) is defined by an integral for all measurable functions F and G on R^n, provided that the integral exists. We call W^τ_σ the τ-Weyl transform associated to the symbol σ. It is easy to see how W^τ_σ acts when σ is a symbol in the Schwartz class.

We have the following estimate for the norm of the Weyl transform W^τ_σ in terms of the symbol.

Proof. Let f and g be functions in S(R). Then (W^τ_σ f, g) can be expressed through σ_{1/τ}, the dilation of σ with respect to the first variable by the amount 1/τ; more precisely, W^τ_σ = W_{σ_{1/τ}}, where W_{σ_{1/τ}} is the classical Weyl transform with symbol σ_{1/τ}. Thus, it follows from Theorem 21.1 in [14] that W^τ_σ : L^2(R) → L^2(R) is a bounded linear operator, with a norm bound in terms of σ.

We have the following result for the Hilbert-Schmidt norm of the Weyl transform W^τ_σ in terms of the L^2 norm of the symbol σ when σ ∈ L^2(C).

3 Fourier-Wigner Transforms of Hermite Functions

For τ ∈ R \ {0} and for k = 0, 1, 2, ..., we define e^τ_k to be a dilated copy on R of the Hermite function e_k. Here, e_k is the Hermite function of order k, defined in terms of the Hermite polynomial H_k of degree k. For j, k = 0, 1, 2, ..., we define e^τ_{j,k} on R^2 by e^τ_{j,k} = V_τ(e^τ_j, e^τ_k).

4 A Formula and an Estimate for e^{-uL_τ}, u > 0

Let τ ∈ R \ {0}. Then a formula for e^{-uL_τ}, u > 0, is given by the following theorem.

Theorem 4.1. Let f ∈ L^2(C). Then for u > 0, e^{-uL_τ} f is given by an expansion in the functions e^τ_{j,k}, where the convergence of the series is understood to be in L^2(C).

Proof. Let f ∈ L^2(C). Then from Theorem 3.3 we have, for u > 0, an expansion whose series is convergent in L^2(C). Now, using the formula for e^{-uL_τ} f in [2] and (4.1), we get the stated formula for all f in L^2(C) and u > 0.

For all τ in R \ {0}, we have the following estimate for the L^2 norm of e^{-uL_τ} f, u > 0, in terms of the L^p norm of f.

Proof. By Theorem 4.1, the Moyal identity (2.1) and the orthogonality relations for the functions e^τ_{j,k}, we obtain (4.2); applying Theorem 2.2 to (4.2) completes the proof.

5 Sobolev Estimates for e^{-uΔ_H}, u > 0

Let s ∈ R.
Then we define L^2_s(H) to be the set of all tempered distributions f in S'(H) such that f^τ(z) is a measurable function and a weighted L^2 norm built from f^τ is finite. For every f in L^2_s(H), we define the norm ||f||_{L^2_s(H)} by that expression. Then it can be shown easily that L^2_s(H) is an inner product space in which the inner product ( , ) induces this norm.

Proof. Let u > 0 and f ∈ L^2(H). Then by (1.1), Fubini's theorem, Plancherel's theorem and Theorem 4.2 with p = 2, we obtain a bound involving the inverse Fourier transform of f with respect to t. So, using a simple change of variable, we get the stated estimate, and this completes the proof.

The following result complements Theorem 5.1.

Theorem 5.2. Let s ≤ −1. Then for u > 0, e^{-uΔ_H} : L^2_s(H) → L^2(H) is a bounded linear operator, with a corresponding norm estimate. The proof of Theorem 5.2 is very similar to that of Theorem 5.1 and is hence omitted.

6 Two Formulas and an Estimate for L_τ^{-1}

Let τ ∈ R \ {0}. Then a formula for L_τ^{-1} is given by the following theorem, where the convergence of the series is understood to be in L^2(C).

The formula (6.4) is an important formula in its own right and we upgrade it to the status of a theorem.

Theorem 6.2. For all τ ∈ R \ {0}, the inverse L_τ^{-1} of the parametrized partial differential operator L_τ is given by an explicit series formula.

For all τ in R \ {0}, we have the following estimate for the L^2 norm of L_τ^{-1} f in terms of the L^2 norm of f.

Theorem 6.3. Let τ ∈ R \ {0}. Then the estimate holds for all functions f in L^2(C).

Proof. Let f and g be functions in L^2(R). Then the estimate follows from Theorems 2.3 and 6.2, and this completes the proof.

7 Estimates for Δ_H^{-1}

We have the following simple result giving the connection of Δ_H^{-1} with L_τ^{-1}, τ ∈ R \ {0}, which can be proved easily using the elementary properties of the Fourier transform and the Fourier inversion formula. We can now give the following theorem, which can be seen as another manifestation of the ellipticity of Δ_H.
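Since most displayed formulas above were lost in extraction, it may help to record the classical objects the paper builds on. The following are the standard one-dimensional formulas (compare Wong's book on Weyl transforms, cited as [13]/[14] above); the τ-dependent versions used here arise from them by dilations. Normalizations and conjugation conventions vary, so treat these as assumptions rather than the authors' exact definitions.

```latex
% Classical Fourier--Wigner and Wigner transforms, Moyal identity,
% Hermite functions, and the Weyl transform pairing (standard forms).
\[
  V(f,g)(q,p)=(2\pi)^{-1/2}\int_{\mathbb R}e^{iqy}
    f\!\left(y+\tfrac p2\right)\overline{g\!\left(y-\tfrac p2\right)}\,dy,
\]
\[
  W(f,g)(x,\xi)=(2\pi)^{-1/2}\int_{\mathbb R}e^{-i\xi p}
    f\!\left(x+\tfrac p2\right)\overline{g\!\left(x-\tfrac p2\right)}\,dp,
\]
\[
  \bigl(V(f_1,g_1),V(f_2,g_2)\bigr)_{L^2(\mathbb C)}
    =(f_1,f_2)_{L^2(\mathbb R)}\,\overline{(g_1,g_2)_{L^2(\mathbb R)}}
  \quad\text{(Moyal identity)},
\]
\[
  H_k(x)=(-1)^k e^{x^2}\frac{d^k}{dx^k}e^{-x^2},\qquad
  e_k(x)=\bigl(2^k k!\sqrt{\pi}\bigr)^{-1/2}e^{-x^2/2}H_k(x),
\]
\[
  (W_\sigma f)(g)=(2\pi)^{-1/2}\iint_{\mathbb R^2}
    \sigma(x,\xi)\,W(f,\bar g)(x,\xi)\,dx\,d\xi,
  \qquad f,g\in\mathcal S(\mathbb R).
\]
```

In the classical theory, the functions V(e_j, e_k) form an orthonormal basis of L^2(C) by the Moyal identity, which is what makes eigenfunction expansions of the semigroup and the inverse, as in Theorems 4.1 and 6.2, possible.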
2018-12-10T08:36:48.376Z
2010-01-01T00:00:00.000
{ "year": 2010, "sha1": "18e4cbc9fbc62bf8fab769ff43b94529d4793fc6", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.cl/pdf/cubo/v12n3/art06.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "18e4cbc9fbc62bf8fab769ff43b94529d4793fc6", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Mathematics" ] }
70583855
pes2o/s2orc
v3-fos-license
Are we strong enough to assert our rights in quality healthcare? The title of this speech is an important challenge for me, for a patient I mean, to face, because it is not easy to state today whether we are really strong enough to assert our rights to quality healthcare. At first sight, and in an optimistic vision, we could answer this question YES, we are, but I think we need to explore the field better before confirming that this is the right answer. The first thing to assess is what we mean by the pronoun WE: the patients' and parents' community represented by TIF? The whole community that plays a part around thalassemia and hemoglobinopathies, meaning patients and parents and scientists? What else?

Are we strong enough to assert our rights in quality healthcare? TIF is a fantastic synthesis of our presence at every level today; it is big enough to represent the patients successfully, as it actually does, but it needs to move forward, trying to become more productive and more conscious of the great role that patients can play in the future. It is time to have faith in their capacities. The scientific community acts every day in a context made really complicated by the scarce resources available for moving forward in quality healthcare, but despite these difficulties it has achieved extraordinary results, and the outcomes are better where there is a patients'/parents' association that works closely with the scientists. From all the evidence, we know very well what we need to move forward in the field of hemoglobinopathies, and for us, what we will try to achieve in the near future is what we mean by "quality healthcare." And, in my opinion, this is a common and concrete necessity; therefore we need to move forward together.

As you know, I am a patient, and so I will try to explore the perspective from a patient's point of view. It needs to be considered in two separate fields: the scientific and the social.

The scientific field requires particular attention to the new developments in clinical treatment and the new frontier of gene therapy, but also to some problems that are no longer emerging but now consolidated, such as osteoporosis and pain, which are reflected in a worsening of patients' quality of life, and to some heavy problems such as the fight against hepatitis C and the new frontiers of treatment for this disease. Giving our patients easy access to the best treatments worldwide should be our first goal, to be achieved as soon as possible, trying to overcome the differences from country to country.

No less important is the social field, especially in times of economic crisis, when the difficult situation that many countries are living through puts at high risk the maintenance and respect of many rights that patients achieved in the last decades through the strong efforts of our associations.

Furthermore, the immigration flow is bringing an increased number of people suffering from thalassemia and, above all, sickle cell disease, who require adequate treatment. This is happening in some areas of Europe never touched before, so the mapping of needs is changing very quickly and we should be ready to face this new emergency.

The economic crisis is the most visible problem, but it is not the only one we have to face. It is clear that many problems cannot be addressed by a few countries alone; they need to be governed by the European institutions which, unfortunately, seem not fitted to face the challenges of the present times. As representatives of patients, we need to become more productive in our action at the institutional level, but the European institutions need to be more effective and quick in the decision-making process.

To face all these important issues we need strong leadership in both the scientific and the social fields, and a close cooperation between the two parts, so as to cope with the process by putting on the carpet all our best resources, and to be able to provide the European institutions with all our support.

The very recent Manifesto presented by TIF at this Conference goes in the right direction, in my opinion; really, congratulations to TIF's President Mr. Englezos for this. It is exactly what we mean today by close cooperation of all the resources that play a part around thalassemia and hemoglobinopathies.

But beyond the words, we need to put on the field an effective, strong willingness to work together, in the awareness that joining our resources, always respecting the different roles, is the only chance we have for moving forward or for defending what we have achieved in the last decades.

Without any doubt, the leading role of some European countries in the field of hemoglobinopathies, like Italy and the UK, must be the guide through this dramatic period, to maintain the excellence of our thalassemia centers all across Europe and to achieve more important results in the future of clinical research for the benefit of all our patients.

The excellent assistance provided in many European centers, especially in Italy, Cyprus, Greece and the UK, is pivotal for the patients. Being treated in a specialized center is quite different from being treated outside it, in terms of lifespan and therefore in terms of quality of life.

Times are changing very fast: if some time ago the patients played a simply passive role in healthcare, today they can represent a strong voice and a qualified support in the framework of public health. There are many resolutions of the European Union that try to give the patients a more significant, active role in the decision-making process, but the procedures are often too complicated to convince the patients to adhere to these programs. It is pivotal for the institutions to facilitate access for the patients, showing that they trust them.

To the patients I have to say that, to consolidate this new status and to be credible, we need to be organized and to evaluate with particular attention every step forward, in order to avoid dangerous misunderstandings.

We must understand that the real challenge today is played at the European level on the field of rare diseases. Joining together in a strong European context is important for strengthening our action and assuring our patients that we are monitoring very closely what happens. Being at this level means having access to the funds that the European Community provides for high-level projects, in the field of therapy and at the social level; it means monitoring carefully all the procedures for the registration of new drugs, avoiding dangerous delays; it means having visibility and a voice where the decision-making process happens; and much more.

Of course, to work at such a level it is necessary to promote an alliance between the patients and the scientists, with respect for the role that each one has to play, so as to have a stronger voice. To promote this alliance it is extremely urgent to go back to working more closely than in the past, by recognizing that the developments in the scientific field are also due to the hard work done by the patients' and parents' associations all along the last decades.

Till this moment I have continuously referred to the patients, because I trust that we really can be the masters of our own future, but we cannot forget the impressive work and the strong willingness of our parents, and the role that many of them are still playing in our society. It is time to go back to sharing opinions and accepting each other's visions, in order to give a new and more modern impulse to our action.

Forbidding the patients access to the scientific sessions at the conferences, even the most expert amongst them, is therefore not the best thing to do. We need a new and modern vision of the patient, no longer as an object of study and a consumer of drugs, but as a subject carrying rights that have to be respected. Even the Thalassemia International Federation has to change its vision partially, finding the courage to invest much more in the patients, to trust them much more than ever done in the past, and to promote their involvement at the highest level.

For doing productive work together, the patients need to be prepared and educated. The Thalassemia International Federation has prepared programs for building the capacity of associations' members and for educating patients to become experts; this event is an example of that. We need to join TIF, which is putting into action every possible effort to unite all the patients worldwide.

Let's go back to the question now! When and if all these issues are put into the same logical frame, we can answer the question: YES, now we are strong enough to assert our rights. It would be a terrible mistake to consider ourselves on the finishing line, because up to now we still have some road in front of us to run, but we are on the right way. It is necessary to have clear in our minds a vision of the future, an open mind for facing with creativity and consciousness the problems we are living through, and a pool of expert patients deeply involved in the work at a high level with the institutions.
2018-12-11T05:54:42.813Z
2014-12-04T00:00:00.000
{ "year": 2014, "sha1": "952e6a993435fa327243729f6d5e5b5ca213d469", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2039-4365/4/3/4882/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "952e6a993435fa327243729f6d5e5b5ca213d469", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
121869560
pes2o/s2orc
v3-fos-license
SINGULAR PERTURBATION FOR NONLINEAR BOUNDARY-VALUE PROBLEMS

Asymptotic solutions of a class of nonlinear boundary-value problems are studied. The problem is a model arising in nuclear energy distribution. For large values of the parameter, the differential equations are of the singular-perturbation type and approximations are constructed by the method of matched asymptotic expansions.

The boundary-value problem is given by equations (1.1)-(1.2), where r^2(x) and f(x) are positive functions, n is a positive integer > 2, a > 0 and b > 0; approximations are constructed by the method of matched asymptotic expansions. The problem arises in connection with the distribution of the energy released r^2 in a nuclear power reactor as a result of a power excursion; f(x) is the space-dependent perturbation in the neutron multiplication of a reactor, and (1.1) and (1.2) give the distribution of the energy release from the start of the perturbation till the neutron population again becomes zero, see Ergen (3). The case of zero boundary conditions and constant coefficients has been investigated in Canosa and Cole (4). It will be assumed that y is positive and bounded; an upper bound for the solution is derived in Section 2. Note that if both r^2 and f are constant, then (2.2) implies that y cannot have any relative minimum. For definiteness, it will be assumed in general that y'(0) > 0 and y'(1) < 0, but the results can be easily modified for the other possibilities.

2. UPPER BOUND FOR THE MAXIMUM OF THE SOLUTION. Let the maximum value of the solution occur at x = c. A transformation of the problem then leads to the equation

d^2 Y/dx^2 + p(x) Y - f(x) Y^n = 0,    (3.5)

Equation (3.5) is a singular perturbation equation; the asymptotic expansions of its solutions, subject to the boundary conditions (1.2), will be studied in the remaining sections.

4. SINGULAR PERTURBATION PROBLEM. Consider the asymptotic case where a_1(ε) → 0 as ε → 0. Let f be expanded near the boundary as f(ε^{1/2} ξ) ≈ f(0) + a_1(ε) f'(0) ξ + ⋯.

Outer Solution. Assuming the solution in the form of an asymptotic series Y(x, ε) = Y_0(x) + ε Y_1(x) + ⋯ and substituting it into (4.1), the functions Y_j(x) can be determined recursively. The first two terms are given by (4.2). The boundary-layer solution has the form

Y(ξ, ε) = g_0(ξ) + a_1(ε) g_1(ξ) + ⋯.    (4.3)

In matching with the outer solution, the constant in (4.5) can be obtained. If f'(0) ≠ 0, then (4.5) becomes an equation whose right-hand side is a polynomial of degree (n-1) in g_0. Similar results can be obtained for the boundary layer at x = 1, with g_0(ξ) replaced by, say, h_0(η), where η = (1 - x)/ε^{1/2}, f(0) replaced by f(1), and f'(0) by f'(1).

5. EXPLICIT ASYMPTOTIC SOLUTIONS FOR SPECIAL CASES AND DISCUSSION OF RESULTS. There are two special cases in which explicit asymptotic solutions can be obtained. For n = 2, the first term in the outer expansion is given by Y_0 = 1/f; transforming back to the original variable y, we see that, away from the boundaries, the solution is given asymptotically in terms of r^2(x) and f(x). Equation (4.6) then becomes an equation for g_0 that can be integrated explicitly. Near the boundary x = 1, the first term of the boundary-layer solution is given by h_0. (5.3) Equations (5.2) and (5.3) show the exponential decay of the boundary-layer solutions into the outer solution, and the symmetry of the solution about the domain center if f has such symmetry and a = b. The first term outer solution and (5.2) reduce to the ones given in Canosa and Cole (4) when the coefficient f(x) ≡ 1 and the boundary conditions are zero.

When n = 3, the first term of the outer solution is Y_0 = 1/√f(x), and so away from the boundaries the solution is again determined asymptotically by r^2(x) and f(x). In this case, we see from (5.5) and (5.6) the exponential growth of the boundary-layer solutions into the outer solution, and again the symmetry of the solution about the domain center if f is symmetric and a = b. The first term outer solution and (5.5) reduce to the ones given in Canosa and Cole (4) when the coefficient f(x) ≡ 1 and the boundary conditions are zero.
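Because the displayed expansions above were damaged in extraction, the following LaTeX fragment records the generic structure of the matched-asymptotics ansatz being used; the specific coefficient functions depend on (1.1)-(1.2) and on the paper's scaling, so this is a schematic rather than the authors' exact formulas.

```latex
% Schematic of the matched asymptotic expansions used above.
\[
  \text{outer:}\qquad Y(x,\varepsilon)\sim Y_0(x)+\varepsilon\,Y_1(x)+\cdots,
\]
\[
  \text{inner, near } x=0:\qquad
  Y\sim g_0(\xi)+a_1(\varepsilon)\,g_1(\xi)+\cdots,
  \qquad \xi=x/\varepsilon^{1/2},
\]
\[
  \text{inner, near } x=1:\qquad
  Y\sim h_0(\eta)+\cdots,\qquad \eta=(1-x)/\varepsilon^{1/2},
\]
\[
  \text{matching:}\qquad
  \lim_{\xi\to\infty}g_0(\xi)=\lim_{x\to0^{+}}Y_0(x),\qquad
  \lim_{\eta\to\infty}h_0(\eta)=\lim_{x\to1^{-}}Y_0(x).
\]
```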
2017-07-29T18:58:51.500Z
1979-01-01T00:00:00.000
{ "year": 1979, "sha1": "9687ac94ea4ec5dcdc83a26007d1fee7a937375a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijmms/1979/162053.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9687ac94ea4ec5dcdc83a26007d1fee7a937375a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
11057790
pes2o/s2orc
v3-fos-license
Performance Evaluation of Hematologic Scoring System in Early Diagnosis of Neonatal Sepsis Objectives: The present study was undertaken to evaluate and highlight the importance of hematological scoring system (HSS) in the early detection of neonatal sepsis. Materials and Methods: The cross-sectional study enrolled 110 neonates who were clinically suspected of infection (study group) and normal neonates for comparison (controls), during the 1st week of life. All peripheral blood smears were analyzed using HSS of Rodwell et al., by pathologists blinded to the infection status of the newborns. HSS assigns a score of 1 for each of seven findings significantly associated with sepsis: Abnormal total leukocyte count, abnormal total polymorphonuclear neutrophils (PMN) count, elevated immature PMN count, elevated immature: Total (I:T) PMN ratio, immature: Mature (I:M) PMN ratio ≥0.3, platelet count ≤150,000/mm3, and pronounced degenerative or toxic changes in PMNs. Score of ≤2 was interpreted as sepsis unlikely; score 3-4: Sepsis is possible and ≥5 sepsis or infection is very likely. Blood culture was taken as a standard indicator for septicemia. The perinatal history, clinical profile and laboratory data were recorded and correlated in each case. Each hematological parameter was assessed for its individual performance and also with the culture-proven sepsis. Sensitivity, specificity, positive and negative predictive values (NPVs) were calculated for each parameter and for different gestational ages. P value was also calculated for different parameters. Results: Out of the 110 infants, based on clinical findings and laboratory data were classified into three categories: Sepsis (n=42), probable infection (n=22) and normal (n=46). Out of these, 42 (38.2%) newborns had positive blood culture. 63 (57%) neonates were preterm and 47 (43%) term. Male: female ratio was 0.96:1. The P value was significant for the different gestational ages (0.0002) and sex ratio (0.003). Immature polymorphonuclear neutrophils (PMN) count was the most sensitive and I:M PMN ratio, the most specific indicator of sepsis. For sepsis and probable sepsis, I:T PMN count and immature PMN count have highest sensitivity whereas I:T and I:M PMN ratio have highest specificity. HSS has much higher sensitivity and specificity in preterms than in term neonates. Positive predictive value and NPV is also higher in preterm than term for HSS. It was also seen that with increasing scores, the likelihood of sepsis also increased. Conclusion: The sensitivities of the various screening parameters were found to be satisfactory in identifying early onset neonatal sepsis. It is a simple and feasible diagnostic tool to guide towards the decision-making for a rationale treatment. INTRODUCTION Sepsis neonatorum continues to be a major cause of morbidity and mortality in developing countries, but is treatable if diagnosed on time. [1] One to eight cases of neonatal septicemia are reported in all live births. [2] Neonatal sepsis is the response of neonates to any kind of infections. It can be early or late in onset. In early onset, maximum cases are observed within 24 h of life, and smaller percentage thereafter up to 7 days. The infection can be contracted from the mother via transplacental route, ascending infection, during passage through an infected birth canal, or exposure to infected blood at delivery. 
[3] Newborn infants are more prone to bacterial invasion than older children or adults, due to their weaker immune system, premature babies being even more susceptible. [4] The major concern of the clinicians is its non-specific presentation and the sometimes rapid progression of sepsis. So, the significance of various screening tests, either singly or in combination, has been examined. The need is for an infallible test for bacteremias that is easily performed, quick, simple, and cost-effective. Monroe devised criteria that used three parameters: total PMN count, immature PMN count, and I:T ratio, whereas in this hematologic scoring system even more indices are used. Here, in this study, we undertake to evaluate the performance of the hematological scoring system (HSS) of Rodwell et al. (1988) in 110 neonates for the early detection of sepsis in high-risk infants, which should improve the diagnostic accuracy of the complete blood cell count as a screening test. The present study was undertaken to evaluate and highlight the importance of HSS in the early detection of neonatal sepsis. MATERIALS AND METHODS This study is a hospital-based cross-sectional study of all the neonates during the 1st week after birth reporting to the Pediatrics Department of Maharishi Markandeshwar Institute of Medical Sciences and Research, Mullana, Ambala from April to July 2011. Neonates who were clinically suspected to have bacterial infection within the 1st week of life, based on perinatal risk factors and clinical features, were taken as the study group. For comparison, neonates reporting to the department for immunization or attending well-baby clinics were taken as controls. Infants with <37 weeks gestational age were regarded as preterm, and those with >37 weeks as term. [10] The study included three categories: category (1), infants with sepsis and positive blood culture; category (2), infants with probable infection and strong clinical history but negative blood culture; category (3), normal infants without any evidence of sepsis. Under complete aseptic conditions, 0.5-1 ml of blood was obtained by peripheral venipuncture. The samples were collected in non-siliconized vacutainer tubes containing tripotassium ethylene diamine tetra-acetic acid. Sepsis work-up involved complete blood counts along with the hematological score and culture. Peripheral blood smears were prepared immediately, stained with Leishman stain, and examined under the oil-immersion lens of a light microscope at a magnification of ×1000. The total leucocyte count reading was obtained by an MS 95 automated analyzer and later corrected for nucleated red blood cells. Differential counts were performed on these smears by counting at least 200 cells. All the peripheral blood smears were analyzed by pathologists blinded to the infection status of these infants, using the HSS of Rodwell et al. HSS [11] includes: • White blood cell (WBC) count and its differential • Platelet count • Nucleated red blood cell count (to correct the total WBC count) • Assessment of degenerative and toxic changes in PMNs. HSS assigns a score of 1 for each of seven findings significantly associated with sepsis: abnormal total leukocyte count, abnormal total PMN count, elevated immature PMN count, elevated immature to total (I:T) PMN ratio, immature to mature (I:M) PMN ratio ≥0.3, platelet count ≤150,000/mm3, and pronounced degenerative or toxic changes in PMNs.
An abnormal total PMN count is assigned a score of 2 instead of 1 if no mature polymorphs are seen on the peripheral smear, to compensate for the low I:M ratio [Table 1]. Immature polymorphs include promyelocytes, myelocytes, metamyelocytes, and band forms. A band cell is described as a PMN in which the nucleus is indented by more than one-half, but in which the isthmus between the lobes is wide enough to reveal two distinct margins with nuclear material in between. Degenerative changes include vacuolization, toxic granulations, and Dohle bodies [Figure 1]. A score of ≤2 was interpreted as sepsis unlikely, a score of 3-4 as sepsis possible, and a score of ≥5 as sepsis or infection very likely. The minimum score that can be obtained is 0 and the maximum score is 8. Statistical analysis Sensitivity, specificity, and positive and negative predictive values (PPVs and NPVs) were calculated for each parameter and for different gestational ages. P values were also calculated for the different parameters. Data were compiled and statistically analyzed using SPSS software. The research work was approved by the institutional ethical committee, and informed consent was also obtained from the parents of all the neonates. The P value for the various gestational ages and the sex ratio was found to be significant [Table 5]. HSS has much higher sensitivity and specificity in preterm than in term neonates. PPV and NPV are also higher in preterm than in term neonates for HSS [Table 6]. DISCUSSION The mortality and morbidity caused by neonatal sepsis make HSS an important tool in its early diagnosis. It enables early intervention compared with culture reports, which may take days to return a positive result, thus saving the lives of many neonates and avoiding unnecessary and prolonged exposure to antibiotics. Hence, it helps provide a more rational approach to antibiotic usage. The culture reports may follow later. The results of this study were found to be more accurate in preterm (57%) than in full-term infants (43%), which is a further advantage in diagnosing sepsis. This is because preterm infants are more susceptible to infections than term infants, owing to their poor immune system, low levels of immunoglobulins, and low birth weight. [12] This scoring system is also significant in many other ways, given its easy availability, accessibility, low cost, and speed; it is practical in virtually all laboratories, which makes it convenient to have a high-risk infant tested and diagnosed on time. Neonatal sepsis is also known as sepsis neonatorum and neonatal septicemia. It is an infection occurring in newborn infants, the cause of which can be bacterial, viral, etc. Its diagnosis is crucial, as its presentation is very nonspecific and the death toll is very high, especially in developing countries. Most cases present on the 1st day of life, with the majority in less than 12 h, [13] as is the case in our study, in which 51.81% of neonates were less than 24 h old. Monroe devised criteria that used three parameters: total PMN count, immature PMN count, and I:T ratio, [14] whereas in this hematologic scoring system even more indices are used. In our study, we correlated the sensitivity, specificity, PPV, and NPV of the various parameters with the different groups and also with other studies. An elevated I:T ratio was found to be the most reliable indicator of sepsis in our study, as in various other studies such as those done by Ghosh et al. [4] and Narasimha et al.
[15] Immature PMN count and the I:T PMN ratio were also very sensitive indicators of neonatal sepsis. Degenerative changes in the PMNs made no significant contribution to the diagnosis in this study. Moreover, the presence of toxic granules indicates the production of unusual PMNs during infection and stress-induced leucopoiesis. They are never seen in healthy babies. Their presence invariably indicates sepsis, but their count is not always increased. [15,16] Also, in our study, the total PMN count had a limited role in sepsis screening. This finding correlated well with the study done by Akenzua, who inferred that these patients had a normal PMN count but raised band forms, and that the elevation was often very late and inconsistent. [17] Thrombocytopenia was frequently associated with sepsis and indicated poor prognosis. This is thought to be due to increased platelet destruction, sequestration secondary to infections, failure of platelet production due to reduced megakaryocytes, or the damaging effects of endotoxin. [18] We also found, in agreement with other studies, [11,19-21] that the higher the score, the greater the chance of sepsis, and vice versa. The simplification and standardization of the interpretation of this global test are still required. [11] A variety of other rapid methods for the detection of microorganisms, such as DNA probes, automated blood culture systems, and fluorometric detection systems, are available, but HSS can still be used as a screening test for diagnosing sepsis and for differentiating infected neonates from non-infected ones. Furthermore, the sensitivity and specificity of the test are also high, with the certainty of sepsis increasing with the score. [15,22]
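For illustration, here is a minimal Python sketch of the Rodwell scoring rule and the screening metrics described above. The age-specific reference ranges that define "abnormal" and "elevated" counts are not reproduced in the text, so those criteria are passed in as boolean inputs, and the example values are hypothetical.

def hss_score(abnormal_tlc, abnormal_total_pmn, no_mature_pmn_seen,
              elevated_immature_pmn, elevated_it_ratio,
              im_ratio, platelet_count, degenerative_changes):
    # One point per positive finding; an abnormal total PMN count scores
    # 2 instead of 1 when no mature polymorphs are seen on the smear.
    score = 0
    if abnormal_tlc:
        score += 1
    if abnormal_total_pmn:
        score += 2 if no_mature_pmn_seen else 1
    if elevated_immature_pmn:
        score += 1
    if elevated_it_ratio:
        score += 1
    if im_ratio >= 0.3:
        score += 1
    if platelet_count <= 150_000:        # per mm3
        score += 1
    if degenerative_changes:
        score += 1
    return score

def interpret(score):
    if score <= 2:
        return "sepsis unlikely"
    if score <= 4:
        return "sepsis possible"
    return "sepsis or infection very likely"

def screening_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, PPV, and NPV from a 2x2 table, with blood
    # culture as the reference standard.
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Hypothetical neonate: score 6, interpreted as "sepsis or infection very likely".
s = hss_score(True, True, False, True, True, 0.4, 120_000, False)
print(s, interpret(s))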
2018-04-03T00:21:11.354Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "a5faf10b9904da4bedd8e60e93312c85c85215cc", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc3761960", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "af0e9fa599102fa6f653f781ad323f92bb6c8942", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13561880
pes2o/s2orc
v3-fos-license
Soft Expert Sets. Introduction Most of the problems in engineering, medical science, economics, environments, and so forth, have various uncertainties. Molodtsov introduced soft set theory as a general mathematical tool for dealing with such uncertainties. Many researchers have studied this theory, and they created some models to solve problems in decision making and medical diagnosis, but most of these models deal only with one expert, and if we want to take the opinion of more than one expert, we need to do some operations such as union, intersection, and so forth. This causes a problem for the user, especially those who use questionnaires in their work and studies. In our model the user can know the opinion of all experts in one model without any operations. Even after any operation on our model the user can know the opinion of all experts. So in this paper we introduce the concept of a soft expert set, which will be more effective and useful. We also define its basic operations, namely, complement, union, intersection, AND, and OR, and study their properties. Finally, we give an application of this concept in a decision-making problem. Preliminaries In this section, we recall some basic notions in soft set theory. Molodtsov [1] defined a soft set in the following way. Let U be a universe and E be a set of parameters. Let P(U) denote the power set of U and A ⊆ E. A pair (F, A) is called a soft set over U, where F is a mapping F : A → P(U). In other words, a soft set over U is a parameterized family of subsets of the universe U. For ε ∈ A, F(ε) may be considered as the set of ε-approximate elements of the soft set (F, A). The following definitions are due to Maji et al. Definition 2.10. The union of two soft sets (F, A) and (G, B) over a common universe U is the soft set (H, C), where C = A ∪ B and, for all ε ∈ C, H(ε) = F(ε) if ε ∈ A − B, H(ε) = G(ε) if ε ∈ B − A, and H(ε) = F(ε) ∪ G(ε) if ε ∈ A ∩ B (2.3). The following definition is due to Ali et al. [8], since they discovered that Maji et al.'s definition of intersection in [3] is not correct. Definition 2.11. The extended intersection of two soft sets (F, A) and (G, B) over a common universe U is the soft set (H, C), where C = A ∪ B and, for all ε ∈ C, H(ε) = F(ε) if ε ∈ A − B, H(ε) = G(ε) if ε ∈ B − A, and H(ε) = F(ε) ∩ G(ε) if ε ∈ A ∩ B. Soft Expert Set In this section, we introduce the concept of a soft expert set, and give definitions of its basic operations, namely, complement, union, intersection, AND, and OR. We give examples for these concepts. Basic properties of the operations are also given. Let U be a universe, E a set of parameters, and X a set of experts (agents). Let O be a set of opinions, Z = E × X × O, and A ⊆ Z. A pair (F, A) is called a soft expert set over U, where F is a mapping F : A → P(U) and P(U) denotes the power set of U. Note 3.2. For simplicity we assume in this paper two-valued opinions only in the set O, that is, O = {0 = disagree, 1 = agree}, but multivalued opinions may be assumed as well. Example 3.3. Suppose that a company produced new types of its products and wishes to take the opinion of some experts concerning these products. Let U = {u1, u2, u3, u4} be a set of products, E = {e1, e2, e3} a set of decision parameters where ei (i = 1, 2, 3) denotes the decision "easy to use," "quality," and "cheap," respectively, and let X = {p, q, r} be a set of experts. Suppose that the company has distributed a questionnaire to three experts to make decisions on the company's products, and we get the following (3.2). Then we can view the soft expert set (F, Z) as consisting of the following collection of approximations (3.3). Notice that in this example the first expert, p, "agrees" that the "easy to use" products are u1, u2, and u4. The second expert, q, "agrees" that the "easy to use" products are u1 and u4, and the third expert, r, "agrees" that the "easy to use" products are u3 and u4. Notice also that all of them "agree" that product u4 is "easy to use." Example 3.13. Consider Example 3.3.
Then the disagree-soft expert set (F, A)0 over U is given in (3.10). Proposition 3.14. If (F, A) is a soft expert set over U, then … Proof. The proof is straightforward. Assume that a company wants to fill a position. There are eight candidates who form the universe U = {u1, u2, u3, u4, u5, u6, u7, u8}. The hiring committee considers a set of parameters, E = {e1, e2, e3, e4, e5}, where the parameters ei (i = 1, 2, 3, 4, 5) stand for "experience," "computer knowledge," "young age," "good speaking," and "friendly," respectively. Let X = {p, q, r} be a set of experts (committee members). Suppose the committee gives its opinions, yielding a soft expert set (F, Z). In Tables 1 and 2 we present the agree-soft expert set and the disagree-soft expert set, respectively, such that if ui ∈ F1(ε) then uij = 1, otherwise uij = 0, and if ui ∈ F0(ε) then uij = 1, otherwise uij = 0, where uij are the entries in Tables 1 and 2. The following algorithm may be followed by the company to fill the position.
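To make these constructions concrete, here is a minimal Python sketch of a soft expert set over the hiring example: the agree-approximations map (parameter, expert) pairs to subsets of U, the disagree-approximations are their complements within U, and candidates are ranked by agree entries minus disagree entries. The committee opinions are hypothetical, and the scoring rule is one plausible reading of the selection algorithm announced above, not a verbatim reproduction of the paper's algorithm.

U = ["u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8"]   # candidates
E = ["e1", "e2", "e3", "e4", "e5"]                     # parameters
X = ["p", "q", "r"]                                    # experts

# Agree-approximations F(e, x, 1): the candidates expert x "agrees"
# satisfy parameter e. Hypothetical opinions; only a few shown.
F1 = {
    ("e1", "p"): {"u1", "u3", "u4"},
    ("e1", "q"): {"u1", "u4", "u8"},
    ("e1", "r"): {"u4", "u6"},
    ("e2", "p"): {"u2", "u4"},
    ("e2", "q"): {"u4", "u7"},
    ("e2", "r"): {"u2", "u4", "u5"},
}

# Disagree-approximations F(e, x, 0): complements within U.
F0 = {key: set(U) - members for key, members in F1.items()}

# Tables 1 and 2 as 0/1 entries: u_ij = 1 iff u_i lies in the approximation;
# here we only need the column totals per candidate.
agree_total = {u: sum(u in m for m in F1.values()) for u in U}
disagree_total = {u: sum(u in m for m in F0.values()) for u in U}

# Selection rule: the highest (agree total - disagree total) wins.
score = {u: agree_total[u] - disagree_total[u] for u in U}
best = max(score, key=score.get)
print(score)
print("hire:", best)   # u4: present in every agree-approximation above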
2016-01-13T18:10:52.408Z
2011-01-04T00:00:00.000
{ "year": 2011, "sha1": "1aea08d4043b6878e3903cb8f93cc62e6195bdb9", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/2011/757868.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "619c49e5f2b6d19e7b6f9c1304278dff2f870391", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
255589746
pes2o/s2orc
v3-fos-license
Effects of intestinal microbes on rheumatic diseases: A bibliometric analysis Background Rheumatic diseases (RD) are a group of multi-system inflammatory autoimmune diseases whose causes are still under study. In the past few decades, researchers have found traces of the association between rheumatic diseases and intestinal microbiota, which can partially explain the pathogenesis of rheumatic diseases. We aimed to describe the research trend and main divisions on how gut flora interacts with rheumatic diseases, and discussed possible clinical applications. Methods We analyzed bibliometric data from the Web of Science core collection (dated 15th May 2022). Biblioshiny R language software packages (bibliometrix) were used to obtain the annual publications and citations, core sources according to Bradford's law, and the country collaboration map. We designed and verified the keyword co-occurrence network and strategic diagram with the help of VOSviewer and CiteSpace, subdivided the research topic into several themes, and identified research dimensions. The tables of most local cited documents and core sources were processed manually. Furthermore, the Altmetric Attention Score and the annual Altmetric Top 100 were applied to analyze the annual publication and citation. Results From a total of 541 documents, we found that the overall trend of annual publications and citations is increasing. The major research method is to compare the intestinal microbial composition of patients with a certain rheumatic disease and that of a control group to determine microbial alterations related to the disease's occurrence and development. According to Bradford's law, the core sources are Arthritis and Rheumatology, Annals of the Rheumatic Diseases, Current Opinion in Rheumatology, Nutrients, Rheumatology, and Journal of Rheumatology. Since 1976, 101 countries or regions have participated in studies of rheumatology and intestinal microbes. The United States ranks at the top and has the broadest academic association with other countries. Five themes were identified, including the pivotal role of inflammation caused by intestinal bacteria in rheumatic pathogenesis, the close relationship between rheumatic diseases and inflammatory bowel disease, immunoregulation as a mediator of the interaction between rheumatic diseases and gut flora, dysbiosis and decreased diversity in the intestine of patients with rheumatic diseases, and the influence of oral flora on rheumatic diseases. Additionally, four research dimensions were identified, including pathology, treatment, disease, and experiments. Conclusion Studies on rheumatic diseases and the intestinal microbiota are growing. Attention should be paid to the mechanism of their interaction, such as the microbe-immune-RD crosstalk. Hopefully, the research achievements can be applied to disease prevention, diagnosis, and treatment, and our work can contribute to the readers' future research. Introduction Rheumatic diseases (RD) are a group of multi-system inflammatory autoimmune diseases with unknown causes. Their pathological lesions involve joints and the surrounding tissues, mainly affecting small joints such as those of the hands and feet. Patients with early rheumatic diseases often have symptoms such as joint pain, swelling, and dysfunction. In advanced stages, patients may face joint stiffness and deformity, muscle atrophy, and even disability.
As chronic inflammatory diseases with long courses, severe damage, and a high disability rate, rheumatic diseases seriously threaten patients' health and bring a great burden to their families and society. Take rheumatoid arthritis (RA) for example: according to the Global Burden of Disease 2010 Study, the global prevalence of RA was 0.24% (Cross et al., 2014). In the United States, estimates of RA prevalence tended to be higher, typically between 0.5% and 1% (Myasoedova et al., 2010; Hunter et al., 2017). Women are more likely to suffer from inflammatory autoimmune rheumatic diseases. It is estimated that one in 12 women will develop a rheumatic disease during their lifetime (Crowson et al., 2011). Additionally, patients may have some adverse long-term outcomes, such as physical disability (Guevara-Pacheco et al., 2017), work incapacity (Xiang et al., 2020), decreased life quality (Matcham et al., 2014), and even premature death (Bournia et al., 2021). A study shows that the disability score deteriorated by 1.8% per year in the Swiss national RA cohort (Heinimann et al., 2018). The pathogenesis of rheumatic diseases involves multiple factors such as environment, inheritance, and immune dysregulation, which is complicated and still not well understood. As for treatment, non-steroidal anti-inflammatory drugs (NSAIDs), glucocorticoids, and disease-modifying antirheumatic drugs (DMARDs) are often required. However, multiple side effects are hard to avoid. NSAIDs may cause cardiovascular events, such as myocardial ischemia and stroke (Bhala et al., 2013). Glucocorticoid treatment can lead to osteoporosis and increase the risk of fracture (Rossini et al., 2017). The most common adverse reactions to DMARDs are gastrointestinal toxicity (such as nausea, vomiting, and diarrhea), hepatotoxicity, and pulmonary toxicity. Therefore, mechanism research and drug discovery are of great urgency. Scientists are exploring unknown pathogenic mechanisms in the hope of finding new drug targets. Microbes coexist with their human hosts all through their lives, acting as a crucial factor in human health and homeostasis. Among all parts of the human body, the gastrointestinal tract contains the largest proportion of commensal bacteria (Simon and Gorbach, 1984). There are over 1000 species of intestinal bacteria and at least 160 species in each individual (Qin et al., 2010). The microbiota in the intestine is relatively stable throughout one's life (Reid et al., 2011) and has been proved to have profound effects on the host's local and systemic immune system (Honda and Littman, 2016). Therefore, maintenance of a balanced symbiosis is indispensable in preventing autoimmune rheumatic diseases. Over the past few decades, numerous studies have linked microbiomes to various diseases, including obesity (Torres-Fuentes et al., 2017), diabetes (Gurung et al., 2020), asthma (Barcik et al., 2020), atherosclerosis (Jonsson and Bäckhed, 2017), inflammatory bowel disease (IBD) (Nishida et al., 2018), and many others related to the immune system. Among rheumatic diseases, rheumatoid arthritis (RA) (Maeda and Takeda, 2017), ankylosing spondylitis (AS) (Guggino et al., 2021), systemic lupus erythematosus (SLE) (Luo et al., 2018), psoriatic arthritis (PsA) (Myers et al., 2019), gout (Chu et al., 2021), scleroderma (Volkmann, 2017), and Behcet's disease (Ye et al., 2018) have been confirmed to have connections with the microbiome.
For example, Prevotella species are involved in the pathogenesis of RA. It was found that Prevotella histicola in the intestinal microbiota suppressed the development of arthritis (Maeda and Takeda, 2017). Besides, SLE patients possessed an alteration in gut microbiota, including a greater abundance of the bacterial phylum Proteobacteria and a lower abundance of the bacterial genera Odoribacter and Blautia (Luo et al., 2018). Consequently, it is necessary for us to sort and summarize the currently known relationships between rheumatic disorders and intestinal microbes. This is of great importance for elucidating the pathogenesis of RD and may provide new ideas for treatment in the future. In recent years, many scholars have applied bibliometric analysis to the development of knowledge in health. The term "bibliometrics" was originally invented by Pritchard (1969), as "the application of mathematical and statistical methods to books and other media of communication". It was created to meet the need for quantitative studies of scientific publications. From then on, bibliometrics gradually evolved and was technically perfected. Since the 1980s, more and more researchers have applied bibliometrics in the field of medicine and health care. Nowadays, the application of bibliometrics to medical topics has translated into everyday clinical activities, which enables us to understand the evolution and main development directions of a given medical subject through a scientific approach and provides guidance for further research (Kokol et al., 2021). Similarly, bibliometric analysis can be applied to explore the relationship between gut flora and rheumatic diseases. So far, there has been no bibliometric study on the association between rheumatic diseases and intestinal microbiota. Therefore, we collected the related articles and analyzed them with the help of bibliometric analysis, with the purpose of identifying the distribution structure, quantitative relations, and major research trends in the field. We hope our work will serve as a guide for identifying key knowledge and research priorities. Data sources and retrieval strategies Data were obtained from the Web of Science on 15th May 2022. Web of Science is the world's largest and most comprehensive collection of academic resources, which contains more than 21,000 authoritative, high-impact journals and more than 300,000 academic conferences around the world, covering the fields of natural sciences, engineering technology, biomedicine, social sciences, arts, and humanities. Data analysis We analyzed the bibliographic data through several bibliometric analysis applications. Biblioshiny R language software packages were used to analyze and visualize article and journal performance, country influence and collaboration, research trends, and keywords. Biblioshiny, developed by Massimo Aria and Corrado Cuccurullo from the University of Naples and the University of Campania "Luigi Vanvitelli" (Italy), is powered by Bibliometrix and programmed in the R language, providing an intuitive and well-structured interface (Xiang et al., 2020). To analyze the annual publication and citation, the Altmetric Attention Score (AAS) was applied. AAS is a weighted count of all the attention a research article receives online, which can be used for measuring a study's influence in a comprehensive way, including social, academic, and other aspects. AAS and the list of the annual Altmetric Top 100 were obtained from the Altmetric.com website (Altmetric, London, United Kingdom).
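To illustrate what "a weighted count" means here, the snippet below combines hypothetical per-source mention counts with hypothetical weights; Altmetric's actual sources and weights are proprietary and are not reproduced.

# Hypothetical weights and mention counts, for illustration only; the
# real Altmetric Attention Score uses Altmetric's own sources and weights.
weights = {"news": 8, "blogs": 5, "twitter": 1, "wikipedia": 3}
mentions = {"news": 10, "blogs": 4, "twitter": 120, "wikipedia": 1}

aas_like = sum(weights[src] * n for src, n in mentions.items())
print(aas_like)  # 8*10 + 5*4 + 1*120 + 3*1 = 223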
As regards keyword analysis, with the help of VOSviewer (van Eck and Waltman, 2010) and CiteSpace (Chen, 2006), the co-occurrence network and the strategic diagram were constructed, verified, and then optimized, both of which were built from the top 250 high-frequency keywords selected by "keyword plus" and clustered with the "walktrap" algorithm. The tables of most local cited documents and core sources information were designed manually. Additionally, for a better understanding of the research trends and hotspots, we carefully read the related documents, subdivided the research topic into several themes, identified research dimensions, and discussed their future directions respectively. Annual publication and citation Annual publication and citation offer us a view of the general situation and development tendency of the field. As shown in Figure 1, from 1976 to 2022, a total of 541 documents received 13762 citations. Despite slight fluctuations in the number of annual publications, the overall trend is growing. We can approximately divide the development into three stages. 1976-1995 is an exploration period with sporadic production, during which only five documents were published and almost none were cited. 1996-2012 is a sprouting period. The numbers of annual publications and citations remained low, while there were documents published and cited almost every year. The next period, 2013-2021, is a period of expansion, which witnessed a rapid increase in annual publications, reaching a maximum of 97. At the same time, annual citations grew at an exponential rate. In order to interpret such rapid growth from 2013 to 2021, we have explored the Altmetric Top 100 over the years, which includes the top 100 studies with the highest AAS of the year. It is found that from 2013 to 2015, there was one article each year discussing gut microbiota and reaching the annual Altmetric Top 100, the titles of which are "Intestinal microbiota metabolism of l-carnitine, a nutrient in red meat, promotes atherosclerosis" (Koeth et al., 2013), "Artificial sweeteners induce glucose intolerance by altering the gut microbiota" (Suez et al., 2014), and "Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome" (Chassaing et al., 2015). Their AASs are 1089, 4794, and 1835, while their annual ranks are 64, 3, and 57, respectively (Supplementary Figure S1). It follows that the intestinal flora was possibly a popular topic in 2013-2015. Therefore, we considered that the increased attention to gut microbiota research in the 2010s could be the reason for the growth spurt in annual publications during 2013-2021. What's more, as we can see in Table 1, all of the top 20 most cited documents were released during this flourishing period. In particular, the top article achieves an AAS of 252 (Supplementary Figure S1) and ranks in the top 5% of all research outputs ever tracked by Altmetric, indicating its relatively high online attention, which we will further explain in the next chapter. Most cited documents In Table 1, we present the top 20 most local cited documents, among which there are 15 articles and 5 reviews. The content of the articles was analyzed. The major research method is to compare the microbial composition of patients with a certain rheumatic disease and that of healthy controls to determine whether certain microbial genus alterations are associated with the occurrence and development of the disease.
The most studied disease is rheumatoid arthritis, while other rheumatic diseases, including systemic lupus erythematosus, ankylosing spondylitis, psoriatic arthritis, and systemic sclerosis, were also researched. Most of these studies have found that patients with rheumatic disease exhibit decreased diversity in gut flora and a relative increase or decrease in the abundance of specific species. In addition to the analysis of intestinal microbes, dental and salivary samples were also examined, where similar changes were observed. As illustrated in the most cited article, dysbiosis (altered microbial composition) was detected in fecal, dental, and salivary samples from patients with RA, with a decrease in Haemophilus spp. as well as an increase in Lactobacillus salivarius existing at all three sites compared with healthy controls (Zhang et al., 2015). What's more, there is a study pointing out that the degree of these bacterial changes was correlated with disease duration and autoantibody levels (Chen et al., 2016). From these results, it can be inferred that detecting microbial alterations can be used in the screening and diagnosis of multiple rheumatic diseases, and treating the dysbiosis might be a way to ameliorate these diseases. The remaining five documents are reviews mainly discussing the role and influence of intestinal dysbiosis on the development of various rheumatic diseases. The presence of bacteria on mucosal surfaces, such as the intestine, the gingiva, and the respiratory tree, is able to trigger host immune responses and cause inflammation (Brusca et al., 2014). In the review written by Gill et al. (2015), it is also pointed out that, in addition to rheumatic diseases, various inflammatory disorders, such as IBD, are affected by microbiota. As mentioned above, the most local and global cited document is a metagenome-wide association study (MGWAS) published by a research team, led by Zhang et al. (2015), of Peking Union Medical College Hospital. This article is the best-known one to apply novel sequencing technologies to the intestinal dysbiosis of rheumatic diseases. Through metagenomic shotgun sequencing, microbial alterations in the mouth, gut, and saliva of patients with RA were identified, and since then, a growing number of studies have used similar approaches to explore the field. With the development of sequencing technology, compositional and functional changes of gut flora have been found in a variety of rheumatic diseases, such as AS (Wen et al., 2017) and SLE (Tomofuji et al., 2021). In this way, the dysbiosis of specific rheumatic diseases can be understood more precisely, so that targeted therapies can be applied. Source analysis All selected articles were published in one of 199 sources. Six journals have published more than 10 articles, accounting for 34.2% (185 of 541) of the total articles. We applied Bradford's law to identify core sources of the research on intestinal microbes and rheumatic diseases (Bradford, 1934). As depicted in Figure 2, the core sources are Arthritis and Rheumatology, Annals of the Rheumatic Diseases, Current Opinion in Rheumatology, Nutrients, Rheumatology, and Journal of Rheumatology, among which four journals were classified as Q1 and the remaining two as Q2 by the Journal Citation Reports standard in 2020. Five journals were in the JCR category of rheumatology and five had an impact factor greater than five (Table 2).
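As a sketch of how Bradford's law can be operationalized, the following Python snippet ranks sources by article count and returns the journals that together account for the first third of all articles; the journal names and counts are hypothetical, and real implementations (e.g., in bibliometrix) may differ in details.

def bradford_core(journal_counts):
    # journal_counts: {journal: number of articles}
    ranked = sorted(journal_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(journal_counts.values())
    core, running = [], 0
    for name, n in ranked:
        core.append(name)
        running += n
        if running >= total / 3:   # first third of the articles = core zone
            break
    return core

demo = {"J1": 60, "J2": 40, "J3": 30, "J4": 25, "J5": 20,
        "J6": 10, "J7": 8, "J8": 5, "J9": 2}
print(bradford_core(demo))  # ['J1', 'J2'] cover 100 of 200 articles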
Country influence and collaboration Since 1976, 53 countries or regions have participated in studies of rheumatology and intestinal microbes. The United States tops the ranking of country production with a frequency (number of documents) of 527, followed by China (330), Italy (191), the United Kingdom (116), and Japan (111). Correspondingly, the top five most cited countries are the same, including the United States (4043), China (2175), the United Kingdom (1136), Italy (800), and Japan (760). The more citations a country has, the stronger its scientific influence is in this field. Therefore, it reveals that these five countries are the most active, prolific, and influential in this field. Along with the country collaboration map (Figure 3), a complete picture of academic performance and country collaboration is presented. The figure was created based on the frequency of cooperation between two countries. If there exists research cooperation between two countries, those two areas will be connected by a line on the map. The more connections two countries have, the thicker the line is. The United States has the broadest academic associations with other countries or regions, while its collaboration with China is the tightest. The United Kingdom and Australia also have strong academic partnerships. In contrast, communication among other countries could be strengthened. With more international cooperative research projects, the disturbance of local lifestyle habits and regional features on microbial characteristics will be eliminated, and the research findings can be more accurate and more widely applicable. Keyword co-occurrence network Keywords provide an overview of the research content in a highly refined and generalized way. Through connecting the keywords existing in the same article, a keyword co-occurrence network was formed (Figure 4), allowing us to further analyze the research topics and trends. The top 10 most frequently occurring keywords are gut microbiota (466), ankylosing-spondylitis (346), rheumatoid-arthritis (321), disease (272), gut (263), inflammatory bowel disease (252), t-cells (211), association (191), arthritis (188), and Crohn's disease (186). Each keyword is indicated by a node, the size of which shows its occurrence count. Based on this, keywords with close relations were grouped into one cluster, representing an aspect of the core research field, and nodes with the same color belong to the same cluster. In our analysis, five clusters were recognized. Cluster 1 (purple): Intestinal bacteria lead to rheumatic diseases through joint inflammation. Keywords include inflammation, arthritis, pathogenesis, association, etc. Cluster 2 (blue): Rheumatic diseases are closely associated with inflammatory bowel disease. Keywords include, among others, inflammatory bowel disease and Crohn's disease. Cluster 3 (red): Rheumatic diseases and intestinal microbiota interact mainly through immunoregulation. Cluster 4 (orange): Dysbiosis and decreased diversity exist in the intestine of patients with rheumatic diseases. Cluster 5 (green): Oral flora influences rheumatic diseases. [Figure 2 caption: Core sources according to Bradford's law. According to Bradford's law, one third of the articles in the field lie in the zone of core sources, which were published by six different sources.] Furthermore, four research dimensions were identified, including pathology, treatment, disease, and experiments. The pathology part was divided into macro and micro levels. The themes of the clusters and the research dimensions were cross-tabulated, resulting in a taxonomy presented in Supplementary Table S2. Strategic diagram Based on the keyword co-occurrence network, the centrality and density of each cluster were calculated and a strategic diagram was drawn, with each circle representing a cluster of the same color.
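A minimal Python sketch (using the python-igraph package) of the pipeline just described: count keyword co-occurrences per document, cluster the resulting network with the walktrap algorithm, and compute Callon-style centrality/density coordinates for a strategic diagram (density sums a cluster's internal edge weights, centrality its edges to other clusters). The keyword lists are hypothetical, and VOSviewer, CiteSpace, and bibliometrix each apply their own normalizations and scaling constants.

from collections import Counter
from itertools import combinations
import igraph as ig

# Hypothetical per-document keyword lists.
docs = [
    ["gut microbiota", "rheumatoid-arthritis", "inflammation"],
    ["gut microbiota", "inflammatory bowel disease", "t-cells"],
    ["rheumatoid-arthritis", "inflammation", "arthritis"],
    ["inflammatory bowel disease", "crohn's disease", "t-cells"],
]

# Each unordered pair of keywords sharing a document co-occurs once.
pairs = Counter()
for kws in docs:
    pairs.update(combinations(sorted(set(kws)), 2))

vertices = sorted({k for kws in docs for k in kws})
index = {k: i for i, k in enumerate(vertices)}
g = ig.Graph(n=len(vertices),
             edges=[(index[a], index[b]) for a, b in pairs])
g.vs["name"] = vertices
g.es["weight"] = [pairs[p] for p in pairs]

# Walktrap community detection yields the keyword clusters.
clustering = g.community_walktrap(weights="weight").as_clustering()

# Strategic-diagram coordinates, up to scaling constants.
for cid, members in enumerate(clustering):
    inside = set(members)
    density = sum(e["weight"] for e in g.es
                  if e.source in inside and e.target in inside)
    centrality = sum(e["weight"] for e in g.es
                     if (e.source in inside) != (e.target in inside))
    print(cid, [vertices[m] for m in members], centrality, density)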
Centrality indicates the degree of closeness of one topic with the others, while density indicates the grade of maturity of one theme. Through the strategic diagram, we can further understand the development situation and the future direction of each topic. As shown in Figure 5, there are two clusters located on the axes. The mechanism of interaction between intestinal flora and rheumatic diseases (circle red) achieves a low density, which means it is still underdeveloped and requires further research. For example, it is still under study how gut microbiota contribute to the pathogenesis of rheumatic disorders. The topic of intestinal-bacteria-caused inflammation in rheumatic diseases (circle purple) is of high centrality, which means it connects closely with other subjects, indicating its role as the basis of all relevant research. [Figure 3 caption: Country collaboration map. The red line indicates that there was collaboration between two countries. The more connections two countries have, the thicker the line is. The deeper the color of a country is, the more documents are published in this country.] Inflammation triggered by intestinal bacteria is thought to be a possible pathological basis for rheumatoid disorders (Toivanen, 2003; Maeda et al., 2016). As for the other three clusters distributed in different quadrants, the association between RD and IBD (circle blue) lies in the first quadrant with high centrality and density. As the core topic in this field, its research system has been formed and is close to completion. It is confirmed by a range of research that rheumatoid disorders, such as AS and SLE, could occur in one patient simultaneously or successively with IBD (Shor et al., 2016; Klingberg et al., 2017). In the second quadrant, dysbiosis and low bacterial diversity in patients with rheumatic disease (circle orange) is a relatively developed theme isolated from the other topics, achieving relatively low centrality and high density. The changes in gut flora are always observed in patients with rheumatoid disorders, but their cause remains to be further discussed. In the third quadrant, both the centrality and density of the cluster are low, indicating that the topic is disappearing. The shared inflammatory mechanism within RA and periodontal disease (circle green) was first stated by Snyderman and McCarty (1982), while the hypothesis referring to P. gingivalis was first published by Rosenstein et al. (2004). Subsequently, research on this topic sprang up in the 2000s and 2010s, and the mechanism of PD's effects on RA was explored at large. In the past decade, new discoveries have rarely been reported. Meanwhile, P. gingivalis has been the only oral bacterium associated with rheumatic diseases so far. Discussion Deducing from the above analysis, it is certain that rheumatic diseases are closely related to changes in gut flora, while particular changes in bacterial genera occur in specific rheumatic diseases. However, it is still unclear how the intestinal flora affects the occurrence and progression of RD, which needs to be further probed before it can be used in clinical treatments. By observing the keyword co-occurrence network carefully and reading the relevant articles, as inferred from cluster 1, intestinal bacteria can cause inflammation in joints, resulting in rheumatic diseases.
Germ-free SKG mice transplanted with fecal bacteria from RA patients and induced with zymosan would develop severe arthritis, with an increase in the number of CD4+ T cells and IL-17-producing T helper (Th17) cells in the intestine (Maeda et al., 2016). Another experiment demonstrated that infections with intestinal pathogens such as Yersinia, Salmonella, or Shigella may trigger reactive arthritis, as the catabolites of these pathogens can be transferred into the synovial tissues, generating local inflammation (Toivanen, 2003). Therefore, the intestinal flora and its inflammation play an important role in the pathogenesis of rheumatic disease, while gut and joint inflammation are closely related pathologically, as can also be concluded from cluster 2. In cluster 2, it is recognized that rheumatic diseases are closely associated with IBD, which includes ulcerative colitis (UC) and Crohn's disease (CD). Spondyloarthritis (SpA) is a group of chronic inflammatory rheumatic diseases that includes ankylosing spondylitis, psoriatic arthritis, juvenile spondyloarthritis (JSpA), and enteroarthritis (EA), and it can be classified as axial or peripheral. According to the statistics, up to 40% of IBD patients show articular manifestations, while many patients with rheumatic disease are disturbed by intestinal inflammation at the same time (Klingberg et al., 2017). [Figure 4 caption: Keyword co-occurrence network. Each dot represents a keyword. The higher the frequency it has, the bigger the dot is. Keywords showing up in the same document are wired together, forming five clusters of dots of different colors.] As reported, IBD and SpA have a common genetic predisposition and pathogenic mechanisms (Ashrafi et al., 2021). However, the exact mechanism of how the intestinal microbiome leads to the association between rheumatic disease and IBD is still under study. The hypotheses include increased intestinal permeability, impairment of immune tolerance, and migration of intestinal microbial components and activated immune cells to the joint (Yeoh et al., 2013). Depletion of Faecalibacterium prausnitzii has been found in various inflammatory disorders, including SpA and IBD, which is most likely an important pathogenic factor (Gill et al., 2015). Additionally, Th17 cells have been implicated in the pathogenesis, resulting in chronic tissue inflammation (Gaffen et al., 2014). As depicted in cluster 3, rheumatic diseases and intestinal microbiota are mainly interrelated through immunoregulation. When an inappropriate immune response occurs in chronic inflammation, microbial tolerance can be disrupted, resulting in the expansion or contraction of specific bacterial communities. In contrast, changes in gut microbiome composition can affect the immune system, leading to the occurrence of rheumatic diseases. This idea was supported by experiments on animal models, where HLA-B27 transgenic rats avoided arthritis in germ-free conditions, and inflammatory manifestations recurred after recolonization with certain bacteria (Taurog et al., 1999). [Figure 5 caption: Strategic diagram. Five clusters of keywords are distributed in a coordinate system according to the centrality and density of each cluster. Centrality indicates the degree of closeness of one topic with the others, while density means the grade of maturity of one theme.] Microbe-immune-RD crosstalk is linked through short-chain fatty acid (SCFA) production and aryl hydrocarbon receptor (AhR) activation. SCFAs, including acetate, propionate, and butyrate, are the products of dietary fiber fermentation by gut bacteria (Koh et al., 2016), which serve as the link
between the intestinal flora and the mucosal immune system (Arpaia and Rudensky, 2014; Zeng and Chi, 2015). It is reported that pentanoate acts as a regulator of immunometabolism by suppressing IL-17A production and inducing IL-10 expression in effector T cells (Luu et al., 2019). Additionally, SCFAs enhance intestinal barrier function by inducing regulatory T cell (Treg) differentiation in the colon (Smith et al., 2013; Kelly et al., 2015). In patients with RA, through high-fiber dietary intervention to elevate SCFA content, circulating Tregs and the Th1/Th17 ratio increased, while physical functioning and quality of life were improved significantly (Häger et al., 2019). The aryl hydrocarbon receptor plays a critical role as a modulator of immune function to balance intestinal immune tolerance and activation (Esser and Rannug, 2015; Hubbard et al., 2015). The activity of AhR is a determinant of Th17 and Treg cell differentiation, whose balance profoundly influences the immune response on the mucosal surface (Ehrlich et al., 2018). What's more, AhR helps to ensure the tolerance of intraepithelial T lymphocytes and innate lymphoid cells (ILCs) by driving the expression of IL-22 in order to maintain epithelial barrier integrity and microbial homeostasis (Behnsen et al., 2014). What's more, there is a kind of intestinal bacteria reported to have a powerful immune-modulating function. Segmented Filamentous Bacteria (SFB) are gram-positive Clostridium spp. colonizing the surface of epithelial cells in the small intestine (Davis and Savage, 1974). They can stimulate Th17 cell differentiation and promote surface immunoglobulin A (sIgA) secretion in the intestine, playing an important role in the induction of autoimmune diseases (Kumar et al., 2016). While germ-free mice rarely develop experimental autoimmune encephalomyelitis (EAE), those mono-colonized with SFB develop CNS inflammation, and symptoms can be relieved with pentanoate treatment (Luu et al., 2019). It is also found that in Ahr−/− mice, the abundance of SFB increases significantly while an enhanced inflammatory state is displayed within the intestine (Murray et al., 2016). Cluster 4 shows that dysbiosis and loss of microbial diversity both exist in the intestine of patients with rheumatic diseases. Disease-specific bacterial alterations and microbiota biodiversity restrictions were evidenced in spondyloarthritis. The number of observed species significantly decreased compared with healthy controls. Specifically, the abundance of Ruminococcus gnavus in SpA patients increased and positively correlated with SpA activity, which is consistent with the possible effect of R. gnavus on triggering or maintaining inflammatory status due to its mucolytic ability (Breban et al., 2017). Additionally, patients with SLE exhibited a decreased Firmicutes/Bacteroidetes ratio, while patients with RA showed an increase in Prevotella copri (Hevia et al., 2014; Pianta et al., 2017). As we can see in cluster 5, the occurrence of rheumatic diseases is related not only to intestinal microbiomes but also to the oral flora, whose diversity is second only to that of the gut (Dewhirst et al., 2010). Some specific strains can destroy the dynamic balance between the oral flora and the host, causing autoimmune disorders and resulting in rheumatic diseases (Whitmore and Lamont, 2014).
Periodontal disease (PD), for instance, is a common oral disease occurring in the periodontium, caused by a variety of bacterial infections (Kawar et al., 2011). Higher levels of disease activity have been found in patients with PD who later developed RA (Hashimoto et al., 2015). Porphyromonas gingivalis, a gram-negative anaerobic bacterium, is one of the important pathogenic bacterial species of PD (Wade, 2013). It has been confirmed to have a potential association with RA. Compared with healthy controls, RA patients had a higher titer of anti-P. gingivalis antibodies, and increased antibody concentration preceded the onset of joint symptoms (Johansson et al., 2016; Kharlamova et al., 2016). It is assumed that the existence of P. gingivalis contributes to the progression of RA by inducing the production of anti-citrullinated protein antibodies (ACPA), a specific marker for the diagnosis and prognosis of RA (Barra et al., 2013; Okada et al., 2013). P. gingivalis is the sole microorganism reported that can produce peptidylarginine deiminase (PAD), an enzyme that can catalyze the conversion of arginine residues into citrulline residues, playing an important role in the formation of ACPA (Mangat et al., 2010). New epitopes can be created by the citrullination of proteins in the mucosa, such as vimentin and keratin, causing the loss of immune tolerance and thus the presence of ACPA (Rosenstein et al., 2004). Citrullinated alpha-enolase peptide 1 (CEP-1), a major antigenic target of ACPA, shows sequence similarity and cross-reacts with enolase from P. gingivalis (Lundberg et al., 2008). Moreover, studies show that by controlling periodontal infection and treating PD, symptoms of RA began to subside (Erciyas et al., 2013), with a reduction in levels of TNF-α, ACPAs, and anti-P. gingivalis antibodies (Ortiz et al., 2009; Okada et al., 2013). Variations in gut flora could be utilized in disease prevention, diagnosis, and treatment. The diagnosis of rheumatic diseases based on bacteria seems promising. In a study of bacterial changes in patients with gout, a diagnosis model was established based on 17 genera of intestinal microbiota sampled from stool, which showed promising diagnostic sensitivity. The accuracy of this microbiota-based predictive model was 81.7% when applied to the experiment and control groups, and it reached 88.9% in the validation group, which is higher than diagnosis based on blood uric acid (71.3%) (Guo et al., 2016). Similarly, a diagnosis model for SLE based on five genera and two phyla of the oral microbiota achieved 95.3% accuracy in distinguishing SLE patients from healthy controls. Therefore, the microbiota-based predictive model may be a potential tool for screening and earlier diagnosis of rheumatic diseases, but it still requires verification in larger populations. As the effects of intestinal microbiota on diseases have become better understood, dietary interventions and microbiota transplantation are being used to treat dysbiosis. As shown in Supplementary Table S2, some strains of the genera Bifidobacterium and Lactobacillus commonly serve as probiotic supplements (Salazar et al., 2011); probiotics are live microorganisms that produce beneficial effects on the host's health when given in sufficient numbers (Food and Agriculture Organization of the United Nations, and World Health Organization, 2006).
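A minimal sketch of this kind of microbiota-based diagnostic model is shown below: a classifier trained on genus-level abundance features. The data are synthetic (so accuracy hovers near chance), and this is not the published 17-genus gout model, only an illustration of the approach.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genera = 120, 17
X = rng.random((n_samples, n_genera))   # relative abundances per genus
y = rng.integers(0, 2, n_samples)       # 1 = patient, 0 = healthy control

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f}")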
Several studies verified that administration of Lactobacillus casei for weeks improved symptoms and pathophysiological indices in RA patients (Alipour et al., 2014; Zamani et al., 2016), since patients with RA may have a decreased abundance of Lactobacillus spp., as shown in previous research (Davis and Savage, 1974). Nevertheless, more research needs to be conducted on the proper formulation and dose of the probiotics, as well as the appropriate frequency and course of the treatment. Prebiotics, defined as a group of food ingredients that benefit host health by regulating the microbiome, benefit our health by reducing oxidative stress, enhancing anti-inflammatory SCFA activity, and increasing IL-1Ra, IL-10, and IL-18 production (Peters et al., 2019). It is possible for prebiotics, such as oligosaccharides, dietary fiber, and resistant starch, to have a positive effect on treating rheumatic diseases. However, according to a number of reviews, the existing evidence is limited and inconclusive (Badsha, 2018; Macfarlane et al., 2018). Postbiotic intake may also be effective for treating rheumatic diseases. As represented by SCFAs, postbiotics are soluble factors secreted through the metabolic activities of living bacteria, or components released after bacterial death, with beneficial effects on the host. Previous studies have shown that direct SCFA intake or a high-fiber diet can inhibit bone loss and suppress experimental arthritis (Lucas et al., 2018; Zaiss et al., 2019). Additionally, fecal microbiota transplant (FMT) can potentially be a novel treatment for rheumatic diseases. By transferring stool from a healthy donor to a patient, it is hoped that dysbiosis can be controlled. This therapy was first used to treat Clostridium difficile infection and achieved a 90% success rate (Quraishi et al., 2017). As for rheumatic diseases, a few FMT trials are ongoing, involving psoriatic arthritis and systemic sclerosis (Kragsnaes et al., 2018; Hoffmann-Vold et al., 2021). However, it is very difficult to predict the efficacy due to the unique intestinal microbiota of every individual donor, and the procedure needs to be further standardized as far as donor status, sample preparation, dosage regimen, and therapeutic effect evaluation are concerned. All the treatments mentioned above remain in their infancy for rheumatic diseases, but their potential warrants attention, given the demand for novel approaches, along with the growing understanding of interactions between microbes and human hosts. In the future, more bioinformatics methods can be applied to relevant research, such as integrated regulatory network analysis of single-cell sequencing of individual bacteria and bulk sequencing of the gut microflora. With the combination of genomics, transcriptomics, and proteomics studies, the pathogenesis of rheumatic diseases, as well as the mechanism of the gut flora's influence on rheumatic diseases, can be further explored. As the first study that attempts to apply bibliometric analysis to the relationship between rheumatic diseases and gut microbiota, hopefully our study can bring new insights to our readers. Because hundreds of papers were analyzed scientifically, the findings are relatively trustworthy. Researchers may learn about the historical evolution and future direction of the topic and find a more suitable orientation for their further research.
For students and those who are new to this field, reading our article can provide a general understanding of the interaction between rheumatic diseases and gut flora, and they might find the sub-topic they want to explore further. However, there still exist some limitations in our study. First, since our data were obtained solely from the Web of Science Core Collection, other relevant articles were not included in our study. Second, by looking at the most cited documents, we can recognize the important articles and research directions in history, but it is difficult to identify the currently popular directions, since the citation counts of papers published in the last few years cannot compete with those published decades ago. Nowadays, the popularity of a study also depends on its social influence and degree of exposure to the general public, in addition to citation counts as a measure of academic recognition, and assessing this demands analyses beyond bibliometrics. Conclusion In this article, we examined the performance of the documents, sources, and countries in the field. We focused on five branches and summarized knowledge on each aspect of the interaction between rheumatic diseases and intestinal microbiota from a visualization and bibliometric perspective. The research trends are still growing, and more attention should be paid to the mechanisms and pathogenesis, such as the microbe-immune-RD crosstalk. We hope our study can provide new knowledge and fresh thinking for readers, and hopefully more research achievements can be applied to clinical practice with the purpose of expanding the methods of disease prevention, diagnosis, and treatment, for the maximum benefit of RD patients. Data availability statement Publicly available datasets were analyzed in this study. This data can be found here: Web of Science TM (WOS, http://www.webofknowledge.com). Author contributions All authors contributed to the conception and design, collected and/or assembled the data, carried out the data analysis, interpreted the data, wrote the manuscript, and approved the final manuscript. Funding This study was supported in part by the National Natural Science Foundation of China (81930057, 81772076, 81971836, and 81801620), CAMS Innovation Fund for Medical Sciences (2019-I2M-5-076), and the Deep Blue Talent Project of Naval Medical University, 234 Academic Climbing Programme of Changhai Hospital and Achievements Supportive Fund (2018-CGPZ-B03).
2023-01-11T15:56:55.501Z
2023-01-09T00:00:00.000
{ "year": 2022, "sha1": "718c646803d0e6f3bd8b3792537fa3bcd33bf384", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "718c646803d0e6f3bd8b3792537fa3bcd33bf384", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247470388
pes2o/s2orc
v3-fos-license
The Structural Characteristics of an Acidic Water-Soluble Polysaccharide from Bupleurum chinense DC and Its In Vivo Anti-Tumor Activity on H22 Tumor-Bearing Mice This study explored the preliminary structural characteristics and in vivo anti-tumor activity of an acidic water-soluble polysaccharide (BCP) separated and purified from Bupleurum chinense DC root. The preliminary structural characterization of BCP was established using UV, HPGPC, FT-IR, IC, NMR, SEM, and Congo red. The results showed BCP as an acidic polysaccharide with an average molecular weight of 2.01 × 10^3 kDa. Furthermore, we showed that BCP consists of rhamnose, arabinose, galactose, glucose, and galacturonic acid (with a molar ratio of 0.063:0.788:0.841:1:0.196) in both α- and β-type configurations. Using the H22 tumor-bearing mouse model, we assessed the anti-tumor activity of BCP in vivo. The results revealed the inhibitory effects of BCP on H22 tumor growth and its protective actions against tissue damage of the thymus and spleen in mice. In addition, the JC-1 and FITC-Annexin V/PI staining and the cell cycle analysis collectively showed that BCP is sufficient to induce apoptosis and cell cycle arrest of H22 hepatocarcinoma cells in a dose-dependent manner. The inhibitory effect of BCP on tumor growth was likely attributable to S phase arrest. Overall, our study presented significant anti-liver cancer profiles of BCP and its promising therapeutic potential as a safe and effective anti-tumor natural agent. Introduction Liver cancer is the sixth most common primary cancer and the third major cause of cancer-related death (8.3%), second only to lung cancer (11.4%) and colorectal cancer (10%) [1]. Liver cancer can be divided into three categories, of which the most prominent is hepatocellular carcinoma, accounting for 75% to 85% of cases [2]. Early liver cancer metastasizes easily and is difficult to diagnose. Besides, anti-cancer treatment commonly causes toxic side effects to healthy organs as well as the immune system. Moreover, the treatment cost is relatively high [3]. These problems ultimately lead to the low cure rate of liver cancer. Patients with advanced liver cancer generally opt for surgical resection combined with other chemotherapeutic treatments. However, the surgical risk is high, while a cure is rare [4]. As the number of patients with liver cancer continuously increases, researchers have committed to improving liver cancer treatment by exploring effective natural anti-liver cancer agents with low toxicity. Over the past few decades, natural polysaccharides have attracted increasing attention because of their multiple health care benefits. Polysaccharides are widely distributed in nature and are implicated in different disease states, pathological processes, and aging [5]. Studies have shown that plant polysaccharides from various sources present varied specificity and a large diversity in their chemical structures [6]. Moreover, the biological activities of polysaccharides have been shown to be highly dependent on the chemical structure [7].
Then, the isolation and purification of CBCP were conducted using the column chromatography method [22]. The crude polysaccharide was fully dissolved in distilled water, followed by dialysis (10 kDa cut-off) against flowing water and distilled water alternately for three days. The polysaccharide extracts were then freeze-dried and re-dissolved in deionized water to a final concentration of 20 mg/mL. The purification step was performed on a Sephadex G-150 gel column (1.6 × 60 cm). The eluent was distilled water, and the flow rate was set at 0.1 mL/min. An automatic fraction collector was used to collect one tube every 15 min. The phenol-sulfuric acid method was used to detect the polysaccharide content of the elution peaks. The main peak fraction was then collected and lyophilized to obtain the purified Bupleurum chinense DC polysaccharide (BCP) [23]. BCP Characterization 2.3.1. Chemical Components Analysis The total carbohydrate and reducing carbohydrate contents in BCP were measured with the phenol-sulfuric acid and 3,5-dinitrosalicylic acid (DNS) methods with D-glucose as the standard [24]. Using bovine serum albumin as the standard, the protein content in BCP was measured with the Coomassie brilliant blue (G-250) method [25]. The content of uronic acid in BCP was determined by the carbazole-sulfuric acid method with galacturonic acid as the standard [26]. UV Spectroscopy Analysis A BCP solution of 1 mg/mL was prepared in distilled water. Using a UV spectrophotometer (UV-25900PC, Japan), the absorption was measured by a full-wavelength scan over the 190-500 nm range [27]. Nucleic acid and protein contents were assessed from the absorption at 260 nm and 280 nm, respectively [28]. Molecular Weight Analysis by HPGPC The effects of different extraction temperatures (70 °C, 90 °C) and ethanol concentrations (60%, 70%, 80%) on the molecular weight of Bupleurum chinense DC polysaccharide were compared; six samples were prepared for analysis [29]. The average molecular weight of BCP was evaluated by HPGPC (Agilent Technologies, Palo Alto, CA, USA). The sample was prepared by weighing 1 mg BCP, dissolving it in 2 mL ultra-pure water, and passing the solution through a 0.22 µm membrane filter. T-series dextrans T-10 (1 × 10⁴ Da), T-40 (4 × 10⁴ Da), T-70 (7 × 10⁴ Da), T-110 (1.1 × 10⁵ Da), T-500 (5 × 10⁵ Da), and T-2000 (2 × 10⁶ Da) were used as standards [30]. The experimental conditions and instrument parameters were as follows: Agilent 1200 high-performance liquid chromatograph (Agilent Technologies, Palo Alto, CA, USA), TSK-gel G4000PWxl column (Agilent Technologies, Palo Alto, CA, USA), Refractive Index Detector (RID) (Agilent Technologies, Palo Alto, CA, USA), ultra-pure water as the mobile phase, flow rate 0.6 mL/min, column temperature 30 °C, detector temperature 35 °C, and injection volume 20 µL. The standard curve was established from the T-series standards, and the average molecular weight of BCP was calculated from this curve.
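To make the calibration step concrete, a minimal sketch of fitting such a log-linear standard curve is given below; the dextran molecular weights are those listed above, while the retention times are hypothetical placeholders, since the measured values are not reported here.

```python
import numpy as np

# Molecular weights (Da) of the T-series dextran standards, from the text.
mw = np.array([1e4, 4e4, 7e4, 1.1e5, 5e5, 2e6])
# Retention times (min) are hypothetical placeholders for illustration only.
rt = np.array([13.9, 12.0, 11.2, 10.6, 8.5, 6.6])

# Fit log10(Mw) as a linear function of retention time.
slope, intercept = np.polyfit(rt, np.log10(mw), 1)

def mw_from_rt(rt_sample):
    """Estimate molecular weight (Da) from an HPGPC retention time."""
    return 10 ** (slope * rt_sample + intercept)
```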
FT-IR Spectrum Analysis A 1 mg sample of BCP was weighed with 150 mg of dried potassium bromide powder, ground thoroughly in a mortar, poured into a mold, and pressed into a transparent pellet. It was scanned using an FT-IR spectrometer (Bruker VECTOR-22, Karlsruhe, Germany), and the infrared spectrum was recorded over the range of 400-4000 cm−1 [31]. Monosaccharide Composition Analysis by IC First, 5 mg of BCP was accurately weighed and added to 2 mol/L trifluoroacetic acid (TFA). After filling the tube with N2, the sample was hydrolyzed in an oil bath at 115 °C for three hours. The TFA was then removed: methanol was added to remove the excess TFA and the sample was dried under N2; this was repeated three times until all TFA was removed, giving the final degradation product. The obtained hydrolysate was dissolved in ultra-pure water and prepared as a 100 ppm solution [32]. Nine monosaccharides (rhamnose, arabinose, fucose, galactose, glucose, xylose, mannose, glucuronic acid, and galacturonic acid) were combined into a mixed standard with a concentration of 100 ppm. The monosaccharide composition of BCP was detected by ion chromatography (IC). The detector was a Dionex ICS2500; the analytical column was a PA20 (150 mm × 3 mm; column temperature 30 °C), the eluents were NaAc solution (100 mM) and NaOH solution (6 mM), the injection volume was 1 mL, and the flow rate was 0.45 mL/min [33]. NMR Spectroscopy Analysis A 60 mg sample of BCP was placed in an NMR tube and dissolved in 0.5 mL D2O. The 1H NMR and 13C NMR spectra were recorded at 25 °C on a Bruker AMX-500 NMR spectrometer. Scanning Electron Microscopy Analysis The microstructure of BCP was observed using scanning electron microscopy (SEM). The dried BCP powder was fixed on the stage with double-sided adhesive tape, the excess loose powder was blown off with a rubber suction bulb, and the sample was then sputtered with a layer of gold for conduction [34]. Experimental conditions were SEM (SU1510, Hitachi, Japan) at magnifications of 100×, 1000×, and 3500×. Thermal Analysis by TGA and DSC The thermal properties of BCP were analyzed by TGA (Perkin-Elmer, Waltham, MA, USA) and DSC (Q2000, TA, New Castle, PA, USA). In the TGA test, a 10 mg sample was placed in an aluminum pan and heated from 25 to 600 °C at a rate of 10 °C/min in an N2 atmosphere [35]. In the DSC test, a 10 mg sample was heated from 25 to 200 °C at a rate of 10 °C/min under an N2 atmosphere [36]. Congo Red Analysis Solutions of 0.5 mg/mL BCP and 50 µmol/L Congo red were prepared in advance. The two reagents were mixed with 1 mol/L NaOH solution to adjust the NaOH concentration [37]. After mixing, two sets of solutions with NaOH concentrations of 0, 0.1, 0.2, 0.3, 0.4, and 0.5 mol/L were obtained [38]. The derivatization reaction was carried out at 25 °C for 10 min, and full-wavelength scanning was carried out over the 400-600 nm range with an ultraviolet-visible spectrophotometer (UV-Vis). Cell Culture and Animals H22 hepatoma cells were purchased from the Shanghai Institute of Biological Sciences, Chinese Academy of Sciences. They were maintained in DEAE medium supplemented with 10% heat-inactivated fetal calf serum (FCS), 100 U/mL penicillin, and 100 µg/mL streptomycin, and were cultured and passaged at 37 °C in 5% CO2. Kunming mice (age: 7-8 weeks old, gender: female, body weight: 20 ± 2 g) were provided by SPF Biotechnology Co., Ltd. (Beijing, China). The license code is SCXK 2019-0010.
Mice feeding conditions were as follows: relative humidity 50 ± 5%, temperature 22 ± 2 °C, a 12 h light-dark cycle, and free access to water and feed. All animal experiment procedures followed the "Regulations on the Management of Laboratory Animals" (China). Establishment of the H22 Tumor-Bearing Mouse Model After the mice were fed freely for a week, fifty mice were randomly divided into five groups: group 1 (blank control), group 2 (model), group 3 (5-Fu), group 4 (low-dose BCP, LBCP), and group 5 (high-dose BCP, HBCP). The establishment of the H22 tumor-bearing mouse model is shown in Figure 2. Mice in groups 1-5 were given 0.2 mL normal saline by intragastric administration at the same time each day for 7 days. Then, mice in groups 2-5 were inoculated with 10⁶ H22 cells subcutaneously in the right forelimb armpit. Groups 1 and 2 were gavaged with 0.2 mL sterile normal saline every day, group 3 was intraperitoneally injected with 20 mg/kg 5-fluorouridine daily, group 4 was gavaged with 100 mg/kg BCP, and group 5 was gavaged with 300 mg/kg BCP (intragastric administration or intraperitoneal injection was performed at the same time every day, continuing for 14 days). The weights of the mice and their vital signs were recorded during the experiment. Solid Tumors and Immune Organ Indices Within 24 h after the last administration, all mice were weighed. After weighing, they were sacrificed by cervical dislocation and then dissected. The solid tumors, thymus, and spleen were removed from all mice. Blood stains were washed off with PBS buffer, the surface water was dried with filter paper, and the tissues were weighed. A vernier caliper was used to measure the tumor volume of the mice. The tumor inhibition rate (TIR) and immune organ indices were calculated by the following Formulas (1) and (2). TIR (%) = (average tumor weight of model group − average tumor weight of treated group)/average tumor weight of model group × 100 (1) Organ index (mg/g) = organ weight (g)/mouse weight (g) × 1000 (2) FITC-AnnexinV/PI Double Staining Detection The light-scattering properties of cells change during apoptosis. PI was used as a fluorescent dye; the tumor tissue was ground and passed through a cell sieve, and the cell suspension was prepared in PBS. Next, the cell density was diluted to 10⁶ cells/mL. The changes in cell light scattering were measured by flow cytometer to identify apoptotic cells, following the FITC-AnnexinV/PI kit protocol. Cell Cycle Distribution Detection The cell suspension was prepared as described in Section 2.4.3 and fixed with 70% alcohol overnight at 4 °C. The cell cycle distribution was detected following the manufacturer's instructions. After washing with PBS to remove the ethanol, 50 µL RNase (1 mg/mL) was added. The sample was then digested at room temperature for about 30 min, followed by the addition of 50 µL PI (500 µg/mL propidium iodide) for staining. The samples were gently mixed and incubated at 4 °C for 10 min, shielded from light. The variation in cell cycles was detected using a flow cytometer. Assay of Mitochondrial Membrane Potential (∆Ψm) JC-1 is a fluorescence probe commonly used to detect mitochondrial membrane potential (∆Ψm). Following the kit instructions, the prepared cell suspension (as described in Section 2.4.3) was blended with the JC-1 working medium at room temperature and incubated for 30 min. After centrifugation, the precipitates were washed repeatedly with PBS, and the ∆Ψm was then measured by flow cytometer.
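For illustration, the following minimal sketch implements Formulas (1) and (2) exactly as stated above; the numeric inputs are hypothetical placeholders, not data from this study.

```python
def tumor_inhibition_rate(model_tumor_wts, treated_tumor_wts):
    """Formula (1): TIR (%) based on group-average tumor weights."""
    model_avg = sum(model_tumor_wts) / len(model_tumor_wts)
    treated_avg = sum(treated_tumor_wts) / len(treated_tumor_wts)
    return (model_avg - treated_avg) / model_avg * 100

def organ_index(organ_weight_g, mouse_weight_g):
    """Formula (2): organ index in mg organ per g body weight."""
    return organ_weight_g * 1000 / mouse_weight_g

# Hypothetical example values (grams):
print(tumor_inhibition_rate([1.8, 2.0, 2.2], [1.1, 1.3, 1.2]))  # 40.0 (%)
print(organ_index(0.08, 25.0))                                   # 3.2 mg/g
```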
Statistical Analysis The experimental findings are expressed as mean ± standard deviation (X ± SD). The data were analyzed with SPSS software, and ANOVA was used to establish significant differences; p < 0.05 was considered statistically significant. The Basic Chemical Components and UV-Visible Spectrum Analysis of BCP The crude polysaccharide of Bupleurum chinense DC (CBCP) was obtained from dried Bupleurum chinense DC powder by preliminary extraction; the yield of CBCP was determined to be about 5.35%. To purify the product, protein and fat impurities were first removed, and small-molecule polysaccharides were further eliminated by dialysis. The subsequent purification was conducted on Sephadex G-150 gel columns. The experimental data showed that the total sugar content of BCP was about 93.58 ± 1.34%, the uronic acid content was 9.64 ± 0.35%, and the protein content was 0.65 ± 0.22%. These results preliminarily suggested BCP to be an acidic polysaccharide. In the ultraviolet-visible spectrum of BCP (Figure 3), the absence of absorption peaks at 260 nm and 280 nm indicated that BCP contained only trace amounts of protein and nucleic acid. HPGPC and FT-IR Analysis of BCP As shown in Figure S3A-C, the extraction temperature had no obvious effect on the molecular weight of Bupleurum chinense DC polysaccharide. However, with increasing alcohol concentration, the peak time of the HPGPC chromatogram shifted to the right and the molecular weight decreased. The HPGPC profile of BCP (Figure 4) showed a single homogeneous narrow peak, indicating that BCP was a homogeneous fraction. According to the standard curve obtained from the T-series standards, the regression equation was log Mw = −0.3173 Rt + 8.9584 (R² = 0.9923), from which the molecular weight of BCP was calculated as 2.01 × 10³ kDa (Rt: 8.368 min). The infrared spectrum of BCP (Figure 5) indicates that BCP is a typical polysaccharide [39]. There was a strong and broad band of O-H stretching vibration at 3423.74 cm−1, while the absorption peak at 2920.34 cm−1 was due to C-H stretching vibration. The absorption peaks at 1743.67 cm−1 and 1239.21 cm−1 were attributed to the presence of uronic acid, which supported the uronic acid determination described above. The strong absorption peak at 1617.48 cm−1 confirmed the characteristic asymmetric stretching of the C=O. Moreover, the absorption peaks at 1439.76 cm−1, 1371.18 cm−1, and 1331.59 cm−1 could be attributed to the variable-angle vibration of C-H. The signal peaks in the range of 800-1200 cm−1 constitute the carbohydrate fingerprint region, among which the peaks at 1100.46 cm−1, 1049.85 cm−1, and 1020.46 cm−1 were generated by the bending vibration of C-O [40]. The weak absorption peaks at 893.66 cm−1 and 831.61 cm−1 confirmed that BCP contained both β- and α-type glycosides [41].
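A one-line check of the reported molecular weight, using the regression equation and retention time quoted above, reproduces the stated value:

```python
# log10(Mw) = -0.3173 * Rt + 8.9584, with Rt = 8.368 min (values from the text)
mw_da = 10 ** (-0.3173 * 8.368 + 8.9584)
print(f"{mw_da:.3e} Da")   # ~2.01e6 Da, i.e., 2.01 x 10^3 kDa
```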
Figure 6A,B shows the ion chromatograms of the monosaccharide standards and the BCP hydrolysates. The analysis revealed the major components of BCP: rhamnose, arabinose, galactose, glucose, and galacturonic acid, with a molar ratio of 0.063:0.788:0.841:1:0.196. In recent years, researchers have isolated different kinds of polysaccharides from Bupleurum to explore their structure. The monosaccharide compositions of other Bupleurum polysaccharide analogues are compared with BCP in Table S3 [11,12,15,16,[42][43][44][45][46]. The detection of galacturonic acid in BCP was consistent with the previous determination of uronic acid in BCP by the carbazole-sulfuric acid method [47]. NMR Results of BCP To further characterize the BCP structure, we performed an NMR analysis. Glycosidic bonds with anomeric hydrogens in the α-configuration generate chemical shifts at δ 4.9-5.9 ppm, while the β-configuration causes shifts at δ 4.3-4.9 ppm [48][49][50]. As shown in Figure 7A, the chemical shifts of the 1H NMR spectrum at δ 3.25-5.30 ppm confirmed BCP as a typical polysaccharide with both α-type and β-type glycosidic bonds [51]. The solvent peak of D2O appeared at δ 4.70 ppm. The 1H NMR spectrum also showed that the H2 to H6 protons had chemical shifts in the range of δ 3.25 to δ 4.19 ppm [52,53]. As shown in Figure 7B, there were five signal peaks in the anomeric C-1 region at δ 99.62-107.44 ppm in the 13C spectrum, attributable to the α-type and β-type glycosidic bonds in BCP. The presence of uronic acid was confirmed once again by the chemical shift signal at δ 170.66 ppm. Together with previous studies, these data assign the 1H signal at δ 4.99 ppm and the 13C signal at δ 107.44 ppm to anomeric positions of the sugar residues.
The Molecular Morphology of BCP The microscopic surface morphology of BCP was studied using scanning electron microscopy at different magnifications: 100× (Figure 8A), 1000× (Figure 8B), and 3500× (Figure 8C). The results indicated that BCP was flaky or clastic with a rough surface and a predominantly layered structure, with sizes from 100 to 1300 microns [57]. Thermal Analysis of BCP As shown in Figure 9A, there were three distinct weight-loss stages in the TGA curve of BCP over the range of 25-600 °C. In the first stage, a mass loss of approximately 9.96% occurred near 65 °C, which was assigned to the evaporation of water in the BCP sample [58]. The second mass loss occurred over the temperature range of 200-450 °C. At this stage, the most pronounced mass loss, about 67.33%, could be attributed to thermal decomposition of the polysaccharide [59]. In the last stage, most of the remaining material was converted into ash and inorganic components over the temperature range of 450-600 °C [60]. The TGA results showed that BCP had good thermal stability below 200 °C. The DSC curve of BCP (Figure 9B) showed an endothermic peak at 67.7 °C, which could be attributed to the evaporation of water in the BCP sample [61]. When the temperature was increased to 200 °C, no new endothermic peak appeared. The DSC results showed that there was no depolymerization caused by glycosidic bond breaking in the range of 25-200 °C, and the polysaccharide structure was stable at this stage, which coincides with the TGA analysis.
Congo Red Analysis of BCP Congo red is an acidic dye that forms complexes with polysaccharides possessing a spatial helical structure. As shown in Figure 10, when the concentration of NaOH was in the range of 0-0.05 mol/L, the maximum absorption wavelength gradually increased until it peaked around 508 nm at 0.05 mol/L. Then, as the concentration of sodium hydroxide increased further, the maximum absorption wavelength decreased and stabilized. The maximum wavelength of the control group was significantly lower than that of the BCP + Congo red group, and a redshift (p < 0.05) was observed. These results suggested a triple-helix structure for BCP. Interestingly, previous literature has revealed that triple-helical polysaccharides can actively induce tumor cell apoptosis [62][63][64][65].
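The λmax-versus-NaOH readout used in this Congo red test is straightforward to extract programmatically; a minimal sketch follows, with a synthetic spectrum standing in for the measured 400-600 nm scan.

```python
import numpy as np

wavelengths = np.arange(400, 601)  # nm, matching the 400-600 nm scan range

def lambda_max(absorbance):
    """Return the wavelength of maximum absorbance for one scan."""
    return wavelengths[np.argmax(absorbance)]

# Synthetic example spectrum peaking near 508 nm (placeholder data only)
demo = np.exp(-((wavelengths - 508) ** 2) / (2 * 15.0 ** 2))
print(lambda_max(demo))  # 508
```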
Weight, Immune Organ Indices, and Tumor Inhibition Rate The death dates and survival rates of mice in each group are shown in Table 1. Differences in tolerance among individual mice resulted in some deaths during the period; however, in general, the survival rate of the mice was high, and the data were significant [66]. The weights of the mice are shown in Table 2. The mice were similar before inoculation with H22 hepatoma cells (22 ± 1 g), and there was no significant difference among the groups. The final weight gain of the model group was markedly different from that of the blank group (p < 0.05), suggesting that H22 tumor cells proliferated unchecked in the model group. The mice in the 5-Fu group had symptoms such as loss of appetite, sluggishness, and a depressed mental state. In contrast, the weight of mice in the LBCP and HBCP groups gradually approached that of mice in the blank group. The mental state of the mice was also significantly improved, indicating that BCP treatment of H22 tumor-bearing mice produced substantial beneficial effects without severe toxicity. The tumor inhibition rates calculated from the tumor weights and volumes are listed in Figure 11A. The weights and volumes of tumors in the 5-Fu, LBCP, and HBCP groups were lower than in the model group. Specifically, the tumor inhibition rate (TIR) was 57.53% in the 5-Fu group, 15.44% in the LBCP group, and 37.20% in the HBCP group. The results showed that 5-Fu had the most significant anti-tumor effects, but its side effects on the mice were also severe. In contrast, the tumor-inhibiting capacity of BCP increased in a dose-dependent manner without impairing the growth of the mice. The number of immune cells is highly correlated with the weights of the immune organs as well as with immunological functions [67]. The thymus is an essential lymphatic organ that stores and secretes immune cells and molecules [68]. As the largest immune organ, the spleen is the core of cellular and humoral immunity [69]. The thymus and spleen are essential immune organs, and their organ indexes can be calculated to evaluate the strength or weakness of immune functions. The organ indexes are listed in Figure 11B. The thymus index of the model group was markedly decreased (p < 0.05) compared to the blank group, suggesting that thymus atrophy accompanied tumor cell proliferation. The thymus index of the 5-Fu group was the lowest, even lower than that of the model group (p < 0.05). The thymus index increased from the LBCP group to the HBCP group compared to the model group, indicating that BCP induced a protective effect on the thymus. The spleens from the model group were remarkably swollen compared to the blank group (p < 0.05).
Although the spleen indexes in the LBCP and HBCP groups were higher than in the 5-Fu group, they were remarkably lower than in the model group (p < 0.05). The changes in immune organ indexes in the LBCP and HBCP groups illustrated that BCP could enhance the immunity of H22 tumor-bearing mice. These results strongly suggested that 5-Fu not only effectively inhibited the rapid proliferation of tumor cells but also damaged normal immune organs [70]. Altogether, BCP was shown to protect immune organs while suppressing tumor growth. Cell Apoptosis Analysis by FITC-AnnexinV/PI Propidium iodide (PI) is a commonly used red-fluorescent dye that stains the nucleus and chromosomes. Since PI is not permeable to live cells, it can selectively stain dead cells or cells in the middle or late stages of apoptosis [71]. Annexin V staining is another common method to detect apoptotic cells. During apoptosis, the anionic phosphatidylserine (PS) is translocated to the extracellular side of the plasma membrane and binds Annexin V conjugates with high affinity [72]. Dead and apoptotic cells (in the early and late stages of apoptosis) can be distinguished using FITC-AnnexinV/PI staining [73]. As illustrated in Figure 12, the apoptosis rate of the model group without BCP or 5-Fu treatment was 11.32% (early apoptosis rate 9.64%, late apoptosis rate 1.68%). After BCP treatment, the apoptosis rate of the LBCP group was 20.98% (early apoptosis rate 14.60%, late apoptosis rate 6.58%), and the apoptosis rate of the HBCP group reached 32.3% (early apoptosis rate 21.30%, late apoptosis rate 12.00%). The apoptosis rates of the LBCP and HBCP groups were remarkably increased compared to the model group (p < 0.05). The proportions of early and late apoptosis in solid tumor cells in the 5-Fu group were also remarkably increased compared to the model group (p < 0.05). These results showed that BCP induced apoptosis and inhibited the rapid proliferation of H22 hepatoma cells in a dose-dependent manner.
Cell Cycle Analysis The apoptosis-inducing ability of BCP was further explored by PI staining [74]. A complete cell cycle comprises five different phases: G0, G1, S, G2, and M [75]. It has been reported that cell cycle arrest at a specific stage can induce tumor cell apoptosis [76]. As shown in Figure 13, compared with the model group, the G0/G1 and G2/M phases in the BCP groups were significantly decreased (p < 0.05). The percentage of S-phase cells was increased by BCP treatment in the LBCP group (18.98%) and the HBCP group (38.69%) compared with the model group (15.97%). Therefore, we speculated that BCP could arrest solid tumor cells in the S phase of the cell cycle and induce tumor cell apoptosis in a dose-dependent manner. Mitochondrial Membrane Potential (MMP) Analysis In recent years, research has shown that the decrease of MMP is highly correlated with the apoptosis of cells under various influencing factors [77]. As shown in Figure 14, the fluorescence intensity of the untreated model group was as high as 94.30%. After BCP treatment, the signal peak gradually moved to the left with increasing BCP dose, resulting in fluorescence intensities of 84.30% in the LBCP group and 65.20% in the HBCP group. After 5-Fu treatment, the fluorescence intensity of the 5-Fu group was also remarkably decreased compared to the model group (p < 0.05). The results suggest that BCP might dose-dependently cause a decrease in mitochondrial membrane potential and ultimately lead to apoptosis.
Discussion In this paper, we extracted and purified an acidic water-soluble polysaccharide, BCP, from the root of Bupleurum chinense DC and investigated its structure and anti-tumor activity. We showed that BCP was an acidic water-soluble polysaccharide that contained only trace amounts of protein and nucleic acid. As is well known, pharmaceutical polysaccharides have rich biological activities because of their unique monosaccharide compositions and special structures. Bupleurum chinense DC is a commonly used drug product, mainly used to clear away heat and disperse fire [78]. Articles have also mentioned the role of Bupleurum chinense DC in soothing and protecting the liver [79]. However, as far as we know, this is the first time that a Bupleurum chinense DC polysaccharide with such a large molecular weight has been extracted and its basic structure and in vivo anti-liver cancer activity explored. This research may contribute to the development of functional food ingredients with the potential to treat liver cancer. The spleen, thymus, and other immune organ indexes can reflect the immune status of the body to a certain extent. After BCP treatment, the body's immune indexes were significantly improved compared with 5-Fu treatment. The results confirmed that chemotherapeutic drug treatment has irreversible damaging effects on the body's immune function, whereas BCP improved it. It has been reported that mitochondria play an irreplaceable role in cell apoptosis because they transmit and amplify death signals and are the central link between the upstream apoptosis pathway and downstream death pathways such as the caspase pathway [80]. There are three main ways in which mitochondria mediate apoptosis: the first is to destroy the antioxidant capacity of cells, the second is to block the production of ATP by breaking the electron chain, and the third is to affect the mitochondrial pathway of cell apoptosis [81]. In this research, BCP induced apoptosis of H22 tumor cells mainly through the third pathway; the cell cycle analysis, FITC-AnnexinV/PI staining, and JC-1 experiments collectively showed that BCP is sufficient to induce apoptosis and inhibit the rapid proliferation of H22 hepatocarcinoma cells in a dose-dependent manner. The inhibitory effect of BCP on tumor growth was likely attributable to cell cycle arrest (in the S phase) and the activation of mitochondria-related pathways. Conclusions In this paper, we extracted and purified an acidic water-soluble polysaccharide, BCP, from the root of Bupleurum chinense DC and investigated its structure and anti-tumor activity. We showed that BCP contained trace amounts of protein and nucleic acid. The average molecular weight of BCP was 2.01 × 10³ kDa, and it consisted of rhamnose, arabinose, galactose, glucose, and galacturonic acid in a molar ratio of 0.063:0.788:0.841:1:0.196, accompanied by α- and β-type glycosidic residues. On this basis, we also observed its microstructure and found that it has a layered, rough surface with a few fragments.
The results of the Congo red analysis showed that BCP has a triple-helix structure. Previous reports have shown that triple-helical polysaccharides are biological macromolecules with special chain structures in nature; they not only have high biological activity but also possess special molecular recognition abilities and functional properties unmatched by other polysaccharides [82]. More importantly, we revealed that BCP could induce apoptosis and inhibit the rapid proliferation of H22 hepatoma cells by arresting growing cells in the S phase of the cell cycle, in a dose-dependent manner and without causing severe toxicity. In conclusion, our paper exhibited the promising anti-liver cancer activity of BCP and shed light on the potential use of Bupleurum chinense DC as an effective and safe anti-liver cancer treatment.
2022-03-16T15:12:42.464Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "6bf5d185e4ec88e561098aeee50be8589d0d717e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/14/6/1119/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0a9ea2433e5feb67a6fb6e71d3c31b83a445ffe6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18406751
pes2o/s2orc
v3-fos-license
Dataset of TWIST1-regulated genes in the cranial mesoderm and a transcriptome comparison of cranial mesoderm and cranial neural crest This article contains data related to the research article entitled “Transcriptional targets of TWIST1 in the cranial mesoderm regulate cell-matrix interactions and mesenchyme maintenance” by Bildsoe et al. (2016) [1]. The data presented here are derived from: (1) a microarray-based comparison of sorted cranial mesoderm (CM) and cranial neural crest (CNC) cells from E9.5 mouse embryos; (2) comparisons of transcription profiles of head tissues from mouse embryos with a CM-specific loss-of-function of Twist1 and control mouse embryos collected at E8.5 and E9.5; (3) ChIP-seq using a TWIST1-specific monoclonal antibody with chromatin extracts from TWIST1-expressing MDCK cells, a model for a TWIST1-dependent mesenchymal state. Value of the data The data set provides an important reference for all studies investigating Twist1 function in the context of development and cancer. By comparing the transcriptomes of the cranial mesoderm and cranial neural crest, the data set provides a useful tool for studying the complex process of craniofacial development. The data set potentially contributes to the identification of genes that control the mesenchymal cell state in development and cancer. Isolation and analysis of CM and CNC populations Embryos were collected at E9.5 from Mesp1-Cre x Z/EG (for CM) and Wnt1-Cre x Z/EG (for CNC) crosses [2][3][4]. Heads were dissected below the first branchial arch, dissociated, and prepared for cell sorting as described [2]. Each sample yielded 4000-18,000 GFP-positive cells, which were stored at −80 °C. RNA was extracted and amplified using Illumina TotalPrep (Ambion) and labeled using MessageAmp II aRNA (Ambion) as described elsewhere [1]. Chromatin Immunoprecipitation ChIP was carried out using extracts of TWIST1-expressing MDCK cells [8]. Cross-linking in 1% formaldehyde, lysis, and sonication were carried out as described [1]. Extracts were pre-cleared by incubation with A/G magnetic beads (Dynal) for 3 h and incubated with an anti-TWIST1 monoclonal antibody (Abcam ab50887) overnight at 4 °C, before adding blocked beads and subsequent washing steps in RIPA buffer, RIPA/NaCl buffer, and LiCl buffer [1]. Sequencing was carried out by the Australian Genome Research Facility. Data analysis Raw microarray data were log2-transformed and quantile-normalized, and differential expression was analyzed using the Linear Models for Microarray Data (LIMMA) [9] implementation within GenePattern. Differentially expressed genes were filtered on a false discovery rate (FDR) of 0.05. For the ChIP-seq data, 50 bp reads were trimmed using Cutadapt [10], filtered by quality score, and aligned to the CanFam3 dog genome using bowtie2 [11] as described [1]. Peaks were called using MACS2 [12], and IDR analysis was performed using an IDR cut-off of 0.05. Peak coordinates from the two replicates were merged, using the most extreme start and end positions of the two replicates. The equivalent mouse genome (mm10) peak genomic locations were determined using Liftover (NCBI) and annotated using the R library ChIPseeker. Acknowledgments We thank the staff of the CMRI BioResources Unit for animal husbandry. Our work was supported by the National Health and Medical Research Council (NHMRC), Australia (Grant ID 1066832), the Australian Research Council, Australia (Grant DP 1094008), and Mr James Fairfax.
HB was supported by an NHMRC Biomedical Postgraduate Scholarship and a CMRI Scholarship; XCF is supported by a University of Sydney International Postgraduate Research Scholarship, an Australian Postgraduate Award, and a CMRI Scholarship; and AA was supported by a visiting studentship from CMRI and an internship from the University of Groningen, The Netherlands. PPLT is an NHMRC Senior Principal Research Fellow (Grant ID 1003100, 1110751). The Flow Cytometry Centre is supported by the Westmead Institute for Medical Research, Australia, the NHMRC of Australia, and the Cancer Institute, NSW, Australia. Transparency document. Supporting information Transparency data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.dib.2016.09.001.
2018-04-03T04:46:04.947Z
2016-09-12T00:00:00.000
{ "year": 2016, "sha1": "4519d60fcccb8be3787bbe61488eca490016d5d3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.dib.2016.09.001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ffd2b635a0305e903d9719178a05fb4ec251e276", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
18412110
pes2o/s2orc
v3-fos-license
Gender differences in hypoxic acclimatization in cyclooxygenase‐2‐deficient mice Abstract The aim of this study was to determine the effect of cyclooxygenase‐2 (COX‐2) gene deletion on the adaptive responses during prolonged moderate hypobaric hypoxia. Wild‐type (WT) and COX‐2 knockout (KO) mice of both genders (3 months old) were exposed to hypobaric hypoxia (~0.4 ATM) or normoxia for 21 days and brain capillary densities were determined. Hematocrit was measured at different time intervals; brain hypoxia‐inducible factor‐1α (HIF‐1α), angiopoietin 2 (Ang‐2), brain erythropoietin (EPO), and kidney EPO were measured under normoxic and hypoxic conditions. There were no gender differences in hypoxic acclimatization in the WT mice, and similar adaptive responses were observed in the female KO mice. However, the male KO mice exhibited progressive vulnerability to prolonged hypoxia. Compared to the WT and female KO mice, the male COX‐2 KO mice had a significantly lower survival rate and decreased erythropoietic and polycythemic responses, diminished cerebral angiogenesis, decreased brain accumulation of HIF‐1α, and attenuated upregulation of VEGF, EPO, and Ang‐2 during hypoxia. Our data suggest that there are physiologically important gender differences in hypoxic acclimatization in COX‐2‐deficient mice. The COX‐2 signaling pathway appears to be required for acclimatization in oxygen‐limiting environments only in males, whereas female COX‐2‐deficient mice may be able to access COX‐2‐independent mechanisms to achieve hypoxic acclimatization. Introduction The mammalian brain depends on the timely availability of both oxygen and glucose for normal physiologic function. When mammals are exposed to chronic hypoxia, systemic and central adaptational changes allow them to acclimatize to the low-oxygen environment (LaManna et al. 1992). The major long-lasting systemic responses include hyperventilation, loss of body weight, and polycythemia through upregulated erythropoiesis. In the CNS, the major response is increased capillary density by angiogenesis over a 3-week period of sustained exposure. Cyclooxygenase-2 (COX-2), an important marker of inflammation, is constitutively expressed in neurons, astrocytes, and endothelial cells in the brain under normal physiologic conditions (Kaufmann et al. 1996; Nogawa et al. 1997; Hirst et al. 1999) and is upregulated by hypoxia (Benderro and LaManna 2014). The enzymatic activity of COX-2 in endothelial cells catalyzes the conversion of arachidonic acid to prostaglandin E2 (PGE2), which promotes angiopoietin-2 (Ang-2) expression near sites of vascular remodeling, inducing angiogenesis during hypoxia (Xu and LaManna 2006; Dore-Duffy and LaManna 2007). The pathway of hypoxia-inducible factor (HIF)-mediated upregulation of vascular endothelial growth factor (VEGF) is also involved in hypoxia-induced angiogenesis (Wang and Semenza 1993; Levy et al. 1995). However, the interaction between these pathways remains unclear. We recently reported that the time courses of HIF-1α, VEGF, COX-2, and Ang-2 trended similarly during prolonged hypoxia (Benderro and LaManna 2014). It has been shown that hypoxia-induced COX-2 activation may augment PGE2 release, resulting in increased accumulation of HIF-1α, increased expression of VEGF, and enhanced angiogenesis (Casibang et al. 2001; Pai et al. 2001; Huang et al. 2005).
Reduction of angiogenesis was observed in the cornea of COX-2 KO mice in an interleukin-1β-induced angiogenesis model, suggesting that the suppression of angiogenesis by inhibition of COX-2 may act through inhibition of HIF-1α or VEGF expression (Liu et al. 1999; Jones et al. 2002). For example, NS-398, a COX-2-selective inhibitor, suppressed hypoxia-induced angiogenesis by reducing HIF-1α or inhibiting HIF-1 activity (Zhong et al. 2004) and markedly reduced hypoxia-induced VEGF production (Liu et al. 1999), an inhibitory effect that could be reversed by exogenous PGE2 (Liu et al. 1999). Yanni et al. (2010) have demonstrated that hypoxia induces COX-2, prostanoid production, and VEGF synthesis in retinal Müller cells, and that VEGF production is at least partially COX-2-dependent, suggesting that PGE2 mediates the VEGF response of Müller cells. However, some studies indicated that COX-2 expression has no significant correlation with VEGF expression (Kim et al. 2011), and that hypoxia-driven HIF-1α accumulation is independent of the COX-2 pathway (Stasinopoulos et al. 2009). On the other hand, although COX-2 is mainly essential for the induction of Ang-2 during hypoxia (Yao et al. 2011), evidence has shown that VEGF also promotes Ang-2 activities that favor vascular sprouting during hypoxia-induced angiogenesis (LaManna et al. 2006). The COX knockout mice have provided useful models for investigating the roles of the COX isoforms in normal physiology and various pathological states. Mice with genetic deletion of COX-2 have been used to investigate the effects of COX-2 deficiency on hypoxia-induced vascular responses such as angiogenesis (Yanni et al. 2010). However, no study to date has examined sex differences in COX-2-deficient mice during hypoxic acclimatization. Because there are examples of sex-dependent differences in the COX-2 KO mouse strain (Yang et al. 2005; Chillingworth et al. 2006; Robertson et al. 2006), it became apparent that it was necessary to study whether sex differences extended to the response to prolonged hypoxia. In this study, we investigated the role of COX-2 in hypoxic acclimatization responses using COX-2-deficient mice of both genders in comparison with wild-type mice. Animal preparation The experimental protocol used in this study was approved by the Institutional Animal Care and Use Committee (IACUC) at Case Western Reserve University. COX-2 heterozygous (±) males and females (B6;129S7-Ptgs2tm1Jed) were purchased from Jackson Laboratories (Bar Harbor, ME) and bred to produce both cyclooxygenase-2 wild-type (+/+, WT) and homozygous knockout (−/−, KO) mice from the same litter. Genotyping was performed using PCR analysis on DNA samples obtained from tail biopsies. All mice were housed and maintained at the Animal Resource Center on a 12:12-h light/dark diurnal cycle with unrestricted access to food and water. Experiments were conducted in 3-month-old WT and COX-2 KO mice of both genders. Chronic hypoxic exposure As previously reported (Benderro and LaManna 2011, 2014), hypoxic mice (WT or KO) were kept in a hypobaric chamber in which a constant pressure of 300 mmHg (~0.4 atm, equivalent to 8% normobaric oxygen at sea level) was maintained. Pressure was periodically (for a maximum of 1 h a day) returned to atmospheric for replenishment of food and water, cage cleaning, and body weight recording. Normoxic mice (littermates of the WT or KO mice) were kept next to the chamber to ensure identical ambient conditions.
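As a back-of-the-envelope check of the stated oxygen equivalence, the inspired O2 fraction scales with total pressure (water vapor pressure is neglected in this sketch):

```python
# Equivalent inspired O2 fraction at 300 mmHg, relative to 0.21 at 760 mmHg
fio2_equivalent = 0.21 * 300 / 760
print(f"{fio2_equivalent:.3f}")   # ~0.083, i.e., roughly 8% normobaric O2
```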
For capillary density analysis, brains of mice were collected after 21 days of normoxic or hypoxic exposure. In a separate group of animals, blood and tissue samples were collected on days 0, 1, 4, 7, 14, and 21 of the exposure for the measurement of hematocrit and for western blot analyses. Determination of cerebral capillary density Brain microvascular density was determined by immunohistochemical staining for glucose transporter-1 (GLUT-1) and counting the number of GLUT-1-positive capillaries per unit area (N/mm²), as described previously (Benderro and LaManna 2011, 2014). Mice were deeply anesthetized with isoflurane and perfused transcardially with PBS (pH 7.4) and 4% paraformaldehyde. Brains were removed and immersed in 4% paraformaldehyde overnight at 4°C. The brain samples were dehydrated through graded alcohols and embedded in paraffin. Coronal serial sections (5 µm) of frontal cortex (levels of bregma 0.98 mm to 0.38 mm) (Paxinos and Franklin 2003) were made on a microtome. Sections were deparaffinized, rehydrated, subjected to antigen retrieval at 90°C for 10 min in 0.1 mol/L sodium citrate buffer, and incubated with 3% hydrogen peroxide. Slides were blocked with 10% normal horse serum for 1 h and then incubated with primary antibodies (anti-GLUT-1, Santa Cruz, CA) at 4°C overnight. After three serial washes with 0.1 mol/L PBS-Tween solution, the secondary antibody (1:200, Vector Labs, Burlingame, CA) was applied. The slides were washed again, incubated in Vectastain ABC Elite reagent (Vector Labs, Burlingame, CA) for 30 min, and then developed using diaminobenzidine. After dehydration and coverslipping, images were taken with a SPOT digital camera in conjunction with a Nikon E600 Eclipse microscope. Images spanning the entire depth of the parietal cortex were captured at 200× optical magnification. Adobe Photoshop CS5 and ImageJ were used to count positively stained microvessels less than 20 µm in diameter to determine the capillary density (number per mm² of brain tissue). For each brain, at least four different GLUT-1-stained sections were averaged for quantification. Each quantified section was at least 50 µm apart from the subsequent quantified section. Statistical analysis All values are presented as mean ± SD. Statistical analyses were performed using SPSS v20.0 for Windows. Group comparisons were made by one-way analysis of variance (ANOVA) using Tukey's statistic. The comparison between any two groups was analyzed with a paired, two-tailed t-test. The survival analysis was performed using a Wilcoxon (Gehan) survival analysis. Significance was considered at the level of P < 0.05. Results Overall survival during hypoxic exposure The overall survival was monitored in mice exposed to hypoxia for 21 days (Fig. 1). All WT mice (male and female, n = 21 each) successfully survived the whole length of the 21-day hypoxic exposure. However, the male KO mice showed significantly reduced survival over the course of the exposure. Body weight change during hypoxia As seen in Figure 2, baseline body weights and the change in body weights during the 21-day normoxic or hypoxic exposure were measured in the age-matched WT (male: n = 39; female: n = 28) and KO mice (male: n = 24; female: n = 27). Both male and female KO mice had significantly lower body weights compared to their corresponding WT mice (grams, male: 25 ± 2 vs. 28 ± 1.8; female: 21 ± 1.6 vs. 24 ± 2.2; Fig. 2A). In the normoxic groups, the body weight profiles of the WT and KO groups were similar in both male and female mice (Fig. 2B).
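To illustrate the group comparisons described under Statistical analysis (for example, the WT-versus-KO body weight comparison above), a minimal sketch using SciPy is shown below; the sample values are hypothetical placeholders, and the paired, two-tailed t-test follows the Methods.

```python
import numpy as np
from scipy import stats

# Hypothetical body weights (grams); the real data are summarized in Fig. 2A.
wt_male = np.array([28.1, 27.5, 29.0, 28.4])
ko_male = np.array([25.2, 24.8, 25.9, 24.6])

# Paired, two-tailed t-test between two groups, as specified in the Methods.
t, p = stats.ttest_rel(wt_male, ko_male)
print(f"t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA when more than two groups are compared.
f, p_anova = stats.f_oneway(wt_male, ko_male, ko_male + 1.0)
```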
During the first week of hypoxia, all groups had a significant and similar magnitude of body weight loss (about 20%) compared to their corresponding prehypoxic baselines. After day 7, the WT mice (both male and female) and the female KO mice started to regain body weight gradually and reached about 83% of the normoxic baseline at day 21. However, the surviving male KO mice showed continuous body weight loss during the entire length of the hypoxic exposure; their body weight was only about 73% of the normoxic baseline at 21 days of hypoxia (Fig. 2B). Hematocrit change during hypoxia The time course of hematocrit change during hypoxia was measured in WT and KO mice (Fig. 3). In the male mice, the KO group had a slightly higher normoxic hematocrit compared to the WT group (%, 50 ± 3, n = 13 vs. 47 ± 4, n = 15, P < 0.05). During hypoxia, the hematocrit in the male and female WT mice increased gradually, reaching about 60% at day 4 and about 80% at day 21. However, the hematocrit in the male KO group reached only 55% at 4 days of hypoxia and was sustained at that level throughout the remainder of the exposure. In females, the WT and KO mice had similar baseline hematocrits and showed a similar trend of hematocrit change during hypoxia, with the hematocrit reaching about 80% at day 21 of exposure (Fig. 3). EPO expression in kidney The kidney EPO protein level was measured in the WT and KO mice under normoxic and 7-day hypoxic conditions (Fig. 4). The 7-day time point was chosen because we have previously reported that the peak elevation of kidney EPO occurs at 7 days of hypoxia during chronic hypoxia in mice (Benderro and LaManna 2013). In the male mice, the normoxic baseline EPO in the KO group was significantly higher (a 50% increase) than in the WT mice. Kidney EPO increased more than onefold at 7 days of hypoxia in the WT mice but remained unchanged in the KO mice. In females, the WT and KO mice had a similar EPO baseline, and the EPO level was significantly increased in both the WT and KO groups at 7 days of hypoxia. Microvascular density in cerebral cortex Cerebral capillary density (N/mm²) was identified by GLUT-1 immunostaining and quantified as described previously (Benderro and LaManna 2011). Expression of HIF-1α, VEGF, EPO, and Ang-2 in cerebral cortex Western blot analyses of HIF-1α, EPO, VEGF, and Ang-2 were performed in the cerebral cortex of WT and KO mice under normoxic and hypoxic conditions (Fig. 6). In the male mice, the KO group had significantly higher (~50% increase) normoxic baselines of HIF-1α, EPO, VEGF, and Ang-2 compared to the WT group. At 7 days of hypoxia, the levels of HIF-1α, EPO, VEGF, and Ang-2 were significantly increased by twofold to 2.5-fold in the WT group, but the male KO mice exhibited no hypoxia-induced upregulation of these proteins. In the females, however, the WT and KO mice had similar normoxic baselines of HIF-1α, EPO, VEGF, and Ang-2. At 7 days of hypoxia, the levels of the above proteins were increased significantly in both the WT and KO groups, and by a similar magnitude. Discussion In this study, we investigated the role of COX-2 in acclimatization to prolonged moderate hypoxia using COX-2-deficient mice of both genders. We found that there were no gender differences in hypoxic acclimatization in the WT mice; however, remarkable gender differences were observed in the COX-2-deficient mice.
The male KO mice exhibited progressive vulnerability to prolonged hypoxia, as demonstrated by decreased survival, a diminished erythropoietic response, and a lack of hypoxia-induced cerebral capillary angiogenesis during hypoxic exposure. Unexpectedly, the female KO mice demonstrated no deficiency in adaptive responses compared to the WT mice. The male KO mice had continuous body weight loss and death during the entire length of hypoxic exposure, suggesting that there was no critical survival window for the male KO mice. The progressive mortality during hypoxia in the male KO mice may also be related to the diminished erythropoietic and polycythemic responses. The hematologic acclimatization response, driven by kidney-produced erythropoietin, enables the maintenance of oxygen content in blood and improvement of tissue oxygenation despite decreased arterial partial pressure of O2 (PaO2) during hypoxia (Xu et al. 2004). We have previously reported that kidney EPO was elevated throughout the 21-day hypoxic period and peaked between 7 and 14 days (Benderro and LaManna 2013); hematocrit increased with continued hypoxia, doubling by 21 days (Benderro and LaManna 2011, 2013). The relatively elevated basal kidney EPO and hematocrit levels in the male KO mice may indicate a hypoxia-like state in the kidney tissue and a blunting of further hypoxic sensitivity, due to the deficiency of COX-2. Cerebral vascular remodeling through angiogenesis is the major CNS acclimatization response to prolonged hypoxia. Reduced angiogenesis was observed in the cornea of COX-2 KO mice in an interleukin-1β-induced angiogenesis model (Kuwano et al. 2004). It has been shown in vitro that hypoxia-induced VEGF production was diminished in COX-2 KO mouse retinal Müller cells (Yanni et al. 2010). In our study, we observed that the absence of COX-2 in males resulted in attenuated HIF-1α accumulation, response deficits in the downstream gene products EPO and VEGF during hypoxia, suppressed Ang-2 upregulation, and an overall failure to induce new capillary formation in the cerebral cortex, suggesting that the HIF-1α/VEGF pathway can be regulated by COX-2 but that the effect appears to be gender-dependent. The relatively higher baselines of HIF-1α, VEGF, EPO, and Ang-2 in the male KO mice may reflect a hypoxia-like state in these mice, mirroring the elevated baseline EPO level we observed in kidney tissue. In addition, the attenuated HIF-1α upregulation may also be responsible for the progressive vulnerability to prolonged hypoxia in the male KO mice. HIF-1α is a nuclear factor associated with neuroprotection via regulation of energy metabolism and is a key regulator of oxygen homeostasis during hypoxia (Semenza 1999). HIF-1α regulates genes related to glucose metabolism, angiogenesis, and erythropoiesis to promote cell survival (Bergeron et al. 2000; Semenza 2000; Kiriakidis et al. 2007). Gender differences caused by COX-2 deletion or inhibition have been observed in other studies. An elevated level of estrogen is positively associated with cerebral blood flow (Kastrup et al. 1999) and favors recovery following stroke (McCullough et al. 2001; Manwani et al. 2015). A recent human study has indicated that in females, hypoxia-mediated cerebral vasodilation is similar across early and late follicular phases and is not affected by COX inhibition (Peltonen et al. 2016).
It has been shown that male 129/COX-2−/− mice exhibit malignant hypertension, overt proteinuria, and severe renal abnormalities compared to milder defects in the female mice (Yang et al. 2005). In a model of arthritis and inflammatory pain assessing both disease severity and nociception, COX-2 knockout females exhibited reduced edema and joint destruction compared with male knockouts or wild types of either sex (Chillingworth et al. 2006). Genetic deletion of COX-2 may also have a sex-dependent effect on the maintenance of normal bone microarchitecture and density in mice. It has been shown that among 4-month-old COX-2 knockout mice, the females had normal bone geometry and trabecular microarchitecture, whereas the age-matched males exhibited a reduced bone volume fraction within the distal femoral metaphysis (Robertson et al. 2006). In humans, nonsteroidal anti-inflammatory drugs (NSAIDs), whose action has been linked to the inhibition of inducible COX-2 at sites of inflammation, may produce different responses in men and women; for example, ibuprofen has little effect on noninflammatory experimental pain in women, but is effective in men (Chillingworth et al. 2006). Aspirin, an NSAID that simultaneously inhibits the COX-1 and COX-2 isoforms (Warner and Mitchell 2004), has been shown to impair the wound healing process in female, but not male, mice. It was also shown that the expression of von Willebrand factor (vWF, an endothelial cell marker) and VEGF was the same in the female and male control groups, but was higher in the female aspirin-treated group compared with the male aspirin-treated group (dos Santos and Monte-Alto-Costa 2013). It has been reported that estrogen stimulates angiogenesis by a direct effect on endothelial cells during wound healing (Gilliver et al. 2008). A sex-dependent effect of COX-2 inhibition was also observed on cognitive performance in mice, suggesting that COX-2 activity may influence mnemonic processes in a sex-dependent manner (Guzman et al. 2009). These findings suggest the importance of studying subjects of both genders in rodent models of neurodegenerative disorders and of developing treatment strategies tailored to gender. In conclusion, we found that there were no gender differences in hypoxic acclimatization in the WT mice. While female COX-2-deficient mice successfully responded to hypoxic exposure in a manner similar to the WT mice, the male COX-2-deficient mice were incapable of physiological acclimatization. Our data suggest that there are physiologically important gender differences in hypoxic acclimatization in COX-2-deficient mice. The COX-2 signaling pathway appears to be required for successful hypoxic acclimatization in males, whereas female acclimatization appears to proceed independently of COX-2.
(Figure 5 caption: Capillary density as identified by GLUT-1-positive staining in brain cortex. Values are mean ± SD, n = 6 for each group. * indicates significant difference (t-test, P < 0.05) from the corresponding normoxic baseline; † indicates significant difference (t-test, P < 0.05) from the WT group at the same exposure condition.)
(Figure 6 caption: Representative western blot analysis of normoxic control and 7-day hypoxia in male and female mice, respectively. Lower panel: optical density ratios of the respective proteins normalized to β-tubulin or β-actin. Values are mean ± SD, n = 4–8 for each group. * indicates significant difference (t-test, P < 0.05) from the corresponding normoxic baseline; † indicates significant difference (t-test, P < 0.05) from the WT group at the same exposure condition.)
Epidemiology of distal radius fractures in children and adults during the COVID-19 pandemic – a two-center study Background Distal radius fractures (DRFs) constitute 15–21% of all fractures. There are no detailed data on the possible changes in the epidemiology and treatment of DRFs in children and adults during the COVID-19 pandemic. The purpose of our study was a comprehensive assessment of the impact of the COVID-19 pandemic on distal radius fracture (DRF) epidemiology, including both children and adults and various fracture fixation methods, in two large trauma centers in Poland. Methods This study compared the medical data on the treatment of distal radius fractures in Poland in two periods: the period of the COVID-19 pandemic (from March 15 to October 15, 2020) and the corresponding period prior to the pandemic (from March 15 to October 15, 2019). We assessed detailed data from two trauma centers for pediatric and adult patients. Outpatients seeking medical attention at emergency departments and inpatients undergoing surgery at trauma-orthopedic wards were evaluated. We compared epidemiological data, demographic data, treatment type, and hospital stay duration. Results The total number of patients hospitalized due to DRF during the pandemic was 180, which was 15.1% lower than that from the pre-pandemic period (212). In the case of adult patients, the total number of those hospitalized during the pandemic decreased significantly (by 22%), from 132 to 103 patients. Analysis of the individual treatment methods revealed that the number of adults who underwent conservative treatment was significantly lower (by 30.3%) in the period of the COVID-19 pandemic, falling from 119 to 83 patients. Compared to the 13 patients from the pre-pandemic period, the number of surgically treated adults increased significantly (by 53.8%) to 20 patients. Our analyses showed hospitalizations of surgically treated adults to be shorter by 12.7% during the pandemic, with the corresponding hospitalizations of surgically treated pediatric patients shorter by 11.5%. Conclusions Our study showed a significant impact of the COVID-19 pandemic on the epidemiology and treatment of DRFs in children and adults. We found decreased numbers of pediatric and adult patients with DRFs during the COVID-19 pandemic. The pandemic brought an increase in the number of children, and a significant increase in the number of adults, undergoing surgical treatment for DRFs, a decrease in mean patient age, significantly shorter hospital stays, and an increased number of men with DRFs. The COVID-19 pandemic altered healthcare throughout the world in 2020 [13–26]. Although the causative coronavirus (SARS-CoV-2) can infect both adults and children, the majority of children have a mild or asymptomatic course of the disease [13]. The COVID-19 pandemic has impeded general access to specialist care and altered daily clinical practice and admission routines (in both emergency and primary-care settings) [14–17, 19, 21, 22]. A large group of hospitals changed their working routines to a special crisis mode, cancelling or limiting planned admissions [16, 17, 22, 26]. Some physicians contracted COVID-19, and some shortened their office hours to limit the risk of infection. Some trauma and orthopedic units and some emergency wards also altered their admission criteria [16, 17, 22].
Despite having suffered an injury, some patients, particularly the elderly and those with comorbidities, have avoided seeking medical help at emergency or trauma-orthopedic wards for fear of contracting COVID-19 [13, 22]. In order to increase the safety of medical personnel and reduce the number of admitted and operated patients, some hospitals broadened the indications for conservative treatment of injuries and postponed surgery [25, 26]. DRF can be treated surgically with various stabilization methods and non-surgically in a plaster cast [2, 6, 7, 9–11, 25]. The British Orthopaedic Association (BOA) recommends non-surgical treatment of DRF during the COVID-19 pandemic. The guidelines accept, in some patients, the possibility of complications and deformity that will require deferred surgery [25]. The goal of treating injuries during the COVID-19 pandemic is rapid and safe treatment [25]. The reduction in the number of patients with DRF treated surgically during the COVID-19 pandemic may have consequences in the form of more complications in the future [25, 26]. Surgical treatment of DRF is indicated especially in young patients with displaced, multifragmentary, and intra-articular fractures. In addition to the above factors, the epidemiology of DRF during the COVID-19 pandemic may be influenced by other factors, such as the medical and bioethical framework, the surgeon, and hospital policy (confounding factors) [25, 26]. There have been few studies evaluating the important issue of the impact of the COVID-19 pandemic on DRF epidemiology in children and adults [13, 15, 25]. Nabian et al. presented an epidemiologic model of pediatric injuries during the COVID-19 pandemic based on data from a tertiary trauma center in Iran [13]. Those authors observed an increased proportion of DRFs in children (from 28% of all fractures in the pre-pandemic period to 30% of all fractures during the COVID-19 pandemic) [13]. Nabian reported no changes in either the mean age of patients or the male-to-female patient ratio during the COVID-19 pandemic [13]. Bram et al. assessed the effects of the COVID-19 pandemic on the epidemiology of injuries in pediatric patients [15]. According to their report, the total number of fractures decreased by 61%, there were no changes in the male-to-female ratio, and the mean age of patients decreased from 9.4 to 7.5 years [15]. Baawa-Ameyaw reported that 54% of 92 patients with DRF managed nonoperatively during the COVID-19 pandemic had indications for operative management [25]. The sparse available literature on the effect of the COVID-19 pandemic on DRF epidemiology focuses on pediatric patients and has a limited scope, since the authors typically assess the number of patients with specific fracture locations presenting at emergency departments [13, 15]. Nonetheless, there are no detailed data on the possible changes in the epidemiology and treatment of DRFs in children and adults. Such data may prove useful in planning resources. The research question was whether the COVID-19 pandemic influenced the epidemiology and treatment of DRFs in children and adults. The purpose of our study was a comprehensive assessment of the impact of the COVID-19 pandemic on DRF epidemiology, including both children and adults and various fracture fixation methods, in two large trauma centers in Poland. Methods Distal radius fracture epidemiology was evaluated in two large trauma centers for pediatric and adult patients in Poland.
Outpatients seeking medical attention at emergency departments and inpatients undergoing surgery at trauma-orthopedic wards were evaluated. To collect data for the study, the medical databases of all patients treated in the two trauma centers were analyzed. The analysis covered the period of the COVID-19 pandemic in Poland (from March 15 to October 15, 2020), and the obtained data were compared with those from the corresponding period prior to the COVID-19 pandemic (from March 15 to October 15, 2019). The inclusion criteria were a history of DRF in the period between Mar. 15, 2020, and Oct. 15, 2020, or between Mar. 15, 2019, and Oct. 15, 2019; available medical records; and available demographic data. The study was approved by the local review board. All procedures were followed in accordance with relevant guidelines. The analysis of the two databases from the two large trauma centers in Poland included the total number of DRF patients; the total number of pediatric patients (< 18 years old) with a DRF; the total number of adult patients (> 18 years old) with a DRF; the total number of pediatric patients with a DRF who received conservative treatment (plaster cast); the total number of adults with a DRF who received conservative treatment (plaster cast); the total number of pediatric patients with a DRF who received surgical treatment; the proportion of pediatric patients who received surgical treatment (total number of all pediatric patients treated surgically / total number of all pediatric patients × 100%); the total number of adults with a DRF who underwent surgical treatment; the proportion of adults who underwent surgical treatment (total number of adults who underwent surgical treatment / total number of adults × 100%); the total number of adults with a DRF who underwent surgical treatment involving open reduction and volar plate fixation; the total number of adults with a DRF who underwent surgical treatment involving closed reduction and Kirschner wire fixation; the mean age of all patients; the mean age of all adult patients; the mean age of all pediatric patients; the mean hospital stay duration of surgically treated adults; the mean hospital stay duration of surgically treated pediatric patients; and the male-to-female patient ratio. All these data for the period of the COVID-19 pandemic in Poland (from Mar. 15 to Oct. 15, 2020) were compared with the corresponding data for the period prior to the COVID-19 pandemic in Poland (from Mar. 15 to Oct. 15, 2019). The obtained data were statistically analyzed using the Statistica 13.1 program. Pearson's chi-square test was used to assess the relationship between the frequency distributions of two variables. Student's t-tests were used to compare the continuous variables between the two groups (during and before the pandemic). The adopted significance level was α = 0.05. Results The results are presented in Table 1. Our analysis showed that the total number of patients hospitalized due to DRF during the pandemic (in 2020) was 15.1% lower than that from the pre-pandemic period (in 2019). In the case of adult patients, the total number of those hospitalized during the pandemic decreased significantly (by 22%), from 132 to 103 patients (p = 0.01253; Fig. 1). In the case of patients under the age of 18 years, the total number of those hospitalized decreased by 3.8%.
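As a rough illustration of these comparisons in code: the percentage changes can be recomputed directly from the reported counts, while the 2×2 table below is our own construction from the adult treatment numbers, so the printed p-value need not reproduce the paper's exact figures.

```python
# Recomputing the reported percentage changes and sketching the
# chi-square comparison (counts taken from the Results; the 2x2 table
# construction is an assumption, not the authors' published analysis).
from scipy import stats

def pct_change(before: int, after: int) -> float:
    return (after - before) / before * 100.0

print(f"all DRF patients:    {pct_change(212, 180):+.1f}%")  # -15.1%
print(f"hospitalized adults: {pct_change(132, 103):+.1f}%")  # -22.0%
print(f"conservative adults: {pct_change(119, 83):+.1f}%")   # -30.3%
print(f"surgical adults:     {pct_change(13, 20):+.1f}%")    # +53.8%

#                     2019  2020
# conservative         119    83
# surgical              13    20
chi2, p, dof, _ = stats.chi2_contingency([[119, 83], [13, 20]],
                                         correction=False)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Continuous variables (e.g., hospital-stay duration) were compared
# with Student's t-test; the per-patient stays here are hypothetical.
t, p = stats.ttest_ind([2.5, 3.1, 2.8, 3.3, 2.9],
                       [2.2, 2.6, 2.4, 2.9, 2.5])
print(f"t = {t:.2f}, p = {p:.4f} (alpha = 0.05)")
```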
Analysis of the individual treatment methods revealed that the number of adults who underwent conservative treatment was significantly lower (by 30.3%) in the period of the COVID-19 pandemic (p = 0.0…; Fig. 2). The number of pediatric patients who underwent conservative treatment decreased somewhat less dramatically (by 7.2%) from the first evaluated period to the second. Compared to the figures from the pre-pandemic period, the number of surgically treated adults was significantly higher (by 53.8%; p = 0.03618; Fig. 2), while the number of surgically treated pediatric patients was higher by 18.2% in 2020. The parameter that increased the most (by 275%) in comparison to its pre-pandemic value was the number of adults who underwent surgical treatment with volar plate fixation. Interestingly, the number of patients treated with Kirschner wires remained unchanged. What also draws attention is the lower mean age of patients hospitalized due to a DRF during the pandemic (37 years and 2 months) in comparison to the pre-pandemic mean age of hospitalized DRF patients (40 years and 1 month). In adult patients, the mean age dropped from 58 years to 57 years and 9 months, while in children it dropped from 10 years and 6 months to 9 years and 8 months. Our analyses showed hospitalizations of surgically treated adults to be shorter by 12.7% during the pandemic (from 2.92 days in 2019 to 2.55 days in 2020), with the corresponding hospitalizations of surgically treated pediatric patients significantly shorter by 11.5% (from 3.82 days in 2019 to 3.38 days in 2020; p = 0.03857; Fig. 3). Discussion There have been no studies comprehensively evaluating the important issue of the epidemiology and treatment of DRFs in adult and pediatric patients during the COVID-19 pandemic. Baawa-Ameyaw reported that 54% of 92 patients with DRF managed nonoperatively during the COVID-19 pandemic had indications for operative management [25]. Nabian et al. presented an epidemiologic model of pediatric injuries during the COVID-19 pandemic based on data from a tertiary trauma center in Iran [13]. Those authors observed an increased proportion of DRFs in children (from 28% of all fractures in the pre-pandemic period to 30% of all fractures during the COVID-19 pandemic) [13]. Nabian reported no changes in either the mean age of patients or the male-to-female patient ratio during the COVID-19 pandemic [13]. Bram et al. assessed the effects of the COVID-19 pandemic on the epidemiology of injuries in pediatric patients [15]. According to their report, the total number of fractures decreased by 61%, there were no changes in the male-to-female ratio, and the mean age of patients decreased from 9.4 to 7.5 years [15]. Bram et al. noted a decreased incidence of injuries due to sports and other outdoor activities, with an increased incidence of high-energy injuries due to falls from trampolines and bicycles [15]. Hashmi reported a 50% decrease in both elective and emergency admissions to orthopedic wards, with no changes in either the mean age or male-to-female ratio of patients in the COVID-19 pandemic period in comparison with the relevant pre-pandemic figures [16]. Yu et al. observed a 42% decrease in the number of patients with fractures seen at one of the orthopedic wards in China during the COVID-19 epidemic [17]. Poggetti et al. reported a 28.6% decrease in the number of patients undergoing surgery due to hand and wrist trauma in one of the Italian hospitals during the COVID-19 pandemic [18].
In one of the Turkish hospitals, the total number of fractures recorded during the COVID-19 pandemic was 61.6% lower than the number of fractures recorded in 2019 [20]. Our retrospective study showed reduced numbers of pediatric (by 3.8%) and adult patients (by 22%) referred to emergency departments due to DRFs during the COVID-19 pandemic. Similar, or even more pronounced, decreases over the COVID-19 pandemic period (compared to the period prior to the COVID-19 pandemic) have been reported in other countries (19–69%) [13, 15–22, 24]. The reduced numbers of DRF-associated hospitalizations can be explained by lockdown measures, limited exercise opportunities, and the necessity to stay indoors during the pandemic. As a result of having to stay at home under adult supervision, children and adolescents under the age of 18 years were less prone to suffer injuries, which are typically exercise-related in this age group; hence the less pronounced difference observed in this age group. Young adults limited their exercise by staying at home; this made them less prone to injuries and falls, which are the most common mechanism of DRFs. The elderly stayed mostly at home due to fears of infection. Some of them did not seek medical attention despite their injury and let it heal without any orthopedic intervention. We expected to see a trend towards lower numbers of DRF patients due to social distancing measures and instances of self-quarantine, which altered people's behaviors and lifestyles [13, 18–20, 22–24]. Approximately 25% of injuries in children are due to sports [15]. Sports activities and training sessions were mostly canceled, with schools, kindergartens, and nurseries partly or completely closed. The amount of traffic also declined dramatically due to the COVID-19 pandemic. These factors, as well as the patients' and their guardians' fears of infection during a visit to the hospital, affected the epidemiology and treatment of DRFs in children and adults [13–17, 23, 24]. Some authors reported falling numbers of traffic accidents, sports-related injuries, and outdoor injuries during the pandemic, which would lead to lower numbers of high-energy fractures [14, 17–19, 22, 24], whereas the number of low-energy fractures remained unchanged [14, 18, 19]. On the other hand, the COVID-19 pandemic period saw increased numbers of indoor injuries and alcohol-related injuries [14, 17–19, 22, 24]. Evaluating the individual treatment methods, we assumed that most high-energy fractures would require surgical treatment, with most low-energy injuries managed conservatively. Turgut et al. observed an 89% increase in the proportion of children undergoing surgery due to fractures during the COVID-19 pandemic, with no corresponding increase in adults undergoing surgical treatment [20]. Pichard reported an increased proportion of patients undergoing surgery (from 36.9% in 2019 to 51.2% during the COVID-19 pandemic) [24]. We observed increased numbers of patients undergoing surgical treatment during the pandemic (an 18.2% increase in the number of children and a 53.8% increase in the number of adults). This may have been a result of the increased numbers of high-energy injuries due to falls from trampolines or bicycles [15].
The 30.3% decrease in the number of adult patients receiving conservative treatment can be attributed to limited exercise and recreational activities, whereas the dramatic 98% increase in the proportion of surgically treated adults can be attributed to work and renovations done around the house during the lockdown period and the maintained high level of activity on the part of construction businesses, which were exempt from lockdown restrictions. This can best be seen in the analysis of the number of patients treated with a volar plate: these were mostly patients with high-energy injuries due to falls from a height associated with work done in or around the house and with construction activities. Our analysis revealed a 7.2% decrease in the mean age of patients during the pandemic, which may have been a result of elderly people's fears of visiting an emergency department during the pandemic and more effective measures to prevent injuries in the elderly. On the other hand, Lv et al. reported a significant increase in the mean age of patients presenting with fractures during the pandemic in China [23]. The lower mean age of patients hospitalized due to DRF can be attributed to the nature of the SARS-CoV-2 virus, which causes more severe disease in the elderly [13]. Because of their fear of infection, elderly patients submitted more readily to lockdown restrictions. Moreover, some of the oldest patients never reached a hospital due to fears of infection and allowed their fractures to heal without seeking medical attention. Our analyses were based on data collected from hospital departments performing elective and emergency procedures. The observed shorter mean hospital stays of patients undergoing surgery during the lockdown period were a result of elective procedures being cancelled, patients with injuries being treated more speedily, and hospital stays being limited to a minimum due to the epidemiological situation in hospitals. This also applied to pediatric patients, who were hospitalized together with an adult guardian. The increased number of DRFs in males in comparison to females can be attributed to uninterrupted work involving physical labor in the construction, mining, and smelting industries, despite lockdown restrictions elsewhere. The increased male-to-female ratio among DRF patients is also associated with the differences in the type of work done by men and women. Jobs requiring physical labor, which tend to be more commonly held by men, were exempt from lockdown restrictions, which increased the proportion of men who incurred injuries. Moreover, men who self-quarantined at home remained actively involved in work around the house and in renovations. The women who stayed at home were more likely to engage in low-energy activities, such as cleaning or childcare, which are less traumatic and less likely to cause DRFs. A limitation of our work may be the fact that the epidemiology of DRF during the COVID-19 pandemic may have been influenced by other factors, such as the medical and bioethical framework, the surgeon, and hospital policy (confounding factors) [25, 26]. Our study showed the effect of the COVID-19 pandemic on the epidemiology of DRFs in adults and children. The general tendency for DRFs to occur decreased during the pandemic; however, the observed increase in the proportion of patients who underwent surgical treatment may be an important warning sign, indicating that the pandemic was responsible for an increased number of high-energy DRFs requiring surgery.
The results of our analysis can be useful in taking appropriate measures and securing the resources necessary for the treatment of DRFs, especially since the COVID-19 pandemic saw increased numbers of DRF patients undergoing surgical treatment. Moreover, this study suggests the need to inform men about the risk of DRFs, as evidenced by the dramatic increase in the number of male patients with this type of injury. Conclusions Our study showed a significant impact of the COVID-19 pandemic on the epidemiology and treatment of DRFs in children and adults. We found decreased numbers of pediatric and adult patients with DRFs who were referred to trauma centers during the COVID-19 pandemic. The COVID-19 pandemic brought an increase in the number of children, and a significant increase in the number of adults, undergoing surgical treatment for DRFs, and a decrease in the number of conservatively treated patients among both adults and children. We also noted a significant increase in the number of adults treated with a volar plate, a decrease in mean patient age, significantly shorter hospital stays in children and adults undergoing surgical treatment, and an increased number of men with DRFs.
Interpretive and Analytical Approaches to Aerial Survey in Archaeology This article discusses two contrasting approaches to archaeological survey using aerial reconnaissance. A more traditional strategy is to look for interesting spots in the landscape with a highly concentrated archaeological record. These are usually called "sites". This concept is still used in everyday practice, despite its long-standing problematic character. The opposing approach divides the studied region into analytical units, which are sampled for evidence in a standardized manner and only then is the collected information subsequently interpreted. Varying densities of recorded facts across space are now studied rather than the binary categories of "on-site" and "off-site". In Czech archaeology, this operational difference has often been classified as the "synthesizing" vs. "analytical" research methodology. This debate has been ongoing for quite some time in the context of field-walking and surface collection of archaeological finds. This text examines an analogous problem in the field of aerial survey, where it seems to be closely connected to another long-standing methodological and terminological discussion: the comparative usefulness of "oblique vs. vertical" aerial photography.
Introduction The concept of an analytical approach to archaeological surface collection has been associated with processual archaeology and its emphasis on sampling and the quantitative aspects of the archaeological record (Redman 1987; Schiffer et al. 1978). These research strategies have been systematically rethought, enriched with a number of new observations and improvements and, most importantly, brought into practice in central European archaeology by M. Kuna (e.g. 1994; 1998; 2000; 2004). This has occurred in such a convincing manner that within one or two decades they have become an integral part of the archaeological methodology. Given the statistical evaluation of data and the study of their spatial properties in Geographical Information Systems (GIS), the discipline has gained a highly effective tool which has significantly advanced our understanding of the past (Gojda 2004a; Neustupný 1998; Neustupný, Venclová 2000; Smrž et al. 2011; Šmejda 2003). The core of this article, which entirely subscribes to the inspiration mentioned above, considers the idea that aerial survey in archaeology can be understood in terms of both an analytical and a synthesizing (interpretive) methodology, similar to that of surface survey by fieldwalking (Šmejda 2009). In an analogous way to the development of the techniques of surface collection of artefacts, in the field of aerial survey we can also observe a movement from the effort to identify individual spots of interest in the landscape to a systematic study of entire landscape transects. In this more recent approach, space is understood as a continuum that is sampled in a certain controlled routine, the results and interpretations being gained later, independently of the process of data collection. The former approach, the discovery of new "sites" through data collection, is a synthesizing method because the interpretation of empirical observations is conducted immediately during field-walking, while the latter is an analytical approach because only the analysis (analytical decomposition) of the area being investigated is conducted in the field. In order to discuss these strategies in the context of aerial reconnaissance, it is first necessary to compare the properties of the two elementary categories of aerial photographs, i.e. so-called "oblique" and "vertical" photographs (Doneus 2000). They have traditionally been perceived as standing in mutual opposition to each other as regards their technical parameters and practical utility. The aim of this paper is to evaluate oblique and vertical aerial photographs in terms of the two above-mentioned survey strategies: the synthesizing and the analytical approach.
Oblique and vertical aerial photographs As their names suggest, the main criterion for distinguishing between vertical and oblique photographs is the orientation of the camera at the moment when the photograph is taken. Verticals are produced when the camera's optical axis is oriented downwards, perpendicular to the horizontal plane. For practical reasons, a small deviation (usually less than 3 degrees) of the optical axis from the plumb line is generally tolerated. Obliques are captured by cameras that are tilted significantly from the vertical. We speak about "low obliques" when the optical axis is tilted no more than 30 degrees from the vertical, and "high obliques" that typically point around 60 degrees away from the vertical. In vertical photographs, the nadir (i.e. the point on the ground directly below the camera at the time of exposure) is located approximately in their geometrical centre (principal point), while in the case of high obliques the position of the nadir typically lies outside the photo frame (Figure 1). Another significant difference is that verticals are often taken in so-called stereo pairs (subsequent frames have a significant overlap of their ground coverage), enabling a "three-dimensional" perception during visual analysis and offering advanced possibilities of precision mapping (Risbøl et al. 2015). Obliques are very rarely obtained in this way, their analytical potential thus being, technically speaking, more limited. Verticals and obliques can be compared based on practical considerations of data collection and processing, but this is not necessarily the most important point for a full appreciation of the actual potential of aerial photographs. No image taken by an optical sensor with a central projection of rays (all conventional cameras) captures the surface of the Earth truly vertically (orthogonally), thus making what we understand as a plan or map. This radial distortion of an image due to the vertical ruggedness of the terrain is explained in Figure 2. There is no simple transformation relationship between the central projection of any photo and the orthogonal map or plan. Correction of this type of distortion can be computed from a series of overlapping images, in which the apparent dislocation of points on the individual photographs can be explained by differences in their elevation. If stereo pairs of photographs are not available, a digital elevation model of the terrain can help to re-project a photo onto a horizontal plane (Hampton 1978). Adjustments of the horizontal positions of captured data must therefore always be computed for both verticals and obliques. For this type of processing, vertical photographs are much less problematic, because the perspective distortion as well as displacement due to elevation variances generally increase with the distance from the nadir. In vertical photos, these positional shifts as well as the distortions of shapes and lengths are smaller and more regularly distributed across the photo frame than is the case in high-angle obliques. However, it is clear that all photographs require a geometric correction before they are used for planimetry (measurements of distances, angles and areas). Therefore it might seem more suitable to link the difference between "oblique" and "vertical" imaging more generally with the strategy of data collecting (synthesising/interpretive vs. analytical), rather than with the type and orientation of the camera.
(Figure 2 caption: The concept of radial distortion of an image due to vertical ruggedness of the terrain on an aerial photograph. There is no simple transformation relationship between the central projection of the photo and the orthogonal map or plan. The correction of the distortion can be derived from a series of overlapping images, in which the apparent dislocation of points a, b, c on the individual photographs can be explained by differences in their elevation. Using the method of intersecting radial lines, their correct locations A, B, C on the map can be derived; after Hampton 1978, Figure 17.)
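The radial (relief) displacement described above can be quantified with the standard photogrammetric relation d = r·h/H, which is textbook knowledge rather than a formula given in this article; a minimal sketch with illustrative numbers:

```python
# Relief displacement on a vertical photograph: a point of terrain
# height h above the datum, imaged at radial distance r from the
# principal point, is displaced outward by d = r * h / H, where H is
# the flying height above the datum. (Standard photogrammetric
# relation; the numbers below are illustrative, not from the article.)
def relief_displacement(r_mm: float, h_m: float, H_m: float) -> float:
    """Radial image displacement in mm on a vertical photo."""
    return r_mm * h_m / H_m

# A hilltop 50 m above the datum, imaged 90 mm from the principal
# point on a photo taken from 1,520 m above the datum:
d = relief_displacement(r_mm=90, h_m=50, H_m=1520)
print(f"displacement = {d:.2f} mm on the photo")    # ~2.96 mm

# At a nominal scale of 1:10,000 this corresponds to a ground error of
scale = 10_000
print(f"= {d * scale / 1000:.1f} m on the ground")  # ~29.6 m
```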
(Figure 1 caption: Footprints of oblique (A) and vertical (B) aerial photographs covering an archaeological site. The crosses mark the nadirs of individual photographs, i.e. the points directly below the camera positions. Note that they are located outside the covered area in the case of obliques, while they coincide with the centres of vertical photos; after Hampton 1978, Figure 9.)
Scale of photographs Archaeologists, and especially those insufficiently acquainted with vertical aerial photos, sometimes highlight the issue that the nominal scale of available vertical images is smaller than that required for fine-grained studies of archaeological heritage and that no details are visible. In many cases this is true of imagery taken for purposes other than archaeology, but in principle there should be no dramatic differences in this respect between vertical and oblique photographs, and this can be easily exemplified. To better understand this, we can consider imaging on film to illustrate the principle, even though film has largely been replaced by digital technology nowadays (Verhoeven 2007). We know that the nominal scale of an image on film depends on the ratio between flight height (altitude above the terrain) and the focal length of the camera. When photographing the landscape using a common hand-held camera with a standard lens of focal length f=50 mm from an altitude of 500 m, we get an image on the negative at a scale of 1:10,000 (500/0.05). For hand-held oblique photography, the use of a lens with a significantly longer focal length (a so-called telephoto lens) is mostly impractical in aerial prospection, because such an arrangement can capture only small views and the image is too enlarged to be held steadily in the viewfinder because of the constant vibrations and turbulence affecting the aircraft and its crew during the flight. In addition, the necessity to use a fast shutter speed in order to avoid blurred images calls for a wide aperture, which may in some cases decrease the sharpness of certain parts of the picture. Hence in oblique photography we can hardly obtain a significantly higher nominal scale than the value stated above. Obtaining vertical images at approximately this same scale is not particularly a problem (for example, with the once common wide-angle aerial camera with f=152 mm from an altitude of 1,520 m above the ground). To give an example from central Europe, a limited number of verticals with this scale are available in the military archive of the Czech Republic in Dobruška (Břoušek, Laža 2006), although more frequently we can find photos there with a nominal scale ranging from 1:20,000 to 1:30,000. Nevertheless, large format negatives (18×18 cm or more recently 23×23 cm) can be enlarged without any significant loss of detail. Thus, we can conclude that in the end, we are working with enlarged oblique and vertical photographs of comparable scales (see also Doneus 1997; Palmer 2005, 103–104). Furthermore, the scale of oblique photographs dramatically decreases from the foreground to the background of the image, which, together with the distortion of shapes due to perspective, usually leaves parts of oblique photographs useless for detailed analysis.
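The scale arithmetic above can be made concrete with a short sketch; the flight heights and focal lengths come from the text, while the ground-coverage figure for a 23×23 cm frame is our added illustration:

```python
# Nominal photo scale 1:N, where N = flying height / focal length
# (same units). Values follow the article's examples; the ground
# coverage of the large 23 x 23 cm frame is an added illustration.
def scale_number(flying_height_m: float, focal_length_mm: float) -> float:
    """Return N for a nominal scale of 1:N."""
    return flying_height_m / (focal_length_mm / 1000.0)

# Hand-held oblique: f = 50 mm from 500 m above the terrain -> 1:10,000
print(f"1:{scale_number(500, 50):,.0f}")

# Wide-angle aerial camera: f = 152 mm from 1,520 m -> also 1:10,000
print(f"1:{scale_number(1520, 152):,.0f}")

# Ground coverage of a 23 x 23 cm negative at a nominal 1:20,000:
frame_cm = 23
n = 20_000
print(f"{frame_cm * n / 100 / 1000:.1f} km per side")  # 4.6 km
```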
Oblique photography using medium or large format film still has the advantage that we can get a greater enlargement of the details on the positive compared to vertical imaging from a greater height, but today most oblique photographs are probably taken on small format film or, increasingly, by a digital sensor, the resolution of which has only slowly been improved to approach the standard common in analogue photography. Past studies have concluded that the necessary density of data was not present in the primary digital record due to obvious technical limits and that digital imaging could not at that time surpass traditional film (Owen 2006; Verhoeven 2007). However, the emphasis on a completely digital workflow is strong, and there are also further benefits stemming from the use of digital technology for data collection, which will likely dictate future trends (Heipke et al. 2006). Small, mass-produced cameras and their lenses are usually designed for entirely different purposes and are subject to other standards than those used in professional aerial cameras; this has an impact on the quality of the data collected, together with differences in the types of media used for data storage (Scollar et al. 1990, 78–121). However, archaeologists may tolerate greater imprecision in the ground location of the objects of their interest (up to several metres or more, depending on circumstances), in contrast to some highly specialised geodetic applications. So the distortion caused by using non-calibrated cheap cameras is usually not considered fatal. Interpretive vs. analytical approach to aerial reconnaissance Oblique photography is essentially a very selective type of data collection, and thus it represents a synthesizing approach to archaeological survey. It is usually undertaken from a small plane with the aim of photographically recording, in the best way possible, the features that attract the attention of the observer and seem significant for an archaeological understanding of the targeted area at the time of flyover (Figure 3). This means that during the flight a synthesis of empirical observation is taking place, leading to the decision of whether or not permanent documentation should be made. The whole process is rather quick, executed simultaneously with ongoing navigation and communication with the pilot, and often in somewhat stressful conditions caused by air turbulence, sharp sunlight or nearby air traffic. Undoubtedly, this work environment is not ideal for any kind of in-depth analysis, and researchers working in this field generally agree that usually only the most distinctive features are recorded, where their presence is expected beforehand and the attention of the researcher is focused on this specific task. Apart from these, there may be many other remains of past human activities in the studied region, overlooked or neglected for a number of reasons, that escape recording even after relatively intensive survey periods (Gojda 2003, 71–72). This potential weakness of a synthesising strategy of aerial survey is well documented by the following example: in Scotland, where aerial prospecting has been used since the 1940s, the elongated prehistoric enclosures of the cursus type remained virtually unknown until the 1970s, probably because their presence was not expected there and attention had been paid mostly to the remains dating from the Roman period. During the last forty years, their number has increased to several dozens thanks to prospecting redirected toward their identification and partially
due to the reinterpretation of previously-taken images. A new analysis of these older photos has shown that they had already captured these features, but their original interpretation was incorrect (Brophy, Cowley 2005). Vertical imaging represents a very different method, one which can be labelled as an "analytical approach" to data collecting. In this case, photographing a selected area is undertaken systematically (sticking to chosen technical parameters, such as the flight height and flight path, the focal length of the camera, the number of photographs necessary to cover the extent of the studied landscape transect, etc.) and represents a spatially continuous record of the landscape under comparable conditions (light, state of vegetation, etc.) over the whole surveyed area (Figure 3). The specialist analysis, synthesis of information and its interpretation take place after the flight has ended. It is then possible to apply various approaches to the evaluation of the captured images, their analysis can be done independently by several people, it is plausible to apply a range of technical tools that complement or enhance the human senses, and so forth. In this way we can repeatedly look at large portions of the landscape in detail and in conditions which are more favourable to the identification of even relatively unpronounced and previously unexpected features. Photographs taken by vertically-oriented aerial reconnaissance cameras also represent a technically standardised visual record of the Earth's surface (Figure 4), which requires less geometric transformation for precise mapping compared to oblique images. Here we touch on the field of photogrammetry, the science of accurate measurement of real-world objects by means of their photographic representation. Advanced methodologies of photo orthorectification have been developed and are improving further (Schenk 2004); the necessary procedures and tools are today incorporated in a number of available computer programs (Heipke 1996; Pavelka 1999; Lillesand et al. 2015). It is undeniable that working with vertical images requires longer practical experience, tools for image enlargement and, if possible, also technical equipment for stereoscopic perception. When these requirements are met, practically all vertical images can offer valuable information for research into historic landscapes, even though they may not necessarily be represented by the spectacular cropmarks that the public (as well as many professionals) primarily link with aerial archaeology. A great many features of cultural heritage are preserved in the landscape. Most of these remain unnoticed or unrecorded due to several factors: researchers follow other objectives set by their work agenda, or the current theoretical paradigms of the relevant disciplines do not favour particular types of sites, or researchers preferably target other (better-preserved and more visually-striking) study sites, and, last but not least, there is a lack of knowledge about the technical possibilities of studying these features.
Comparing the two categories of aerial photographs from just one other perspective, we cannot assume that the relatively few practitioners involved in oblique aerial photography in archaeology could record a truly representative sample of all the possible classes of remains and features of historic landscapes. Luckily, there are archives of vertical images in which the topography of the entire country is unselectively recorded and by which its appearance and condition have been periodically monitored. Of course, some problems are encountered in forested areas that are difficult to handle by conventional photography. In this type of environment, airborne laser scanning (ALS/LIDAR) today provides invaluable help to an ever-increasing degree, because laser pulses are, to a certain extent, able to penetrate the vegetation canopy (Doneus et al. 2008; Gojda, John 2013; Trier et al. 2015). Despite these technological advances, aerial photographs still remain an important source of information, as their potential has so far been evaluated only to a minor degree. An extensive amount of as-yet unexamined data persists in archives, as can be illustrated by the examples of deserted field systems, mining areas, sheep houses, churches, roads, tracks, etc. (Figure 5 gives an example illustrating this point). Another important advantage of vertical images is that in permanently inhabited areas they always include enough control points for precision mapping. Thus we can avoid the issue that is quite typical for oblique images, in which the effort to capture the archaeological features in detail often (paradoxically) makes their precise metric analysis impossible because of the lack of reliable ground control points (Gojda 2004b, 100; Palmer 2005). Even in cases when oblique aerial photos capture targeted sites in their wider landscape context (which is not always the case), they unavoidably suffer from great perspective distortion of the ground representation that, combined with other types of problems (e.g. terrain height variation, optical distortions of common cameras), complicates their geometrically correct transcription onto a plan or map. Hence mapping conducted on the basis of oblique photographs meets significant difficulties, especially in regions with larger areas of fields (and a lower density of ground control points), a situation quite common in many countries with intensive agriculture. Oblique imaging undoubtedly has its own advantages, as it offers great flexibility in the choice of timing and viewing angles, as well as many options of working with light to capture a visually impressive picture. It also allows for experimenting with various photographic techniques, media for data storage, etc.
Images obtained in this way usually illustrate the general appearance of historic sites to the wider public well (Figure 6), because they do not need to be magnified and they are taken from a perspective close to everyday human experience (Grady 2000, 25-26). Individual images of this kind represent a detailed view of concentrated archaeological information. This can be both an advantage and a disadvantage, depending on the situation. The clear and detailed definition of features is certainly a positive aspect of oblique photography, but the absence of a wider landscape context (which would facilitate interpretation and mapping by referencing the local topography) may be seen as its major disadvantage. By contrast, the landscape setting of any place of interest is naturally available for study in every systematic vertical survey (Doneus 2000, 36).

To sum up this section, the use of oblique images in accurate mapping remains rather difficult, and the technical problems rapidly worsen with increasing deviation of the image's principal axis from the vertical. The optical parameters of the cameras used are usually not known with sufficient precision, and quite often rigorous photogrammetric approaches cannot be applied. Despite these problems, methods of their (approximate) rectification are available and in many cases can be considered sufficient; a minimal sketch of such a rectification is given after this section. Apart from the option to georeference raster images in present-day geographic information systems (e.g. ESRI ArcGIS), there are computer programs created specifically for aerial archaeology, the best known of which are probably Aerial and AirPhoto (Haigh 1995; Scollar 1998). These are relatively cheap and easy to use, and even have the option to take the terrain height into account to achieve higher precision, but they necessarily lack the versatility and extensive user support of the more generally-oriented commercial products aimed at larger markets.

A significant and virtually unavoidable problem with many oblique images is the aforementioned absence of a sufficient number of suitable ground control points, which would allow accurate transcription of the image into a map. In principle, the use of stereoscopic analysis is possible for oblique images and would undoubtedly aid their interpretation and transcription, but photographs are usually not taken for this purpose and do not meet the requirements for three-dimensional viewing.
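As a concrete illustration of the approximate rectification just discussed, the sketch below maps an oblique photograph onto a plan using a projective transformation (homography) estimated from ground control points, which is essentially the simplest mode of operation of programs such as AirPhoto. It assumes the OpenCV library; the file name and all pixel and map coordinates are hypothetical placeholders, and the method ignores terrain relief, so it is only adequate over reasonably flat ground.

```python
# Minimal sketch of rectifying an oblique aerial photograph onto a map grid
# using ground control points (GCPs). A plane-to-plane homography is the
# approximation used when rigorous photogrammetry cannot be applied.
import cv2
import numpy as np

image = cv2.imread("oblique_photo.jpg")  # hypothetical input file

# Pixel positions of at least four GCPs identified in the photograph...
pixel_pts = np.array([[512, 1800], [3650, 1750], [3400, 950], [700, 1020]],
                     dtype=np.float32)
# ...and the same points in the output map frame (here scaled so that one
# output pixel corresponds to a fixed ground distance, e.g. 0.5 m).
map_pts = np.array([[0, 1000], [1500, 1000], [1500, 0], [0, 0]],
                   dtype=np.float32)

# With more than four points, findHomography fits a least-squares solution;
# RANSAC additionally rejects mismatched control points.
H, _ = cv2.findHomography(pixel_pts, map_pts, cv2.RANSAC)

# Resample the photograph into the map frame.
rectified = cv2.warpPerspective(image, H, (1500, 1000))
cv2.imwrite("rectified_plan.png", rectified)
```

With a dense enough set of GCPs this plane-based transcription is often sufficient for lowland fields, which is why the density of control points, rather than the mathematics, is usually the limiting factor discussed above.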
Other aspects that we need to consider, along with the basic technical parameters of the images, are the temporal and financial constraints. Outsourced vertical imagery, delivered by a national agency or a commercial company, can in many ways save researchers' time, allowing them to dedicate their resources to specialised tasks linked directly to their archaeological or cultural heritage agenda. Prospecting with oblique imaging, if it is to be done effectively, requires a virtually constant readiness to make use of suitable meteorological and light conditions and to exchange costly flight time for documentation of the highest possible quality. For a systematic project in a chosen region, many flight hours during the year are necessary, to which we must add not only the significant time spent preparing every flight but also the subsequent recording of individual flight routes and the observations made, archiving of photographs, etc. It is estimated that each hour spent in the air requires four hours (or up to eight hours, depending on circumstances) after landing dedicated to these post-flight activities (Musson 1995, 63). The specialist analysis of the images and work with the collected data can only commence afterwards.

Timeliness of aerial survey

One of the common complaints one can hear in the archaeological community about vertical photographs is that most of them were commissioned for non-archaeological purposes (e.g. for cartography or civil engineering) and therefore lack properties that are optimal for archaeological prospection. This raises the question of how we can generally determine which season of the year is best for archaeological reconnaissance. There is no doubt that a heavy preference for one particular set of conditions, typically those producing well-developed cropmarks (Riley 1979), can certainly be productive despite its one-sidedness (Gojda 2004b, 76), but at the same time, such an empirical bias causes many other indicators of anthropogenic landscape features to be neglected. Valuable information can certainly be captured in any season of the year. This is not to deny that having more historical photos taken in the main season for cropmarks would be helpful (for central Europe, this period is from May to July). We simply have to accept the fact that these are rarely available in the existing archives, because the peak of the vegetative cycle is not an ideal time for the collection of cartographic data: fully grown vegetation covers the terrain, causing difficulties for the extraction of elevation models using the photogrammetric method (Fabris, Pesci 2005).

Until the second half of the twentieth century, the campaigns of aerial photo reconnaissance in the Czech Republic were often undertaken when most fields had been harvested (i.e.
from high summer to the beginning of autumn). Precise dates, apart from the year, were usually not recorded or have not survived for the first decades of systematic military reconnaissance in the Czech Republic (this material is held in the archive of the Military Geographical and Hydrometeorological Institute in Dobruška), but the season is indicated by the numerous sheaves on the fields and by the typical "envelope-like" pattern on larger stubble fields, caused by harvesting the outer strip around the edges of a field first, followed by another strip nearer to the centre, and so on (Figure 7). This means that a considerable portion of the fields is free of standing crops in these photographs, and the anticipated marks showing the presence of subsurface archaeology can be identified only by chance on fields that had not yet been harvested, or rather rarely on stubble fields and grassland.

The national programmes of aerial photographic campaigns for cartographic purposes in the Czech Republic have in recent decades been undertaken in April and May, which seems a little too early for the purposes of archaeological prospecting. However, when the spring weather is favourable, we can get a very good record of the early phase of cropmarks, either on winter cereals (Figure 8) or on other early crops such as rape (Brassica napus). In addition, when concurrent favourable factors generate good conditions for the early development of cropmarks, we can observe very fine details thanks to the low height of the plants; these details later partially disappear when the vegetation grows taller and heavy stems tend to bend over.

Apart from the fact that we usually lack vertical images taken at the height of summer, we also do not have enough documentation from the winter season that makes use of snow marks (Becker 1996a, Abb. 1; 1996b, Farbtafeln XXXV-XXXVII; Braasch 1996; Faßbinder, Irlinger 1996, Farbtafeln XXXI-XXXII; Fröhlich 1997, Abb. 7, 90, 91; Leidorf 1996, 42; Stanjek, Faßbinder 1996); such coverage is very rarely represented in existing archives. Some hope in this respect is offered by the surviving photographs taken during air reconnaissance in the Second World War and during the subsequent, so-called Cold War period (Cowley, Stichelbaut 2012; Going 2002; 2006; Rączkowski 2004), where these seasons might be better represented, even though the geographical coverage and actual number of surviving photographs will be limited.

Another temporal layer for the comparison of vertical and oblique imaging is the time of day when the photographs of a given area are taken. Here we also encounter a conflict between the needs of archaeology and cartography. Oblique archaeological imaging gains most information from photographs taken in the morning and in the evening, when the low angle of sunlight accentuates anthropogenic modifications of the terrain (Figure 9) or even cropmarks by casting shadows (Crawford, Keiller 1928; Gojda 2004b, 82). On the other hand, vertical images commissioned primarily for the purpose of making topographic maps are taken at the time of day when shadows are minimal, because shadows diminish the "legibility" of the Earth's surface and hinder taking precise measurements in the affected areas (Burnside 1979, 33). That is the reason why the terrain often looks very flat and seemingly monotonous on individual vertical photographs; but once we interpret stereo-pairs of images allowing three-dimensional perception, this problem almost disappears (Avery, Lyons 1981; Bradford 1957).
Impact of new technology

Current technological advances make some aspects of aerial prospection significantly easier. These include digital photography, GPS, GIS and the Internet, which can all be interconnected during the survey itself and during the subsequent processing of the collected data (Doneus 2006; Leckebusch 2005; Nagy, Schlenther 2007). On the other hand, it should be highlighted that this development clearly leads away from the key role of individual surveyors, whose personal dedication, skills and effort gave birth to aerial archaeology and nurtured it from the very beginning, towards coordinated cooperation between specialists from a range of disciplines. Without such cooperation the potential of current technologies cannot be fully realised. It logically follows that with this trend the financial cost of surveys undertaken by archaeologists is growing (and will keep growing), and so are the demands on their familiarity with the technologies used and on securing the long-term storage and maintenance of the fast-growing volume of typologically varied data (Bewley et al. 1999; Doneus 2006).

We are currently witnessing an enormous increase in the number of available types of UAVs (unmanned aerial vehicles, or drones) in the civil sector. These devices will dramatically reduce the costs of analytically undertaken archaeological prospecting, especially in smaller study areas, and they are capable of removing one of the few truly apparent advantages of oblique imaging from small aircraft: the high flexibility of timing and recording methods (Colomina, Molina 2014; Lambers et al. 2007). Considering this technological development, UAVs may effectively close the gap between the two formerly competitive strategies: the analytic approach of vertical photogrammetric imaging, and the synthesising survey strategy intrinsically connected with oblique imaging. We are entering an era when we can utilise the complete range of approaches, from the purely analytic to the precisely targeted interpretive prospecting, and take into account specific research questions, environmental and technical constraints, as well as the limited budgets of archaeological projects.

Conclusion

In contrast with surveys based on the practice of an interpretive approach and oblique photography, analytical aerial prospecting makes use of vertical photography, where the technical issues, including the necessary procedures during flight and the primary archiving of images, are taken care of by the specialised personnel of a particular professional institution (so far it is unlikely that this type of imaging could be effectively undertaken by archaeologists themselves). The archaeologist can thus invest the financial resources into purchasing ready-to-use data providing a complete and standardised analytical coverage of the study area, instead of buying flight time for one's own aerial reconnaissance. The researcher's time can then be dedicated to the study and interpretation of the delivered imagery, that is, directly to specialised archaeological work.
While systematic analytical surveys are highly advantageous for archaeology in many respects, there remain some problems which justify undertaking oblique photography in parallel. Factors in favour of synthesising aerial survey include the relative freedom to combine various aspects that play a key role when a certain enhancement of knowledge requires very specific details to be captured in pin-pointed areas, details that need the carefully targeted effort of a skilled specialist in historic landscape studies. Oblique imaging has often contributed to a successful interpretation with details not visible on vertical images. Hence the approaches can be considered complementary in many respects, and the most productive projects will probably be those that manage to get the most from combining both strategies (Doneus 2000, 36-38). One example of results obtained by this combined strategy is shown in Figure 10. The redundancy of data produced by both methods is also useful, as it brings the possibility to control the spatial delimitation and interpretation of features of uncertain origin and/or function. Independent observation (imaging) and the mutual evaluation of various interpretative alternatives is often critical for finding a satisfactory understanding of discoveries, which can subsequently be tested by geophysical survey, field-walking, test excavation and the like. By combining both survey strategies we can successfully bridge the known differences between their capabilities, which are often apparent in the practice of both surface survey (field-walking) and aerial prospecting. The analytical approach usually brings a quantitatively and spatially well-balanced distribution of data, but compared to the synthesising approach it lags behind in the quality and richness of detail that can only be present when our attention is focused at an optimal time on one spot where this detail is momentarily accessible. An attempt to summarise the pros and cons of both strategies of data collection is shown in Table 1.

Figure 1. Footprints of oblique (A) and vertical (B) aerial photographs covering an archaeological site. The crosses mark the nadirs of individual photographs, i.e. the points directly below the camera positions. Note that they are located outside the covered area in the case of obliques, while they coincide with the centres of vertical photos (after Hampton 1978, Figure 9).

Figure 2. The concept of radial distortion of an image due to vertical ruggedness of the terrain on an aerial photograph. There is no simple transformation relationship between the central projection of the photo and the orthogonal map or plan. The correction of the distortion can be derived from a series of overlapping images, in which the apparent dislocation of points a, b, c on the individual photographs can be explained by differences in their elevation. Using the method of intersecting radial lines, their correct locations A, B, C on the map can be derived (after Hampton 1978, Figure 17).

Figure 3. GPS record of "synthesising survey" flights conducted by the author in 2006 in the region of West Bohemia.

Figure 4. An example of three overlapping vertical aerial photographs from a single sortie, providing a systematic record of the landscape. Source: Military Geographical and Hydrometeorological Institute in Dobruška, adapted by the author.

Figure 5.
Mnetěš, Litoměřice District, Czech Republic. Gradual dilapidation of a former sheep house, as recorded on vertical aerial photographs in 1946 (A), 1973 (B) and 2007 (C). A pronounced linear cropmark on the youngest picture reveals the position of a ditch dividing fields on both earlier images. Source: Military Geographical and Hydrometeorological Institute in Dobruška (A, B) and Geodis Brno (C).

Figure 7. Harvested fields on an enlargement of a vertical aerial photograph from 1946. Source: Military Geographical and Hydrometeorological Institute in Dobruška.

Figure 8. Horní Počaply, Mělník District, Czech Republic. Maculae visible on a field with a winter crop. An enlargement of a vertical photograph taken primarily for cartographic purposes on 28th April 2007. Source: Geodis Brno.

Figure 9. Litice, Plzeň-město District. An oblique image captures a comprehensive view of the castle remains, highlighted by shadow marks, but it is not suitable for their precise mapping (photo by the author from 20th June 2005).

Figure 10. Ledčice, Mělník District. An archaeological interpretation of a compilation of vertical and oblique aerial photographs transcribed into a map in GIS.

Table 1. A comparison of selected characteristics of vertical and oblique aerial photos (i.e. analytical and synthetic survey) in the Czech Republic.
Outcome of Nivolumab-Induced Vogt–Koyanagi–Harada Disease-Like Uveitis in a Patient Managed without Intravenous Methylprednisolone Therapy

Background. In recent years, immune checkpoint inhibitors (ICI) have often been used for several types of cancer. Immune-related adverse events (irAEs) are autoimmune responses caused by ICI. Among the different types of irAEs, uveitis is common in ophthalmology. Moreover, there are reports of Vogt–Koyanagi–Harada (VKH) disease-like uveitis. In most cases, this VKH disease-like uveitis, like typical VKH, is managed with intravenous methylprednisolone therapy. Case Report. A 72-year-old man was diagnosed with gastric cancer, and he was treated with nivolumab, a type of ICI. After eight cycles of nivolumab therapy, he developed fulminant type 1 diabetes mellitus and diabetic ketoacidosis; thus, the treatment was discontinued. Subsequently, the patient was referred to our department due to bilateral blurry vision. He had decreased visual acuity in both eyes, and slit lamp examination revealed bilateral anterior chamber cells and keratic precipitates. Fundus examination showed bilateral serous retinal detachment (SRD), wavy retinal pigment epithelium (RPE), and choroidal thickening. Cerebrospinal fluid examination revealed prominent pleocytosis. We therefore initiated eye drop therapy in both eyes and a subtenon injection of triamcinolone acetonide in the right eye only. After 1 month, the SRD and wavy RPE had disappeared, and the patient's visual acuity had improved; both eyes showed similar improvement in visual acuity and in the abnormal findings. Oral prednisolone was subsequently administered for hearing loss. However, intravenous methylprednisolone was not used, and the ophthalmologic findings and visual acuity did not change before and after systemic steroid therapy. One year after disease onset, the SRD and wavy RPE had not relapsed. Conclusion. Nivolumab-induced VKH disease-like uveitis can have a good outcome even in a patient managed without intravenous methylprednisolone therapy.

Introduction

In recent years, immune checkpoint inhibitors (ICI) targeting programmed cell death 1 (PD-1) have often been used in cancer treatment. PD-1 is a coinhibitory molecule that exists on the surface of T cells and binds to programmed death-ligand 1 expressed on the surface of tumor cells. The binding of ligand and PD-1 inhibits T cell activation, leading to apoptosis. Nivolumab, a human immunoglobulin G4 monoclonal antibody, binds to PD-1, maintains T cell activation, and inhibits T cell apoptosis [1]. Since it was approved in 2014, its application has expanded over the years, and it is now used to manage many different types of cancer. Although the drug is associated with good outcomes in cancer, patients can develop inflammatory adverse events caused by immune activation, which are referred to as immune-related adverse events (irAE). Uveitis is a common irAE in ophthalmology [2], and severe cases of Vogt-Koyanagi-Harada disease (VKH)-like uveitis have been reported. Based on the uveitis guidelines in Japan, high-dose steroid therapy is used for VKH [3]. In most cases of ICI-induced VKH disease-like uveitis, both domestically and internationally, the same treatment strategy has been used to date. However, high-dose steroid therapy can be challenging in some patients, such as elderly individuals and those with diabetes. Herein, we report a patient with nivolumab-induced VKH disease-like uveitis who had a good outcome after receiving treatment without intravenous methylprednisolone therapy.
Case Presentation

A 72-year-old man was referred to our department due to bilateral blurry vision in June 2021. His decimal best-corrected visual acuities (BCVAs) were 0.2 in the right eye and 0.4 in the left eye. He had no previous history of infection, headache, tinnitus, hearing impairment, or vitiligo. Slit lamp examination revealed minimal bilateral anterior chamber cells and keratic precipitates. Fundus examination and optical coherence tomography showed serous retinal detachment (SRD), wavy retinal pigment epithelium (RPE), and choroidal thickening in both eyes (Figures 1(a), 1(b), 2(a), and 2(b)). Fluorescein angiography revealed leakage at some pinpoint-sized areas, pooling on the posterior pole, and hyperfluorescent optic disks in both eyes (Figures 1(c) and 1(d)). Indocyanine green fluorescence angiography showed some hypofluorescent dark spots during the late phase (Figures 1(e) and 1(f)). Cerebrospinal fluid examination was performed at the department of neurology, and the results showed that the number of cells, predominantly mononuclear cells, was increased to 142/μL. Audiometry was conducted at the department of otorhinolaryngology; based on the findings, presbyacusis was suspected. Human leukocyte antigen (HLA) typing revealed A2, A24, B35, B54, and DR4.

The patient had been diagnosed with gastric cancer in 2019, and treatment with nivolumab was initiated in March 2021. However, in June 2021, 2 weeks before our initial examination, he developed fulminant type 1 diabetes mellitus and diabetic ketoacidosis after eight cycles of nivolumab therapy. We assumed that the VKH disease-like uveitis was caused by nivolumab, and nivolumab therapy was therefore discontinued. Next, treatment with a topical corticosteroid (betamethasone sodium phosphate 0.1%) six times a day and tropicamide phenylephrine hydrochloride three times a day was started in both eyes. On the following day, the right eye, which had the more severe vision loss, received a subtenon injection of triamcinolone acetonide (STTA). After 1 week, the patient's BCVA had improved up to 0.6 in both eyes, although there were no changes in the SRD. Subsequently, the volume of subretinal fluid decreased (Figure 2(f)), and the BCVAs improved up to 0.9 in the right eye and 0.6 in the left eye.

The ophthalmologic progress was good. However, the patient developed hearing loss 2 weeks after the initial examination, and treatment with oral prednisolone (40 mg) with tapering was initiated in August 2021. The hearing loss immediately improved, and oral steroid treatment was completed in October 2021. After oral steroid therapy, there was no further improvement in BCVA. In both eyes, choroidal thinning (Figures 2(g) and 2(h)) and cataract (particularly in the left eye) gradually progressed. In February 2022, uveal inflammation was not observed, and bilateral cataract surgery (phacoemulsification and intraocular lens implantation) with STTA was performed. The BCVAs on the day after each surgery were 1.0 in the right eye and 1.2 in the left eye. Approximately 1 year after disease onset, abnormal findings such as leakage and pooling on fluorescein angiography and hypofluorescent dark spots on indocyanine green fluorescence angiography had disappeared (Figures 3(c)-3(f)). Fundus examination revealed a sunset glow appearance (Figures 3(a) and 3(b)). The SRD and wavy RPE did not relapse within 1 year after disease onset (Figures 2(i) and 2(j)).
Discussion

VKH is a severe bilateral granulomatous posterior uveitis or panuveitis associated with SRD, disk edema, and vitritis [4]. It is an autoimmune inflammatory disease caused by cytotoxic T lymphocytes targeting melanocytes [5,6]. Genetic phenotype and viral infection are involved in the pathogenesis of the disease [7], and several cases of VKH disease-like uveitis have been reported after the initiation of ICI, which have often been used in cancer treatment in recent years. There are 15 cases of nivolumab-induced VKH disease-like uveitis, including ours [8-20] (Tables 1 and 2).

Ophthalmic irAEs are diverse, the most common of which is uveitis (15.1%) [2]. Among all types of uveitis, the most frequently observed is anterior uveitis (37.7%) [21]. However, VKH disease-like findings are also observed in some cases. The median time from the start of ICI to the appearance of uveitis is 63 days, with 83.6% of cases appearing within 6 months [21]. In our case, fulminant type 1 diabetes mellitus, which is an irAE, appeared on the 106th day (15 weeks, 3.5 months), and ophthalmic symptoms on the 119th day (17 weeks, 4 months). In other reports of nivolumab-induced VKH disease-like uveitis, the onset was within 6 months (from 2 weeks to 4 months) in all but one case [12].

In patients with VKH, mononuclear cell-dominant cerebrospinal fluid pleocytosis, which is indicative of aseptic meningitis, is often observed, with a frequency of 82.7% [22]. A previous report did not show pleocytosis, indicating the possibility of an etiology different from the usual VKH [16]. In our case, however, the patient presented with prominent pleocytosis, as in VKH.

If an irAE is suspected, ICI therapy should be discontinued. Based on the American Society of Clinical Oncology clinical practice guidelines, ICI should be permanently discontinued if posterior uveitis or panuveitis develops [26]. In clinical practice, ICI therapy is continued or resumed in some cases, depending on the extent of the underlying disease and the irAE. Nivolumab treatment was discontinued in almost all cases reported since 2018, after the abovementioned guideline was published. In one case, the patient presented with choroidal thickening and anterior segment inflammation after restarting nivolumab treatment [9]. Hence, caution must be observed when continuing or resuming nivolumab.

Acute VKH must initially be treated aggressively with corticosteroids, and local treatment alone is not recommended for this disease [4]. As shown in Table 2, the initial treatment for nivolumab-induced VKH disease-like uveitis is not settled, and whether systemic therapy, particularly intravenous methylprednisolone, is necessary remains a matter of debate. In ours and previous cases, intravenous steroids (150-1,000 mg/day) were used in 8 of 15 patients; moreover, 13 patients received systemic steroids, including oral steroids. Steroid therapy can cause complications, such as infection and diabetes, and its application in elderly individuals may be challenging. Nevertheless, there are several cases, including ours, in which systemic complications could not be controlled by local therapy alone; systemic steroids may be important, as in VKH. In addition, the extent of VKH disease-like findings in these reports varies. Ultimately, decisions must be made on a case-by-case basis.
STTA was administered to only one eye, and no difference between the two eyes was observed in visual acuity 1 week after administration or in final visual acuity. To the best of our knowledge, this is the first case in which STTA was administered to one eye only. Prior to the initiation of oral steroids, the SRD and wavy RPE had disappeared, and the patient had reached his maximum BCVA before cataract surgery. Hence, remission was achieved with local therapy (eye drops and STTA) alone, and there have been no recurrences since then. However, the oral steroids administered for hearing loss could have contributed to preventing recurrence.

Nevertheless, whether nivolumab-induced VKH disease-like uveitis is a different disease from entities such as nivolumab-induced VKH and VKH itself remains inconclusive. If the latter is considered, intravenous methylprednisolone therapy should be initiated, as in VKH. However, in the present case, the SRD resolved immediately after treatment with eye drops and STTA alone, and the BCVA improved, which is not commonly observed in typical VKH. Nevertheless, our patient required oral steroids due to extraocular complications. Hence, the development of a treatment regimen for nivolumab-induced VKH disease-like uveitis, with consideration of systemic management and prevention of recurrence, remains a challenge.

Data Availability

All data used to support the conclusions of the study are available upon request.

Ethical Approval

This case report was approved by the ethics committee of the organization to which the authors belong.

Consent

A written explanation about the study publication was provided, and consent was obtained from the patient himself. The Hindawi consent to publication form was used, and the signed consent form was kept by the author.
Prevalence of Non-drug Poisoning in Patients Admitted to Hospitals of Mazandaran University of Medical Sciences, 2010-2011

Introduction: Every year, millions of people suffer poisoning, and many of them die due to the severity of its complications. Identifying the pattern of poisoning will help to prevent it. Because non-medicinal substances are highly varied and easily accessible to the public, the aim of this study was to determine the frequency of non-medicinal poisoning, classified according to the 10th revision of the International Classification of Diseases (ICD-10), in hospitalized patients. Method: This is a descriptive cross-sectional study. The medical records of inpatients hospitalized in hospitals of Mazandaran University of Medical Sciences during 2010-2011 were reviewed. The ICD-10 codes used to retrieve patient records were T51-T65, which include alcohol, organic solvents, halogen derivatives, corrosive substances, detergents, metals, inorganic substances, carbon monoxide, gases, fumes and vapors, pesticides, noxious substances eaten as seafood, noxious substances eaten as food, and unspecified substances. The data were analyzed with SPSS using descriptive statistics and the chi-square test. Results: Of the 1546 inpatients with diagnosed poisoning, 581 (37.5%) were non-drug poisonings. The median age was 29 ± 17 years, 231 (39.8%) were female, and 300 (51.6%) of the poisonings were intentional. The most common agents were insecticides (276; 47.5%), stings and bites (96; 16.3%), alcohol (76; 13%), and organic solvents (40 cases, of which 38 (95%) were children). Conclusion: According to the results of this study, the most common cause of poisoning was insecticides. Preventive programs for all groups are suggested, and for intentional self-harm and attempted suicide, counselling programs are necessary.

INTRODUCTION

Every year, millions of people are poisoned by various substances (1). Many patients die due to complications of poisoning (2-5). Mortality due to poisoning has increased dramatically in the United States since the 1970s. The most significant increase was reported in unintentional poisoning mortality rates, which more than tripled from 1990 to 2002 (27). Recognizing patterns can help to prevent accidental poisoning. Patterns of consumption and poisoning differ between communities. Some sources state that accidental poisoning is observed most often in children and in adults over 50 years of age, whereas deliberate self-poisoning is observed mainly in youth suicides (7). Through the exchange of information, the cheapest, most convenient, and most effective prevention strategies can be selected from the experience of other countries. But before prevention strategies can be adopted, the poisoning pattern of the community must be known, so that effective strategies can be selected. A prerequisite for establishing such a pattern is a uniform classification system for the substances involved. The World Health Organization (WHO), in the International Classification of Diseases (ICD), offers such a uniform international classification system for the substances involved and the causes of poisoning. This system can be used for information exchange, and strategies based on data collected from the same kind of source can take maximum advantage of it (28). Studies of poisoning using the International Classification of Diseases have been conducted; these used codes 960-995 from ICD-9-CM, which is now considered outdated, and which refer to poisoning by drugs, medicinal and biological substances, as well as the toxic effects of substances that are chiefly non-medicinal (29).
Hassanian's study used ICD-10, but burns were counted among the classes of poisoning (30); Bohnert's study addressed only mortality due to poisoning (31). Qureshi also studied the cause of poisoning (intentional and unintentional) as one of the classes, and cases of unknown intent were removed. The studies of Karami and Ahmadi classified the substances according to the range of materials used (12,13). It is worth noting that, according to most sources, some researchers attempted their own classifications of the range of materials (3,4,14,15), some included botulism, which is of bacterial origin and not a chemical poisoning (16), and, finally, some examined only one type of poisoning (2,17-20). However, despite the high prevalence of non-drug intoxication, owing to its easy access and use and its effects on high-risk age groups (young and old), it has not been considered separately (4). Also, due to the lack of uniform classes, the data are not comparable: in those studies, the selected classes did not follow the international classification model, or problems existed with the classes that were selected (9-11). Given that no similar study using the international classification system has been conducted in the province, and because pesticides and other toxic materials are readily available there, this study aimed to evaluate non-drug poisoning using the ICD. The results of this study determine the pattern of non-medicinal poisoning common in the province according to the international classification. They can therefore be compared with those of all other countries that use the same classification, and prevention strategies and experience from other countries, based on the same data structure, can be used.

MATERIALS

This survey is descriptive and cross-sectional and was conducted using data from the medical records of 581 patients hospitalized with a diagnosis of poisoning in 2010 in the hospitals affiliated with Mazandaran University of Medical Sciences (22 hospitals, excluding private and social security health service hospitals). Because of the importance of the subject, no sampling was undertaken; the whole population was studied as a census. The inclusion criterion was a final diagnosis of non-drug poisoning as classified in the ICD; exclusion criteria were withdrawal of that diagnosis and discharge against medical advice in the absence of a final and definite diagnosis. The variables of this study were age, gender, marital status, general status on admission and at discharge, discharge status, length of stay (LOS), type of poisoning agent, cause of poisoning (intentional, unintentional, or undetermined intent), poisoning background (medicinal or non-medicinal), the time between the occurrence of poisoning and hospitalization, and the month of hospitalization. After preparation of the data-collection form, a pilot study with at least 30 medical records was conducted and defects in the form were resolved; afterwards, with approval of the proposal and authorization from the deputy for research, the medical records were retrieved from the archive and the data were entered into the form.
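The retrieval step described above, selecting only records whose final diagnosis falls in the non-drug block T51-T65, can be sketched as a simple code-to-category lookup. This is a hypothetical illustration in Python, not the study's actual SPSS workflow; the category labels follow the ICD-10 listing given in the next paragraph.

```python
# Sketch of classifying a final-diagnosis code into the non-drug poisoning
# categories of ICD-10 block T51-T65. Example records are hypothetical.
ICD10_NON_DRUG = {
    "T51": "alcohol",
    "T52": "organic solvents",
    "T53": "halogen derivatives",
    "T54": "corrosive substances",
    "T55": "detergents",
    "T56": "metals",
    "T57": "other inorganic substances",
    "T58": "carbon monoxide",
    "T59": "other gases, fumes and vapors",
    "T60": "pesticides",
    "T61": "noxious substances eaten as seafood",
    "T62": "other noxious substances eaten as food",
    "T63": "contact with venomous animals (bites and stings)",
    "T64": "aflatoxin and other mycotoxins",
    "T65": "other and unspecified substances",
}

def classify(diagnosis_code: str):
    """Return the non-drug poisoning category for a code such as 'T60.0',
    or None if the record falls outside T51-T65 and should be excluded."""
    return ICD10_NON_DRUG.get(diagnosis_code.strip().upper()[:3])

print(classify("T60.0"))  # 'pesticides'
print(classify("T36.1"))  # None: drug poisoning, excluded from this study
```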
It is important to mention that the International Classification of Diseases (ICD-10), currently used in health care centers, has a letter-number code for every disorder; with this arrangement, the study of different categories, such as disorders, symptoms, or even poisonings, is much easier. The non-drug poisonings, according to their codes, are: poisoning with alcohol (T51), organic solvents (T52), halogen derivatives (T53), corrosive chemicals (T54), detergents (T55), metals (T56), other inorganic substances (T57), carbon monoxide (T58), other gases, smokes and vapors (T59), pesticides (T60), toxic materials eaten as seafood (T61), other toxic materials eaten as food (T62), animal bites and stings (T63), aflatoxin and mycotoxin (T64), and the toxic effects of unknown materials (T65) (8). For legal reasons, and also for confidentiality, we refrain from mentioning the names of the individual hospitals, patients, and physicians, and the individual dates of presentation. The results were reported after analysis. The data were analyzed using SPSS software, descriptive statistics, and the chi-square test.

RESULTS

The results show that of all 22 health care centers across the province, only 12 hospitals had hospitalized patients with non-drug poisonings. There were 1546 poisoning cases, including medicinal and non-drug, of which 581 (37.5%) were hospitalized non-drug poisonings. There were no cases with a disproved diagnosis or without a final and definite diagnosis. The average age of patients was 29 ± 17 years, ranging from one year to 85 years. Classification by age showed 63 people (11%) under the age of 12, 96 people (16.7%) between 13 and 19, 200 people (34.8%) between 20 and 30, 79 people (13.7%) between 31 and 40, and 135 people (23.8%) aged 41 and over. There were 231 (39.8%) females and 350 (60.2%) males. Of the 231 women in the study group, 141 (61%) were poisoned intentionally and 82 (35.5%) unintentionally, whereas of the 350 men, 159 (45.4%) were poisoned intentionally and 166 (47.4%) unintentionally. In the other cases (30 people), the cause of poisoning was unknown. The chi-square test showed an association between the cause of poisoning and gender (χ² = 14, p = 0.001, df = 2). The length of stay was 2 ± 2 days, and the maximum time between the occurrence of poisoning and hospitalization was two hours. In total, 300 people (51.6%) were poisoned intentionally and 248 (42.7%) unintentionally; the intent of the other poisonings was unknown. Regarding background, 38 people (6.6%) had a previous history of poisoning, 410 people (70.6%) had no history, and the rest were unknown. Other characteristics of the patients are gathered in Table 1. The status of the patients on admission and at discharge is shown in Table 2, and the causes of poisoning in Table 3. A second chi-square test (χ² = 27, p = 0.006, df = 12) showed a significant association between age and the status of the patients at discharge.

DISCUSSION

WHO, in the 2nd volume of ICD-10, has defined poisoning as poisoning and certain other consequences of external causes: poisoning by drugs, medications, and biological substances (accidental poisoning and poisoning of undetermined intent by alcohol or dependence-producing drugs). Within this description, terms like the intent of the user (or terms of encounter) were studied in three categories: intentional, unintentional, and unknown.
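The gender-by-intent test reported in the Results above can be reproduced approximately from the published counts. In the sketch below, the unknown-intent cells (8 women, 25 men) are inferred from the row totals and differ slightly from the 30 cases of unknown cause stated in the text, so this is an approximation of the published statistic (χ² = 14, df = 2, p = 0.001) rather than a re-analysis of the raw data.

```python
# Approximate reconstruction of the reported gender-by-intent chi-square test.
# Unknown-intent cells are inferred from the row totals (231 women, 350 men)
# minus the published intentional/unintentional counts.
from scipy.stats import chi2_contingency

#            intentional  unintentional  unknown
table = [[141,          82,            8],    # female
         [159,          166,           25]]   # male

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")
```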
The intentional category was used for suicide and homicide cases. As examples of unintentional encounters with poisons, we can mention exposure to agricultural poisons while using pesticides, eating poisonous plants or vegetables, exposure to gases, or insect bites. The side effects of medicines at treatment dosage should not be considered poisoning.

Results showed that non-drug poisoning accounted for 37.5% of total poisonings, compared with 28.4% in Karami's study in 2000-2002 and 2.7% in Ahmadi's study (13,14). In Hosseinian Moghadam's one-year study, of the total of 11,456 patients admitted to Loghman hospital, 2003 (17.5%) had non-drug poisoning (10). Bohnert, in a ten-year study, found unintentional poisoning mortality rates higher in men than in women; however, the increase in the rate over time was higher in women than in men (6). Moghadamnia's ten-year study, which looked at poisoning in western Mazandaran province, reported that because of the variety and widespread usage of pesticides, petroleum, and carbon monoxide, the incidence of non-drug poisonings has increased (7).

Results showed that, considering gender, males outnumbered females, which does not agree with the results of the studies of Rafighdoost, Sarjamee, and Moosavi (18,21,22). In a 12-year study by Yeganeh, the majority of the study population in 2005 were women, while in 2001 the majority were men (18). The higher rate in males may reflect the fact that men have more job-related encounters with poisons, and perhaps greater boldness in using poisons, especially in cases of suicide.

On average, non-drug poisoning was high among the young individuals of the society. In Sarjamee's study, 19.8% of the patients were between 12 and 18 years old (22); in Moosavi's study, 80.4% were between 15 and 30 years old (23); in Karami's study, 47.4% were between 12 and 30 years old (12); in Nazari's study, 25.5% were between 20 and 29 years old (21); and in Tofighi's study, the highest frequency was among 21-30 year olds (2). We cannot definitively explain this, but we may relate it to the audacity of young people in using toxic materials for suicide and to ease of access, or, in children, to lack of knowledge about the materials and accidental access.

The results showed that 53 children (9.1%) were poisoned, mostly with kerosene. This may reflect the negligence of parents or the use of water bottles as containers for keeping kerosene. The poisoning agents in children were medicines and then animal bites in Kashef's study (25), petroleum in Basu's study (26), opium in Besharat's study (20), and hydrocarbons in Assar's study (15). As noted above, accidental access, parental negligence, and lack of knowledge about the materials may be the main causes of these poisonings.

The results showed that patients reached hospital within about two hours after poisoning, while in Afzali's study the average time was more than 6 hours for 54% of the patients (1). In Ahmadi's study, it took most of the patients (37.2%) about 2 to 6 hours to get to the hospital, and the others about 2 hours after the poisoning occurred (14). In Afzali's study, 54% of the patients were admitted to the hospital after more than 6 hours (3). This time difference may have various explanations, such as increased access to public health services, increased use of personal and public transportation, and even the awareness of the patients.

The results showed that poisonings were highest in summer, followed by spring; in Mehdizadeh's study the peak was in winter, in Tofighi's study from October to March, and in Nazari's study in winter.
In Moghadamnia's study, the poisoning agents showed different outbreaks in certain seasons: for example, poisoning with chemical gases during the war, carbon monoxide poisoning in winter, and more agricultural poisonings reported during planting seasons (8). Because agriculture in Mazandaran province is seasonal, starting in early spring and lasting until the end of summer or the middle of autumn, it seems that exposures to agricultural poisons, especially among farmers, are mostly a matter of disregarding the essential safety principles while using them.

It is clear in this study that 51.6% of the poisonings were intentional; the figure was 47.2% in Karami's study (20). This shows the high occurrence of self-harm poisoning and attempted suicide, or even feigned suicide.

The results showed that the most common poisoning agents were pesticides. Likewise, in Afzali's study the main agents were organophosphates and herbicides; in Tofighi's study there were 753 cases of carbon monoxide poisoning in one year; in Yeganeh's 12-year study, depilatory agents (18); in Moosavi's study, organophosphates, alcohol, and industrial materials; in Rafighdoost's study, 51 cases of organophosphate poisoning in one year (18); in Nazari's 5-year study, 3078 cases of carbon monoxide poisoning (20); and in Qian's study, in addition to drugs, chemicals such as cyanide, pesticides, carbon monoxide, alcohol, bites, metals, and mixtures of materials were found (24). Hasanian and Moghadamnia reported pesticides and alcohol; Ahmadi reported alcohol, carbon monoxide, industrial materials, and environmental poisons; and Afzali reported organophosphates and depilatory agents. It seems that the variety of poisoning agents is much the same in most places, which could be the result of ease of access, low cost, and even ease of use for suicide.

The results of this survey showed that 6.5% of the patients died. The cross-tabulation of age groups and patient status at discharge showed that the death rate was higher (47%) in the age group of 40 and over, in contrast with the results of Afzali's and Najjari's studies, in which most of the patients who died were younger (3,4). In Moosavi's study, of the 178 hospitalized patients, 3 (1.6%) died, and in Nazari's study 11.2% died of carbon monoxide poisoning (21,23). In Qian's 10-year study there were 218 deaths, and in Ahmadi's study 27 patients (1.3%) died of organophosphate and carbamate insecticide poisonings. In Afzali's study, 11 patients (3.8%) died of opioids, depilatory agents, and aluminum phosphide. Therefore, it seems that poisoning still has a high mortality rate and needs the attention of health managers with regard to prevention.
Novel approaches to the patient with massive hemoptysis

Massive hemoptysis is a life-threatening condition with a high mortality when treated conservatively. Several modalities have been described in the treatment of hemoptysis, with varying results. Endobronchial therapy has traditionally been performed with rigid bronchoscopy. This requires both specialized training and equipment that is not readily available in many centers. The role of fiberoptic bronchoscopy (FOB) is unclear in these situations, but it is more widely accessible. We describe three cases of the successful treatment of hemoptysis with FOB. These patients were treated with a combination of techniques described previously in the literature; however, these methods failed to result in cessation of the bleeding. Therefore, we employed alternative strategies not described in the literature, using oxidized regenerated cellulose with FOB alone as well as in conjunction with endobronchial placement of vascular embolization coils. These additional techniques may offer other options when rigid bronchoscopy or other modalities are not readily available.

Massive hemoptysis is a life-threatening condition with a high mortality when treated conservatively. Interestingly, varying definitions of massive hemoptysis exist in the literature, ranging from 100 to over 1000 mL in a 24-hour period (1). Malignant airway tumors, bronchitis, and bronchiectasis are typically the most common causes of massive hemoptysis, but tuberculosis and lung abscesses, among others, have been reported (2). Tuberculosis remains an important cause of hemoptysis in the United States despite its decreased prevalence compared to more endemic countries (3). As with the definition of massive hemoptysis, the approaches to treatment and therapy are variable. Several modalities involving arteriography have been described, including bronchial arterial embolization. Airway interventions include procedures performed with rigid and fiberoptic bronchoscopes and open surgical procedures. Rigid bronchoscopy is advantageous because it maintains a patent airway and allows larger instruments and suction catheters to be used. Unfortunately, a substantial proportion of these methods require highly specialized centers, including thoracic vascular radiology and rigid bronchoscopy. We present cases of massive hemoptysis treated with alternative methods using fiberoptic bronchoscopy (FOB) and tools available to bronchoscopists at most institutions.

Established Facts: Endobronchial treatment of hemoptysis has traditionally been performed with rigid bronchoscopy, which is not readily available at many centers.

Novel Insights: Fiberoptic placement of oxidized regenerated cellulose and vascular embolization coils are other modalities that may be performed in the peripheral airways to successfully treat hemoptysis.

Case 1

A 63-year-old man with a history of hypertension, hyperlipidemia, type 2 diabetes mellitus, and cigarette smoking was admitted to the hospital following episodes of massive hemoptysis in the previous 24 hours. Admission laboratory work did not reveal a coagulopathy or thrombocytopenia. A chest radiograph demonstrated a radiodensity in the medial aspect of his left lower lung. A follow-up chest computed tomography (CT) scan was significant for a 4-cm spiculated cavitary mass in the superior segment of the left lower lobe that abutted the posterior mediastinum, along with several subcentimeter nodules inferior to the lesion.
His presentation was highly suspicious for a primary bronchogenic carcinoma. The patient underwent bronchoscopy. The takeoff of the superior segment was significantly stenotic, with induration and heaped-up mucosa. The FOB was advanced into the cavity, and forceps biopsies were taken inside the cavity through the working channel of the bronchoscope. This led to bleeding from the orifice of the superior segment, which was initially controlled using a combination of recombinant thrombin and balloon bronchoplasty with a 4-Fr Fogarty balloon. Although the bleeding slowed, it nevertheless continued unabated. At that point, two pieces of oxidized regenerated cellulose (ORC, Surgicel, Johnson & Johnson, London), each approximately 15 × 15 mm, were folded and placed into the jaws of a flexible biopsy forceps. The forceps were then withdrawn into the operating channel of the bronchoscope, and the scope was reinserted into the airways. The forceps were inserted into the cavity, and the ORC was deployed. No further samples were taken, to prevent dislodgement of the ORC. The pathology of the forceps biopsy was consistent with inflammatory changes. The patient underwent thoracotomy 1 week later due to continued concern for a malignancy; prior to the thoracotomy, there were no further episodes of bleeding. The mass was noted to be quite fibrotic, with involvement of a branch of the inferior pulmonary artery. Acid-fast bacilli were noted on pathology staining, and a DNA probe was positive for Mycobacterium tuberculosis.

Case 2

A 75-year-old man with a history of prior resection of a stage Ia non-small cell lung cancer (NSCLC) of the left upper lobe, renal transplant, prostate cancer, chronic obstructive lung disease, coronary artery disease, and atrial fibrillation presented with a 6-cm mass in the right middle lobe. He underwent FOB, which yielded a diagnosis of NSCLC that appeared identical to his prior malignancy. The procedure was uncomplicated; he did well and was started on chemotherapy for metastatic disease. Roughly 4 weeks later, he presented with massive hemoptysis and was admitted to the hospital. The etiology was thought to be recurrent bleeding from his tumor. An FOB was performed. Substantial amounts of bloody secretions were noted in the airways, predominantly from the takeoff of the right middle lobe. After suctioning, there was continued oozing from the right middle lobe. Bleeding was initially controlled with recombinant thrombin and balloon tamponade with a 4-Fr Fogarty balloon. Electrocautery was used as well on visible areas of friable mucosa. Despite this, the bleeding continued. The decision was then made to place ORC into the culprit airway lumen. Using the method outlined above in the first case, the ORC was placed into the lateral segment of the right middle lobe. While the bleeding slowed, there was some proximal displacement of the ORC, so a Vortex embolization coil was placed into the lateral segment of the right middle lobe in an attempt to secure the ORC. This was accomplished without difficulty, and at that point hemostasis was achieved. He was discharged without further incident, had no further bleeding, and ultimately died from his underlying malignancy a few months later.

Case 3

A 57-year-old gentleman with a history of aortic valve replacement on chronic Coumadin, as well as a history of pulmonary tuberculosis, presented to the hospital after coughing up copious amounts of blood for 2 days.
His international normalized ratio was supratherapeutic at 3.3, but the rest of his admission laboratory work was otherwise unremarkable. A CT scan revealed significant bilateral upper lobe emphysematous changes as well as right upper lobe bronchiectasis (Fig. 1). The patient was brought urgently to the bronchoscopy suite. FOB revealed significant bleeding from the apical segment of the right upper lobe. The airways were aggressively suctioned, and an embolization coil was initially placed in the subsegment to slow the bleeding. This was followed by ORC placement as described above. Another coil was then placed to secure the ORC. There was no evidence of active bleeding, and the patient was brought safely to the ICU and later discharged. As of the writing of this manuscript, there have been no further complaints of hemoptysis.

Discussion

There are multiple modalities for controlling bleeding in the lung. For life-threatening hemoptysis, the airway should be protected by placing the bleeding side in a dependent position and protecting the non-bleeding side, with consideration of mainstem placement of a single-lumen endotracheal tube, a double-lumen endotracheal tube, or a bronchial blocker (4). Bronchial artery embolization (BAE) is often used to control the bleeding; however, while immediate control of bleeding is quite good, rebleeding is fairly common and has been reported in over 50% of patients in some series (5). Surgery is typically reserved for cases refractory to BAE. Patients who undergo surgical resection during active bleeding have a high rate of morbidity and mortality (6).

There are a variety of techniques that can be employed with FOB in the setting of hemoptysis. These include topical application of agents such as epinephrine, thrombin, or fibrinogen-thrombin; iced saline lavage; endobronchial blockade with a balloon; and, in some cases, the use of laser therapy or electrocautery (7). Despite this, the role of FOB in massive hemoptysis, or in hemoptysis in general, remains unclear. While early (versus late) bronchoscopy has a higher yield for localizing the source of bleeding, FOB can be limited by the inability to see the bleeding site due to blood filling the airways, and by limited suction capability (1,8).

Some of the techniques used in this series have been described before. Tsukamoto et al. described a method for using thrombin and fibrinogen-thrombin infusion through a fiberoptic bronchoscope with good results (9). The use of a Fogarty balloon to occlude the segmental or subsegmental bronchus leading to the bleeding site is also a described modality; the balloon is typically left in place for 24 to 48 hours, with careful monitoring of the patient for rebleeding after it is deflated (10). Valipour et al. reported the use of ORC in 57 patients with massive hemoptysis, albeit from a central airway source, achieving immediate hemostasis in 98% of patients; approximately 10.6% of those patients had recurrence of bleeding. The authors used rigid bronchoscopy initially and then an FOB through the rigid scope to place the ORC into the central airways (11). Nogueira et al. reported two cases in which application of ORC to the bronchi using rigid bronchoscopy was successful in stopping hemoptysis (12). These techniques are similar in that they control the bleeding in the central airways.
However, the case series presented here describes two unique approaches to the management of this often life-threatening problem and offers alternatives for controlling airway bleeding in cases where rigid bronchoscopy or vascular radiology is not readily available. ORC dissolves to form a gel matrix that promotes clot formation when in contact with blood. Vascular coils cause mechanical disruption of blood flow, which leads to thrombogenesis. In our review of the literature, no reports exist specifically addressing the placement of ORC directly into a bleeding cavity using only FOB, as we described above. In the first case, bleeding was initially controlled using a combination of balloon bronchoplasty, topical thrombin, and placement of ORC delivered outside of the central airways using only FOB. The last two cases expand on this approach with the delivery of a vascular embolization coil placed directly into the culprit bronchial lumen, used not only to secure the ORC but also, as in the third case, to initially slow the bleeding before placement of ORC and an additional embolization coil. To our knowledge, this is the first report of endobronchial placement of an embolization coil to help control bleeding. While the numbers are obviously small, the clinical results suggest both efficacy and reproducibility. These techniques appear to be reasonable alternatives, complementary to each other, in the treatment of hemoptysis, and can be used separately from or in conjunction with rigid bronchoscopy. Further investigation is needed to confirm these observations.
Dietary Sources of Anthocyanins and Their Association with Metabolome Biomarkers and Cardiometabolic Risk Factors in an Observational Study

Anthocyanins (ACNs) are (poly)phenols associated with reduced cardiometabolic risk. Associations between dietary intake, microbial metabolism, and the cardiometabolic health benefits of ACNs have not been fully characterized. Our aims were to study the association between ACN intake, considering its dietary sources, and plasma metabolites, and to relate them to cardiometabolic risk factors in an observational study. A total of 1351 samples from 624 participants (55% female, mean age: 45 ± 12 years old) enrolled in the DCH-NG MAX study were studied using a targeted metabolomic analysis. Twenty-four-hour dietary recalls were used to collect dietary data at baseline, six, and twelve months. The ACN content of foods was calculated using Phenol-Explorer, and foods were categorized into food groups. The median intake of total ACNs was 1.6 mg/day. Using mixed graphical models, ACNs from different foods showed specific associations with plasma metabolome biomarkers. Combining these results with censored regression analysis, the metabolites associated with ACN intake were: salsolinol sulfate, 4-methylcatechol sulfate, linoleoyl carnitine, 3,4-dihydroxyphenylacetic acid, and one valerolactone. Salsolinol sulfate and 4-methylcatechol sulfate, both related to the intake of ACNs mainly from berries, were inversely associated with visceral adipose tissue. In conclusion, plasma metabolome biomarkers of dietary ACNs depended on the dietary source, and some of them, such as salsolinol sulfate and 4-methylcatechol sulfate, may link berry intake with cardiometabolic health benefits.

Introduction
Anthocyanins (ACNs) are phytochemical compounds of the flavonoid subclass within the broader (poly)phenol class, highly present in plant foods such as berries, grapes, eggplants, and many other colored fruits and vegetables [1,2]. Most dietary ACNs reach the large intestine unaffected, where they may affect both gut microbial composition …

Study Design and Subjects
We studied a validation subsample within the Diet, Cancer, and Health-Next Generations (DCH-NG) cohort: the DCH-NG MAX study. The DCH-NG was an extension of the earlier Diet, Cancer, and Health (DCH) cohort [10]. A sample of 39,554 participants was included in the DCH-NG, comprising biological children, their spouses, and grandchildren of the DCH cohort [11]. The DCH-NG MAX study recruited 720 volunteers resident in Copenhagen, aged 18 years or older, between August 2017 and January 2019. The major aims of the MAX study were to validate a semi-quantitative food frequency questionnaire against the twenty-four-hour dietary recalls and to examine the long-term reproducibility of the plasma and urine metabolome, as well as the stability of the gut microbiota. Biological samples, health examinations such as anthropometric and blood pressure measurements, and two questionnaires about lifestyle and dietary habits were collected at baseline, 6, and 12 months. The DCH-NG cohort study was approved by the Danish Data Protection Agency (journal number 2013-41-2043/2014-231-0094) and by the Committee on Health Research Ethics for the Capital Region of Denmark (journal number H-15001257). The volunteers provided written informed consent to participate in the study. All details of the clinical measurements and of the dietary and metabolomics data have been reported previously [11].
Anthropometric Measurements
Participants were asked to wear underwear and be barefoot for the measurement of height and weight, using a wireless stadiometer and a body composition analyzer, respectively (SECA mBCA515, Hamburg, Germany). Height and weight were measured to the nearest 0.1 cm and 0.01 kg, respectively, and body mass index (BMI) was calculated. Waist circumference was measured twice at the midpoint between the lower rib margin and the iliac crest; a third measurement was taken if the difference between the first two was more than 1 cm. Blood pressure and pulse rate were measured three times using the left arm, taking the measurement with the lowest systolic blood pressure and its corresponding diastolic value as valid. A DEXA-validated bioimpedance instrument (SECA mBCA515, Germany) was used to estimate visceral adipose tissue volume.

Dietary Data
The 24-h dietary recalls (24-HDRs) were recorded at baseline, 6, and 12 months using a Danish version of the web-based tool myfood24 (www.myfood24.org/) (7 February 2023) from Leeds University [12], containing almost 1600 Danish food items. All foods consumed the day before the examinations were reported by the participants in either grams or standard portion sizes. The percentage of calories from carbohydrates, proteins, and fat was used to indicate macronutrient intake. Complex food products were entered as recipes or dishes; McCance and Widdowson's Food Composition Table [13], or recipes from the food frequency questionnaires in the DCH, were used to standardize the recipes [14].

Dietary Intake of Anthocyanins
The intake of polyphenols from the 24-HDRs was estimated with a protocol using 'in-house' software developed by the University of Barcelona, the Bellvitge Biomedical Research Institute (IDIBELL), and the Centro de Investigación Biomédica en Red (CIBER) ©. A link was created between all 24-HDR food items or ingredients and the foods in the Phenol-Explorer database [15]. The intake of individual (poly)phenols in mg/day was obtained, and ACN consumption from separate foods was estimated as the sum of the 71 individual ACNs included in the Phenol-Explorer database. The estimated intake of dietary (poly)phenols in the DCH-NG MAX study has been described previously [16]. A total of 147 ACN-containing food items were used to estimate total dietary ACN intake, as shown in Supplementary Table S1. Intake of berries was estimated as the sum of foods with at least 50% of their composition or recipe made up of berries; these include raw and frozen berries, berry marmalades or jams, and stewed berries. Dietary ACN intake from the other foods was classified and summed according to the following food groups: dairy products with berries (including ice cream and yogurt), other fruits (i.e., plums, cherries, apples, etc.), non-alcoholic drinks (including fruit smoothies and juices), wines, vegetables, mixed dishes (meat or fish dishes with vegetables containing ACNs), and bakery (including pastry, biscuits, desserts, and waffles with berries or other ACN-containing preparations) (Supplementary Table S2). Intakes of foods not containing ACNs were disregarded.
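To make the aggregation step concrete, the following is a minimal sketch in base R of how per-food ACN contents could be joined to 24-HDR records and summed per participant and per food group. The object names, data layout, and ACN values are illustrative assumptions, not the authors' actual pipeline or the Phenol-Explorer schema.

```r
# Hypothetical 24-HDR records: one row per reported food item per participant
recalls <- data.frame(
  id        = c(1, 1, 2),
  food_item = c("blueberries_raw", "red_wine", "blueberries_raw"),
  grams     = c(80, 125, 40)
)
# Hypothetical lookup of total ACN content (mg per 100 g) plus food-group label
acn_content <- data.frame(
  food_item      = c("blueberries_raw", "red_wine"),
  acn_mg_per100g = c(160, 30),            # illustrative values only
  food_group     = c("berries", "wines")
)

x <- merge(recalls, acn_content, by = "food_item")
x$acn_mg <- x$grams * x$acn_mg_per100g / 100

# Total ACN intake per participant, and intake split by dietary source
aggregate(acn_mg ~ id, data = x, FUN = sum)
aggregate(acn_mg ~ id + food_group, data = x, FUN = sum)
```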
Blood Sampling, Analysis of Cardiometabolic Risk Factors and Metabolomics
Participants were instructed to maintain a fasting time of 1-9 h (mean fasting time: 5 h) on all examination days. Blood samples were taken into Vacutainer tubes containing lithium heparin at baseline (n = 624), 6 months (n = 380), and 12 months (n = 349). Within 2 h of the blood draw, plasma was obtained by centrifugation, and samples were stored at −80 °C. Plasma samples were then delivered to the Danish National Biobank (DNB), where plasma was divided into aliquots, sent to the University of Barcelona, and kept at −80 °C until metabolomic analysis. Other blood measurements, such as hemoglobin A1c (HbA1c), serum lipids, and high-sensitivity C-reactive protein (hsCRP), were measured as described before [17].

Metabolomics Analysis of Plasma Samples
Repeated measures of the plasma metabolome at all three time points were used for the metabolomics analysis. All samples were prepared and analyzed using the targeted UPLC-MS/MS method described previously, with slight modifications [18,19]. Briefly, 100 µL of plasma was added to protein precipitation plates together with 500 µL of cold acetonitrile containing 1.5 M formic acid and 10 mM ammonium formate, and plates were kept at −20 °C for 10 min to enhance protein precipitation. Then, positive pressure was applied to recover the extracts, which were taken to dryness and reconstituted with 100 µL of an 80:20 v/v water:acetonitrile solution containing 0.1% v/v formic acid and 100 ppb of a mixture of 13 internal standards. Samples were then transferred to 96-well plates and analyzed by targeted metabolomic analysis using an Agilent 1290 Infinity UPLC system coupled to a Sciex QTRAP 6500 mass spectrometer, using the operating conditions described elsewhere [18]. The Sciex OS 2.1.6 software (Sciex, Framingham, MA, USA) was used for data processing.

Metabolomics Data Pre-Processing
The POMA R/Bioconductor package (https://github.com/nutrimetabolomics/POMA) (7 February 2023) was used for the pre-processing of metabolomics data [20]. Metabolites with more than 40% missing values, and those with a coefficient of variation (CV) > 30% in internal quality controls, were removed. The K-nearest neighbors (KNN) algorithm was used to impute the remaining missing values, and batch effects were corrected using the ComBat function ('sva' R package) [21]; auto-scaling was used to normalize the data, and outliers were removed based on Euclidean distances (±1.5× interquartile range). The final metabolomics dataset included the concentrations of 408 plasma metabolites.
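As a rough illustration of the pre-processing logic just described: the authors used the POMA package, and ComBat ('sva') is named in the text, but the sketch below re-expresses the same steps directly with the impute and sva packages, and the objects mets, qc, and batch are assumed inputs (the outlier rule in particular is a crude approximation of the ±1.5× IQR criterion).

```r
library(impute)  # impute.knn()
library(sva)     # ComBat()

# 'mets': samples x metabolites matrix; 'qc': repeated QC injections with the
# same columns; 'batch': assumed vector of batch labels, one per sample
cv   <- apply(qc, 2, function(m) sd(m, na.rm = TRUE) / mean(m, na.rm = TRUE))
keep <- colMeans(is.na(mets)) <= 0.40 & cv <= 0.30   # drop >40% missing, CV > 30%
mets <- mets[, keep]

mets <- t(impute.knn(t(as.matrix(mets)))$data)       # KNN imputation (features in rows)
mets <- t(ComBat(dat = t(mets), batch = batch))      # batch-effect correction

mets <- scale(mets)                                  # auto-scaling per metabolite

# Crude outlier screen on inter-sample Euclidean distances (+/- 1.5 x IQR rule)
d    <- apply(as.matrix(dist(mets)), 1, median)
mets <- mets[d <= quantile(d, 0.75) + 1.5 * IQR(d), ]
```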
Statistical Analyses
For descriptive statistics, intake of total ACNs (irrespective of dietary source) was categorized into tertiles using 0.3 and 8.9 mg/day as cut-offs. Continuous variables following a normal distribution are shown as mean ± SD, and those following a skewed distribution as median (p25-p75). Sociodemographic and clinical characteristics were compared across tertiles of ACN intake using linear mixed models with random intercepts, adjusted for age and sex. Associations between intake of ACN dietary sources and cardiometabolic risk factors were tested using linear mixed models with random intercepts, adjusted for age, sex, and BMI. First, associations between intake of total ACNs and metabolome biomarkers were analyzed using censored regression for panel data with the 'censReg' and 'plm' R packages [22]. Censored regression models were applied because of the right-skewed distribution of total ACN intake and the considerable proportion of zero values (24% of participants were non-consumers of ACNs). Covariates included in the models were age, sex, and BMI. p-values were adjusted for multiple comparisons using the Benjamini-Hochberg method, and adjusted p-values < 0.05 were considered statistically significant. Second, associations between ACNs from different dietary sources and metabolites were assessed using mixed graphical models (MGM) with the 'mgm' R package [23]. MGMs are undirected probabilistic graphical models able to represent associations between nodes adjusted for all the other variables in the model. MGM specifications were set to allow the maximum number of interactions in the network. Variables in the model were the dietary ACN intakes by food category (8 food groups) and the whole set of metabolomics variables. The agreement between repeated measurements of total dietary ACN intake, and of ACN intake from different dietary sources, was poor across the study evaluations (intra-class correlation coefficient < 0.15); therefore, all observations were considered independent and were included in the MGM analysis (k = 1351). For visual clarity, only the first-order neighborhood of ACN food sources was plotted. To evaluate the associations between metabolites and cardiometabolic risk factors, linear mixed models with random intercepts were used, adjusted for age, sex, and BMI. Metabolites were selected based on the combination of both analyses, censored regression and MGM. Standardized coefficients were plotted in a heatmap built using the 'pheatmap' R package (Kolde R (2019). pheatmap: Pretty Heatmaps). All statistical analyses were performed using R, version 4.1.3 (R Foundation, Austria).
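The two modelling steps named above could be sketched as follows, under assumed column names (acn_total, metabolite columns met_1 and so on, covariates age/sex/bmi, and index vectors food_cols/met_cols); note that the pooled censReg call below simplifies the authors' panel specification.

```r
library(censReg)  # Tobit-type censored regression
library(mgm)      # mixed graphical models

# Left-censored regression of total ACN intake on one metabolite + covariates;
# left = 0 reflects the 24% of observations with zero reported intake
fit <- censReg(acn_total ~ met_1 + age + sex + bmi, left = 0, data = df)
summary(fit)
# Looping this over all metabolites, p-values would then be adjusted with
# p.adjust(p, method = "BH")

# Pairwise mixed graphical model over the 8 food-group intakes + metabolites,
# treating all columns as continuous ("g" = gaussian, level 1); k = 2 keeps
# pairwise interactions only
X   <- as.matrix(df[, c(food_cols, met_cols)])
net <- mgm(data = X, type = rep("g", ncol(X)), level = rep(1, ncol(X)), k = 2)
```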
Sociodemographic, Clinical, and Dietary Characteristics
At baseline, out of the 720 volunteers who agreed to participate in the study, 624 had complete clinical, dietary, and plasma metabolomics data. Of the 624 participants included, 55% were female, aged (mean ± SD) 45 ± 12 years, with a BMI of 25 ± 4 kg/m². At 6 months, 380 participants had complete clinical, dietary, and metabolomics data, and at 12 months complete data were available for 349 participants. Only 287 participants had complete clinical, dietary, and plasma metabolomics data at all three time points. The distribution of total ACN intake was right-skewed, with a median value of 1.6 (p25-p75: 0.0-26.9) mg/day and a mean value of 26.4 (SD: 60.4) mg/day. Berries were the highest contributors to total ACN intake, with a mean contribution of 34%, followed by wines with 33% and non-alcoholic drinks (which included fruit smoothies and juices) with 20% of the total reported intake. Other fruits (i.e., cherries, apples, and plums) and vegetables were minor contributors, with 4% and 2%, respectively. Bakery items (pastry, biscuits, and desserts), dairy products (yogurts and strawberry or berry ice creams), and other mixed dishes (dishes including vegetables with ACNs) each contributed within a similar range of 2-3% (Supplementary Table S1). Participants were divided into tertiles based on reported intakes of total ACNs, as shown in Table 1. There were no significant differences in clinical characteristics across tertiles of ACN intake. Consistently, there were no statistically significant associations between total ACN intake and cardiometabolic risk factors (data not shown). Dietary characteristics are shown in Supplementary Table S2. Participants in the highest tertile of ACN intake, compared with those in the lowest, showed statistically significantly higher consumption of total protein, saturated fatty acids (SFA), monounsaturated fatty acids (MUFA), polyunsaturated fatty acids (PUFA), alcohol, fruits, and berries.

Table 1 legend: BMI, body mass index; WC, waist circumference; VAT, visceral adipose tissue; SBP, systolic blood pressure; DBP, diastolic blood pressure; HbA1c, hemoglobin A1c; TG, triglycerides; TC, total cholesterol; HDL, high-density lipoproteins; LDL, low-density lipoproteins; hsCRP, high-sensitivity C-reactive protein. Variables following a normal distribution are shown as mean ± SD, and those with a skewed distribution as median (p25-p75).

Association between Intake of ACN Dietary Sources and Cardiometabolic Risk Factors
Several inverse and direct associations between self-reported intake of ACN-containing food groups and cardiometabolic risk factors were observed (Figure 1). For example, intake of berries, dairy products with berries, and ACN-containing vegetables had inverse associations with visceral adipose tissue volume, while wine had direct associations with total cholesterol, HDL-C, and systolic blood pressure. Other direct associations were found between the intake of berries and hemoglobin A1c, and between ACN-containing drinks and hsCRP.

Figure 1. Association between different self-reported food groups and cardiometabolic risk factors in the DCH-NG MAX study (n = 624, k = 1351). Standardized coefficients according to linear mixed models with random intercepts adjusting for age, sex, and BMI. * p < 0.05, ** p < 0.01. n = number of subjects, k = total number of observations. TG, triglycerides; SBP, systolic blood pressure; DBP, diastolic blood pressure; WC, waist circumference; HbA1c, hemoglobin A1c; hsCRP, high-sensitivity C-reactive protein; VAT, visceral adipose tissue; TC, total cholesterol; HDL, high-density lipoproteins; LDL, low-density lipoproteins.
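The random-intercepts models behind Figure 1 could look like the sketch below; the paper does not name its mixed-model package, so lme4 is an assumed choice here, as are the column names.

```r
library(lme4)

# One exposure-outcome pair: berries-derived ACN intake vs. visceral adipose
# tissue volume, with a random intercept per participant ('id') to account for
# the repeated observations across baseline, 6, and 12 months
m <- lmer(scale(vat) ~ scale(acn_berries) + age + sex + bmi + (1 | id), data = dat)
summary(m)  # the fixed effect for scale(acn_berries) is a standardized coefficient
```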
Metabolome Biomarkers Associated with Intake of ACNs Related to Different ACN Dietary Sources
MGM analysis showed associations between self-reported ACN intake from different dietary sources and 16 metabolites (Figure 3). ACNs derived from dairy products were associated with plasma asparagine, epicatechin sulfate, urolithin C-glucuronide, and acesulfame K. ACNs from the intake of berries were associated with linoleoyl carnitine, salsolinol sulfate, glycochenodeoxycholic acid 3-sulfate (GCDCA-3S), and 4-methylcatechol sulfate. ACNs from wine consumption were linked with methylpyrogallol sulfate (Met-Pyr-S) and ethyl glucuronide. ACNs from vegetable intake were associated with 2-hydroxybenzoic acid and bergaptol glucuronide. ACNs from other fruits were associated with 3,4-DHPHVA-3S, 5-(3′-hydroxyphenyl)-γ-valerolactone 3′-sulfate (3-HPV-S), and 3,4-dihydroxyphenylacetic acid sulfate (3,4-DHPA-3S). Lastly, the consumption of ACNs from mixed dishes was associated with 1-methylhistidine and 2-hydroxybenzoic acid. Overall, not all the metabolites selected in the MGM analysis were related to ACNs or their microbial metabolites; some reflected other food components, such as acesulfame K and ethyl glucuronide. Therefore, metabolome biomarkers were selected considering both statistical analyses, censored regression and MGM, for the study of their association with cardiometabolic risk factors.

Figure 3. First-order neighborhood of ACN intake related to different self-reported food groups with plasma metabolome biomarkers according to mixed graphical models in the DCH-NG MAX study (n = 624, k = 1351). Edge intensity reflects the strength of the association, from strong direct (dark green) to strong inverse (dark red). Variables included in the mixed graphical model were ACN intake related to self-reported intake of dairy, berries, wines, non-alcoholic drinks (smoothies and fruit juices), vegetables, other fruits, mixed dishes, and bakery, and all 408 plasma metabolites quantified with our targeted metabolomics method. n = number of subjects, k = total number of observations.
For a detailed list of foods within each category, see Supplementary Table S1.

Associations between Selected ACN-Related Metabolome Biomarkers and Cardiometabolic Risk Factors
Metabolites associated with ACN intake in both of the previous analyses were salsolinol sulfate, 4-methylcatechol sulfate, linoleoyl carnitine, 3,4-DHPHVA-3S, and 3,4-DHPA-S. Figure 4 shows the associations between these metabolites and cardiometabolic risk factors. Of the metabolites associated with berries' ACNs, salsolinol sulfate and 4-methylcatechol sulfate were inversely associated with visceral adipose tissue volume. In addition, inverse associations were found between salsolinol sulfate and LDL-C and diastolic blood pressure. Conversely, there was a direct association between salsolinol sulfate and triglyceride levels (Figure 4). Linoleoyl carnitine, 3,4-DHPHVA-3S, and 3,4-DHPA-S did not show any statistically significant association with cardiometabolic risk factors.

Discussion
The present study shows for the first time the specific associations between ACNs related to different dietary sources and plasma metabolome biomarkers, and their association with cardiometabolic risk factors, in a free-living population. These results take into account not only the quantitative and qualitative heterogeneity of ACNs in foods, but also the internal dose of specific microbial metabolites generated from ACNs, which could have been affected by the food matrix. Indeed, food matrices have been shown to influence the microbial metabolism of (poly)phenols [24]. Ultimately, we observed different associations between ACN-related metabolites and cardiometabolic risk factors in relation to specific foods, suggesting a stronger cardiometabolic benefit associated with the consumption of berries.
Up to 80% of the total intake of dietary ACNs came from the consumption of berries, wines, and non-alcoholic drinks in this observational study. Minor contributors were dairy foods, other fruits, and vegetables. While the MGM analysis revealed different metabolomic fingerprints associated with different dietary sources of ACNs, the resultant metabolites were not specific to ACNs. Therefore, we selected metabolites that were also significantly associated in the censored regression analysis. This was a strict criterion, but in the context of such low levels of ACN intake in the overall population (median 1.6 mg/day), it is justified. After applying this selection criterion, only metabolites related to ACNs from berries and other fruits (according to MGM) were tested for their association with cardiometabolic risk factors.

Figure 4. Association between ACN-related selected metabolites and cardiometabolic risk factors in the DCH-NG MAX study (n = 624, k = 1351). Standardized coefficients according to linear mixed models with random intercepts adjusting for age, sex, and BMI. Foods associated with the metabolites according to the MGM analysis are displayed by colors in the food column. * p < 0.05, ** p < 0.01, *** p < 0.001. n = number of subjects, k = total number of observations. Sal-S, salsolinol sulfate; 4-Met-Cat-S, 4-methylcatechol sulfate; C18:2-Car, linoleoyl carnitine; 3,4-DHPHVA-3S, 5-(4-hydroxy(3,4-dihydroxyphenyl))-valeric acid sulfate; 3,4-DHPA-3S, 3,4-dihydroxyphenylacetic acid sulfate; TG, triglycerides; SBP, systolic blood pressure; DBP, diastolic blood pressure; WC, waist circumference; HbA1c, hemoglobin A1c; hsCRP, high-sensitivity C-reactive protein; VAT, visceral adipose tissue; TC, total cholesterol; HDL, high-density lipoproteins; LDL, low-density lipoproteins.
Metabolites specifically related to ACNs from other major food sources, such as wines, were excluded. Nonetheless, other studies have shown that, for example, 4-methylcatechol sulfate increased after a 15-day moderate red wine intervention trial [25]. Therefore, we cannot be fully certain that, in our study, the same metabolites were not also related to other ACN dietary sources. Future randomized controlled trials using single foods are warranted to validate the present results. Regarding the association between metabolome biomarkers and cardiometabolic risk factors, 4-methylcatechol sulfate showed an inverse association with visceral adipose tissue volume. According to our MGM analysis, 4-methylcatechol sulfate was associated with the intake of ACNs from berries. Similarly, another metabolite associated with ACNs from berries was salsolinol sulfate. Salsolinol sulfate is an alkaloid that has been suggested as a biomarker of banana intake [26]. However, salsolinol can be produced endogenously through dopamine oxidative metabolism [27,28] and may have a role in modulating dopamine neuron activity in the striatum region of the brain [21]. In fact, patients with obesity showed impaired brain dopamine activity, underscoring a potential role for low dopamine activity in obesity (lower reward associated with food intake) [29]. Hence, we speculate that the inverse association between salsolinol sulfate and visceral adipose tissue could be mediated by brain dopamine activity. An animal study showed that a blackberry extract intervention reversed the effects of a high-fat diet, increasing dopamine turnover in the brain striatum region [30]. The role of berries in brain dopamine metabolism should be studied further. On the other hand, the other selected metabolites were not associated with cardiometabolic risk factors. The median total ACN intake in the study was 1.6 mg/day, and such an intake may not have been high enough to detect the metabolome biomarkers found in randomized controlled trials (RCTs) with ACN-rich foods [31][32][33]. Many short- and long-term RCTs have been conducted with encapsulated ACNs or berries to discover biomarkers of ACN intake. In these trials, daily intakes of ACNs typically varied from 100 to 300 mg as single doses [33,34], or between 50 and 350 mg/day for four weeks [35][36][37]. In general, many parent ACNs and up to 70 phenolic compounds resulting from the gut microbial metabolism of ACNs have been identified [35,36]. Even though the majority of these metabolites were not identified in our study, 4-methylcatechol sulfate, 3,4-DHPHVA-3S, and 3,4-DHPA-3S had previously been associated with ACN intake. Possibly, longer half-lives of these metabolites relative to the others, or competition among polyphenol substrates for the bacteria able to metabolize them, limited the production of ACN metabolites at the low levels of ACN intake (exposure) in this study. Among the strengths of this study are its observational nature and the fact that dietary data were assessed with 24-HDRs instead of food frequency questionnaires. This characteristic allowed us to obtain precise intake data, both in terms of amounts and of specific food items, compared with food frequency questionnaires. However, this also brings the limitation of measurement errors in estimating ACN intake and the short time period surveyed (one 24-HDR at each evaluation time).
Another limitation was that the median consumption of dietary ACNs in the DCH-NG MAX study population was 1.6 mg/day, considerably lower than in other studies, in which median intakes varied between 9.3 and 52.6 mg/day [38][39][40][41]. This could have limited the number of plasma metabolites associated with dietary ACNs. Furthermore, the mean fasting time of the participants at the time the blood samples were drawn was 5 h, and the impact of fasting on the serum metabolome is uncertain. Nonetheless, this is the first study evaluating the impact of ACNs from different dietary sources on the plasma metabolome, and therefore our results cannot be contrasted with others. While berries contain other polyphenols in addition to ACNs, further research is needed to fully understand the individual and combined effects of the different polyphenols of berries on health outcomes. Our approach to isolating the effects of ACNs from berries was bioinformatic, and a more precise study testing the effects of isolated ACNs from berries should corroborate our results. Last, it is not clear whether the microbial metabolites were exclusively related to the ACNs from the dietary source identified in the MGM analysis, or could also have been produced from ACNs coming from other foods, or even from food components other than ACNs (e.g., polyphenols other than ACNs). Although MGM models adjust every association for all the other variables included in the analysis, these sources of confounding cannot be ruled out. In conclusion, this study shows that the metabolomic fingerprint of ACN consumption depended on its dietary sources. Metabolites associated with the consumption of berries' ACNs showed inverse associations with visceral adipose tissue. Future RCTs should validate the importance of these foods for cardiometabolic health and their potential mechanisms of action.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15051208/s1. Table S1. ACN-containing food list in the DCH-NG MAX study; Table S2. Dietary characteristics of the MAX study population according to tertiles of dietary ACN intake.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Data may be available upon request to the Danish Cancer Society (contact: dchdata@cancer.dk).
Viewshed and sense of place as conservation features: A case study and research agenda for South Africa's national parks

Introduction
A growing number of scientists believe that Earth has transitioned into a new geological epoch – the Anthropocene – characterised by a single, dominant species (Homo sapiens) that is affecting the planet's life support system at an unprecedented scale, including changes to landscapes and ecosystems, biological distributions, climate and atmospheric chemistry (Steffen, Crutzen & Mcneill 2007; Zalasiewicz et al. 2010). The proclamation and effective management of representative networks of protected areas (PAs) is seen as critical to buffering society against adverse changes to the biosphere attributable to this human-dominated era (Watson et al. 2014). In this context, expected benefits from PAs include conserving biodiversity, safeguarding ecosystems and the services they provide, mitigating climate change and promoting social-ecological resilience, with associated economic and social benefits, at regional scales. There is increasing recognition of the importance of less tangible or quantifiable benefits that people derive from nature and PAs. Such benefits are, for example, referred to as 'Nature's Gifts' in the Intergovernmental Platform on Biodiversity and Ecosystem Services Conceptual Framework (Díaz et al. 2015). These benefits are often grouped under the collective of 'cultural ecosystem services', including experiential, spiritual, educational and recreational interactions with nature that contribute to human well-being (Millennium Ecosystem Assessment 2005). Importantly, an appreciation of such benefits is often dependent on a pro-environmental identity or ability to engage or form a connection with the natural environment (Hinds & Sparks 2008), for example, through 'meaningful nature experiences' (Zylstra et al. 2014). Sense of place (SoP) refers to the meanings and values that people attach to places. The concept can be used to frame how people engage or form a connection with the natural environment. At a sensory level, SoP is influenced by people's visual experiences, which in turn can be linked to the concept of viewsheds. Viewsheds can be transformed, either abruptly (e.g. by infrastructure development such as wind turbines) or more gradually (e.g.
by non-native trees invading a landscape). In this study, we focus on the Garden Route National Park to explore the potential importance of viewsheds as a conservation feature, specifically in the context of non-native (especially invasive) tree species. Using mixed information sources, we explore the potential role of invasive trees in the experiences of visitors to this protected area and speculate on how viewsheds may shape SoP associations and how such associations may inform protected area management. Our investigation shows that people's experiences regarding natural and modified viewsheds are varied and intricate. Both SoP and viewsheds have the potential to inform conservation action, and these concepts should form an integral part of objective hierarchies and management plans for national parks. However, while legislation and park management plans make provision for the use of these concepts, associated research in South Africa is virtually non-existent. We conclude by proposing a conceptual model and research agenda to promote the use of viewsheds and SoP in the management of national parks in South Africa.

Conservation implications: Viewshed and sense of place can be used as boundary concepts to (1) facilitate interdisciplinary research between social and natural scientists, (2) help understand the connectedness and feedbacks between people and nature and (3) promote communication between science, management and stakeholders regarding desired conditions of landscapes in and around parks.

How people connect with nature is invariably a function of their value systems (see Chan, Satterfield & Goldstein 2012; Raymond et al. 2009), which are context-specific and evolve dynamically over time. For example, while one group may value a particular landscape for its tangible materials (such as harvestable fruits and medicinal plants), another may value the same landscape for intangible benefits (such as relaxation and therapy) derived from its tranquil and scenic features. For the purpose of this study, we will focus on intangible benefits that people derive through experiential interactions with nature. In this context, connections between humans and nature relate strongly to aesthetics (Plieninger et al. 2015) and therefore link closely with concepts such as 'viewsheds' and 'landscapes'. Perceptions of 'beautiful scenery' may be a predictor of environmental connection. Likewise, the sensitivity of 'the public' to scenery, or 'how the landscape looks', is an important driver of support for conservation actions. This is borne out by spatial metrics studies (Palmer 2004) and evidence that positive nature experiences (such as hikes) may predispose people to financially support conservation efforts (Zaradic, Pergams & Kareiva 2009). Ironically, although the creation of PAs has often led to the separation of humans and nature (West, Igoe & Brockington 2006), these remnants of wilder, more natural or intact ecosystems and landscapes potentially offer greater opportunity to experience connection with nature and benefits such as psychological rejuvenation (see Ulrich et al. 1991). Higher ecological integrity should result in higher 'visual landscape quality', and together with the notion of 'sense of place' (SoP) and the related concept of 'place attachment' (the environmental psychologist's equivalent of the geographer's SoP) (Farnum, Hall & Kruger 2005) is subsumed in the landscape-quality construct (Daniel 2001).
Although sometimes regarded as vague (Shamai 1991) and elusive (Williams & Stewart 1998), the concept of SoP has been applied widely to describe the relationship between people and the physical environment. It is generally used for framing the meanings and values that people attach to places (Larson, De Freitas & Hicks 2013; Williams & Stewart 1998) and may incorporate experiences of dependence, attachment, identity and satisfaction (Jorgensen & Stedman 2001; Stedman 2003). Such meanings and values are typically rich and varied (Williams & Stewart 1998), commonly based on a mix of cultural histories and natural features in a landscape (Larson et al. 2013), and develop as a result of biological, individual and sociocultural processes that take place while interacting with the physical environment (Hausmann et al. 2016). Many feel that SoP resides primarily in human experiences, interpretations and value endowment, rather than being intrinsic to the physical setting itself – 'space becomes place when we endow it with value' (Tuan 1977; but see Stedman 2003). At a sensory level, what people do (e.g. fish from the bank of a river), feel (e.g. grass under their feet or the warmth of the sun), hear (e.g. the sound of birds or the wind in the trees) and see (e.g. a seascape or forest) will contribute to their experiences in relation to a place. Such experiences are likely to change over time (e.g. different seasons) and space (e.g. vantage points) and to be mediated by memory of previous such experiences. While the natural sciences have found ways to measure, for example, changes in soundscapes (Pijanowski et al. 2011) and viewsheds (Camp, Sinton & Knight 1997), neither of these concepts has been incorporated into the predominantly social construct of SoP. Apart from the social variable, there may be many different ways in which SoP can be altered or lost through changes in physical appearance within a landscape, and thus its aesthetics. It stands to reason that such viewsheds (and associated place-value) may be transformed, or even destroyed, by evidence of human presence or activities. Viewshed transformation can intuitively be linked to structural developments, for example, housing on a lake shoreline (Stedman 2003), the presence of roads (Selva et al. 2011) or power infrastructure such as overhead pylons and wind turbines (Gee 2010). However, there are other, less explicit, human-mediated changes. One such 'slow' transformer of viewsheds may be non-native plants that have become invasive, especially large, woody tree species (Figure 1) – widely referred to as weeds, invaders or invasive alien plants (IAPs). While we acknowledge the variety of terms and connotations associated with describing IAPs (e.g. Richardson et al. 2000; Schlaepfer, Sax & Olden 2011), we choose to use the more neutral term 'non-native' except when explicitly referring to declared invasive alien species or 'weeds' as defined by the Conservation of Agricultural Resources Act (CARA, No. 43 of 1983, and amendments). The impacts of IAPs on biodiversity and ecosystems, and the resultant loss of ecosystem services such as water quality (e.g. Chamier et al. 2012; Van Wilgen et al. 2008), are widely known and are often the primary determinants for the allocation of funding and human resources to manage invasions (Marais, Van Wilgen & Stevens 2004). Less well understood is the impact of IAPs on cultural ecosystem services, such as those mediated through SoP experiences (but see Le Maitre et al. 2011), and this aspect is generally not considered when prioritising areas for IAP clearing and restoration.
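To illustrate what 'measuring a viewshed' can mean in practice, the following toy line-of-sight test on a gridded digital elevation model is our own illustration in R, not the method of the studies cited above; a full viewshed would repeat this test for every grid cell.

```r
# A target cell is visible from an observer cell if no intervening cell along
# the sight line subtends a larger vertical angle than the target itself does
visible_from <- function(dem, ox, oy, tx, ty, eye = 1.7) {
  n <- max(abs(tx - ox), abs(ty - oy))
  if (n == 0) return(TRUE)
  xs <- round(seq(ox, tx, length.out = n + 1))   # cells along the sight line
  ys <- round(seq(oy, ty, length.out = n + 1))
  h0 <- dem[ox, oy] + eye                        # observer eye height
  ang <- (dem[cbind(xs, ys)] - h0) / seq(0, n, length.out = n + 1)
  all(ang[n + 1] >= ang[2:n], na.rm = TRUE)      # target vs. intervening angles
}

set.seed(1)
dem <- matrix(runif(25 * 25, 0, 50), 25, 25)     # random 25 x 25 elevation grid
visible_from(dem, 1, 1, 25, 25)
```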
In this study, we use an interdisciplinary narrative approach, including the use of various pieces of 'evidence' and 'exhibits', to explore the potential importance of viewsheds as conservation features, specifically in the context of non-native (especially invasive) tree species. We restrict our attention to the Western Cape, the province regarded as the epicentre for the development of awareness about IAPs, mainly because of regional public reverence for the native vegetation of the globally recognised Cape Floristic Region (Bennett 2014). Furthermore, we focus on the Garden Route National Park (GRNP) along the southern Cape coast (Garden Route), a region widely known for its scenic beauty and an example of a PA embedded or integrated within a greater social-ecological landscape. Using mixed information sources, including media, unpublished studies, scientific literature and management documents, we explore the potential role of invasive plants (especially trees) in the experiences of visitors to this PA and speculate on how viewsheds may shape SoP associations and on the interplay between such associations and PA management. To this end, we present a conceptual model relating the concepts of viewshed and SoP to the high-level objectives in a management plan for a national park in South Africa. Lastly, we propose a research agenda to inform the future incorporation of viewsheds and SoP in park management decisions.

The evolving association between people, non-native trees and sense of place
When considering terms such as 'naturalness', or whether non-native trees have a place in natural viewsheds, it is important to acknowledge that associated perceptions develop over time. Societal perceptions influence, and are influenced by, the reigning utilitarian values (e.g. forestry or dune stabilisation) and socio-economic circumstances of the period (Baard & Kraaij 2014). Therefore, sentiments about a specific 'immigrant species' may change as a society does, or according to whether it is associated with a negative impact or trait, such as being invasive or harmful (Coates 2007). Perceptions may further depend on how informed individuals are, their level of knowledge of biodiversity, or on more personal values of aesthetics or notions of scenic beauty (Dhami & Deng 2010; García-Llorente et al. 2008) (also see Box 1 and Figure 2). A case in point is the black wattle (Acacia mearnsii), arguably one of the most problematic IAP species in coastal areas of the Western Cape, originally introduced to support the tanning industry (Carruthers et al. 2011). Especially for a botanist or invasion biologist today, it may be unsettling to travel through the Garden Route where black wattle, together with other Acacia and Pinus species (see photographs in Cowling et al.
2009), dominate the viewsheds of coastal, riparian, mountain and production landscapes (Henderson 1998). That the wattle is now considered unwanted in this region was not always the case, as shown by a letter written to the George and Knysna Herald on 13 September 1893, titled 'Wattle growing at Knysna':

Last Sunday the writer took a stroll 'over the hills', but by no means so 'far away' to the Nursery at Concordia, and would recommend to those of his fellow townsmen who are blessed with the aptitude for enjoying natures beauties to lose no time in hieing thither ere [hastening there before] the many varieties of acacia lose their magnificent bloom. The wattles are just now simply one blaze of bright yellow, and are really a sight worth beholding. (n.p.)

The writer then goes on to suggest that establishing wattle plantations in the area would not only provide economic benefits on otherwise 'idle ground' or 'sour waste lands', but that anyone who has seen them in full bloom (as described) would be supportive of his suggestions, presumably on aesthetic grounds. Indeed, wattle growing did become an important economic mainstay in the region well into the 1960s and was strongly supported by the local authorities, as shown by a piece titled 'The Municipal wattle plantations' in the 3 June 1914 edition of The Herald:

The public of George will learn with pleasure that at the last meeting of the Municipal Council it was unanimously resolved to prepare another 6 morgen of land for the planting of wattle. This is a move in the right direction and if steadily persevered in, this City will find itself within a few years the owner of a very valuable asset … (n.p.)

The above example shows that the lower value attributed to local landscapes and vegetation, and the higher value attributed to the non-native species, played an important role in promoting their deliberate spread. Contrast this with the present-day situation, where A. mearnsii is now one of the top invaders in the Garden Route (Baard & Kraaij 2014), especially of riparian habitats (Holmes et al. 2005). Country-wide, it has cost an estimated R62 million from 1997 to 2006 to control invasions by Acacia spp. alone (Marais & Wannenburgh 2008) through national initiatives such as Working for Water.

BOX 1: Appreciating natural viewsheds and sense of place (see Figure 2). EXHIBIT 1: Appreciating natural viewsheds. The appreciation of natural viewsheds, or the ability to detect the presence of non-native trees (whether invasive or not), often requires in-depth biogeographical and botanical knowledge or experience in identifying and interpreting ecosystem patterns in a landscape. Sense of place depends not only on the physical locality but also on the reference baseline or place attachment of the observer. Personal experiences can shape expectations; for example, European tourists may not experience pine trees as unusual or disturbing in the mountainous and forested landscape of the Garden Route, while Australian visitors may be reminded of places near 'home'.

The 'duality' in perspective on whether invasive species are 'good' or 'bad' has a strong cultural dimension (Tassin & Kull 2015) and ties closely to the leading discourse at any given time (Bennett 2014; Carruthers et al.
2011), but it can also vary between different industry sectors, for example, forestry and conservation. In King's 1951 paper 'Tree planting in South Africa', he stated:

There is a small section of the population who wage a wordy warfare against the planting of exotic trees. This element with fanatical zeal, presents only one side of the picture. Let us look at the other side. Criticism is often levelled against wattles on the Cape flats and pines on the mountains of the Cape peninsula. In order to put the matter in proper perspective we must go back to the time when much of the Cape flats was a barren waste of drift sands and the only trees on the mountains were contained in small patches … [on the Cape flats there are] … Only shrubs called blombos (Metalasia) and waxberry (Myrica) both of which are much less valuable than the Australian wattles … [that] yield excellent firewood. Can anyone be so foolish as to imagine that without wattles a population starved for firewood would not have stripped the mountains of woody vegetation? The claim can safely be made that indirectly the wattles have saved the mountain flora from extermination. (p. 13)

In a similar vein, King (1951) stated the following about pines:

Despite their high intrinsic value, pines have been described as weeds, mainly on the grounds that they tend to spread. This is only true of Pinus pinaster, which can readily be kept in check. The allegation that pine plantations are ousting indigenous vegetation is not entirely true, but, even if it were, it cannot be taken seriously, because the plantations occupy less than 3,000 of the 120,000 acres in the …

King (1951) did, however, recognise that exotic conifers may offer aesthetic and recreational benefits to people, especially in peri-urban areas. Such opposing perceptions about non-native species are also pervasive among members of the public, often reflecting an incomplete understanding of ecological processes or biodiversity conservation, as shown by the 62% of park visitors to the Addo Elephant National Park who did not consider the potential presence of introduced fauna to be in conflict with conservation objectives (Boshoff et al. 2008). Being uninformed can lead to the confusion of unrelated issues, for example, the felling of plantations being perceived as synonymous with deforestation of natural forests, or where 'saving' a tree – any tree – is seen as positive (Van Wilgen 2012). Perceptions are highly context-specific: for example, species that have been naturalised for a long time are not necessarily perceived as 'alien' even by traditional communities (Shackleton et al. 2007); or, despite knowing a species' alien status, a high utility value may reduce support for its outright eradication (De Neergaard et al. 2005). This may further vary according to socio-economic variables, where people of higher economic status or better education may rate non-consumptive values of indigenous plants (e.g. aesthetics) higher than poorer or less educated people (Le Maitre et al. 1997). The resultant conflict (between informed and uninformed parties) over non-native species (Dickie et al. 2014) may reduce the potential support for clearing IAPs in PAs, in particular those located close to urban areas, or 'embedded' in cities (Van Wilgen 2012).
This does not mean that the potential contribution of PAs in preserving natural viewsheds or landscapes on the basis of aesthetics has escaped recognition. Possibly the earliest local proponent of this cause was Wicht (1943) who – using the cluster pine Pinus pinaster (=maritima) as an example – suggested (almost prophetically, see Cowling et al. 2009; Kraaij, Cowling & Van Wilgen 2011) that exotic plants would, over time, dominate everywhere except in nature reserves and that 'To botanists and all other lovers of nature the thought that such a change is likely to come is very distressing' (p. 34). He further suggested that 'species that are spreading into natural vegetation … [are] undesirable from an aesthetic or scientific view' (p. 43) and (quoting the Forest and Veld Conservation Act of 1941) that nature reserves should be set aside for the 'preservation of natural scenery, forests, flora or fauna thereon' (p. 45) (Wicht 1943).

The above raises three important questions, which we discuss in the ensuing sections, using the GRNP and surrounds as example: … Under these management guidelines, protection is provided not only to existing natural assets, such as existing and proposed PAs, but also to transformed, so-called 'Productive Green Areas' that include existing agricultural and commercial forestry areas, which: have historically been, and should remain important sources of productive economic activity in the municipal area, as well as being contributors to the sense of place. (n.p.)

This role of SoP and the visual amenity of non-native vegetation thus finds legal application (which may be in conflict with the application of NEM:PA) in urban expansion developments. In a recent case at the coastal town of Plettenberg Bay on the Garden Route, part of a residential development was not granted environmental authorisation in order to retain a stand of mature, non-native Eucalyptus trees alongside an indigenous forest on the same property.

Park management plans
A somewhat more detailed consideration of viewsheds is found at the park management plan level. For example, the Garden Route National Park Management Plan (GRNPMP; SANParks 2012) – and in fact all other park management plans – makes generic provision for viewshed protection areas and defines a Viewshed Protection Zone as 'an area where any developments should be screened to prevent excessive impact on the aesthetic appeal of the park'. The GRNPMP recognises five different zones (Table 1, Figure 2) that stipulate limits of acceptable change in terms of aesthetics and recreational activities, including consideration of facilities and infrastructure development and visitor numbers. Although terms such as 'wild appearance and character', 'natural appearance' and 'wilderness characteristic' are used to distinguish between the different types of zones, there is no specific mention of IAPs and their potential influence on the aesthetic appeal of these zones, or on visitors' experiences. We are unaware of any initiatives in the national parks through which viewsheds, or the SoP experiences of visitors in relation to viewsheds, are being monitored. This might be a function of NEM:PA providing only limited support for monitoring and reporting against progress in implementing plans, with legal obligations to monitor largely confined to the impact of revenue-generating activities.
Public perceptions towards invasive alien plants in protected areas

Information on the perception of park visitors regarding the presence of IAPs in viewsheds, or more generally on SoP in South African PAs, appears to be non-existent. Here, we present results from an exploratory and an opportunistic survey. The first was conducted during 2013 in the Knysna Section of the GRNP at three sites: Spitskop Viewpoint and the Fisantehoek and Sinclair Huts on the Outeniqua Hiking Trail (Figure 3). The second, opportunistic survey was conducted along the course of the 5-day Tsitsikamma Hiking Trail (in January 2014), which crosses mountain fynbos and indigenous forest areas managed by South African National Parks (Tsitsikamma section of the GRNP) and pine plantations (Cape Pine).

In the first survey, a composite panoramic photograph was taken of the available viewshed at each of the three sites, and each photo was delineated into numbered sections based on the visible extent of IAP coverage (see examples in Figure 4a), but excluding barren areas such as roads. For each of the identified numbered sections, this was equated to actual IAP densities. Overnight hikers and day visitors were approached between 08 May and 07 July 2013 and shown a photo of the viewshed (with numbered sections) at the given site. They were then asked to score each section on how it influenced their viewshed experience, where 1 = a reduced experience, 2 = no effect and 3 = an enhanced experience. They were asked to motivate each score: every time IAPs were mentioned, it was recorded against the specific section. The survey was concluded with the question, 'Why did you specifically choose to visit this site?' If the answer included words or phrases alluding to the aesthetic appeal of the site (e.g. 'naturalness', 'wildness', 'prettiness'), it was recorded (see Box 2 and Figure 5).

BOX 2: Results from visitor surveys in the Knysna Section of the Garden Route National Park.
In total 73 visitors were interviewed: 38 at Fisantehoek Hut, 29 at Sinclair Hut, and 6 at Spitskop Viewsite. This yielded a total of 508 experience scores for all 25 numbered sections (38 respondents × 4 sections for Fisantehoek Hut; 29 respondents × 10 sections for Sinclair Hut; 6 respondents × 11 sections for Spitskop Viewpoint; see Figure 4). Overall, 57 respondents mentioned invasive alien plants (IAPs) at least once, bearing in mind that they had the opportunity to do so for every numbered section. This suggests that there is a relatively high awareness about IAPs in the surroundings, but this does not necessarily translate into negative viewshed experiences, especially when it comes to mature trees of any type (Figure 5).

During the second, opportunistic survey, hikers encountered were asked three questions. This was after hiking for at least 2 days through a landscape with a high prevalence of non-native trees, including dense stands of pines (see Figure 6) and wattles in the fynbos sections (see Box 3).
Synthesis of insights

Our varied assemblage of evidence suggests that SoP, although provided for in legislative spheres and by national park and environmental management, remains a poorly developed concept in South Africa. Where and when considered, it rarely relates to a holistic appreciation of viewsheds, landscapes or biodiversity, including non-native trees. Specifically for the GRNP, the extant zonation scheme (Table 1) recognises changes to aesthetics due to human activities, albeit at a very coarse scale, but it does not consider non-native trees or IAPs. The same applies to the demarcated Viewshed Protection Zone (Figure 3) where, ironically, high IAP densities may occur in areas zoned as 'Remote' or along the Outeniqua Hiking Trail (Figure 2c), particularly in fynbos vegetation. In buffer zones, our examples show that over the past century, non-native trees have become established features of iconic view sites (Figure 1) and main tourist routes (Figure 2b). This is indicative that wattles and pines continue to form part of a publicly acceptable viewshed, firstly, due to the historic role of these non-native trees in the development of the region, and secondly, due to the lack of a collective mental model of what a representative Garden Route viewshed should look like. Thus, while Le Maitre et al. (2011) argue that invasive Australian acacias can negatively affect both tourist experience and SoP by reducing 'landscape diversity' and degrading recreational areas, our exploratory surveys suggest that this relationship is not so straightforward. Even where park visitors were aware of non-natives, such knowledge did not necessarily translate into negative experiences (as suggested by results from our survey - Figure 5a), and it seems that mature (> 2 m) non-native trees may even contribute to positive scenic or aesthetic experiences. Likewise, while some hikers displayed fairly good understanding of invasive pines and alluded to negative experiences, others were apparently oblivious or tended towards absolute biocentricity - simply enjoying any trees (Box 3). Therefore, while scientists may be able to distinguish between ecosystem service benefits offered by initial introductions of non-natives, and ecosystem service losses due to subsequent invasions, this may not be intuitive for non-scientists, especially when such species also provide tangible benefits (e.g. De Neergaard et al. 2005). This may hinder public support for, or understanding of, clearing and restoration efforts.

The same applies at park management level, where managers may be aware of the negative impact of IAPs on biodiversity and the provision of ecosystem services, but may not value the preservation of aesthetics (e.g. in California - Funk et al.
2013) or consider the potential impact on tourism (Forsyth et al. 2012) as a reason to combat IAPs. This is similar to attitudes towards non-native ornamental species that are found in rest camps in Kruger National Park (Foxcroft, Richardson & Wilson 2008), where it required education and increased awareness of staff to gain support for the removal of such plants. Similarly, tourists in Pilanesberg who were aware of the invasive prickly pear (Opuntia stricta) indicated a willingness to contribute financially to its control (Nikodinoska et al. 2014). In an 'open-access' scenario such as the GRNP, where visitors do not necessarily have to pass through a gate to experience the park, the demarcation between PAs, surrounding buffer zones and the rest of the landscape is less easily observable. Given that non-native trees are entrenched as part of the Garden Route's cultural heritage, it will require a particularly pragmatic (and creative) approach to use feedback from visitor SoP experiences to inform park management.

The role of shifting baselines in sense of place

Given how attachment to place and SoP develop over time and are closely related to individual baselines, it is worth considering how the concepts of naturalness or reference states are formed. Retaining the historical character of a region may serve as motivation for IAP eradication, but over time people become accustomed to non-natives (Schlaepfer et al. 2011). There may be differences between the experiences of local and foreign visitors, making 'novel' ecosystems more acceptable to some due to a lack of historic perspective, or personal familiarity with similar viewsheds elsewhere.

Internationally, there is some indication that the way in which people appreciate natural areas is evolving. A recent case study in the River Piedra floodplain (Spain) has shown that over the past 50 years, there has been a positive shift towards appreciation of social and cultural ecosystem services, including aesthetics, inspiration and SoP (see Figure 5 in Felipe-Lucia, Comín & Escalera-Reyes 2015). Similarly, land managers of The Nature Conservancy in the USA consider impacts of non-natives on aesthetics at least as important as the degradation of other (provisioning) ecosystem services (Kuebbing & Simberloff 2015). While restoration and IAP clearing can have positive effects on ecosystem service provision and tourism in certain biomes or vegetation types (e.g. fynbos - Currie, Milton & Steenkamp 2009), the 'human element' also needs due consideration. An example from Finnish national parks showed that natural characteristics such as scenery and biotype diversity are significant determinants of park visitation, along with factors such as recreational opportunities, for example, the availability of trails (Neuvonen et al. 2010).

While it is easy to understand the danger of an 'absolute ecocentric' attitude (Sharp et al.
2011), which would oppose the clearing of any tree at all costs, a single-minded, negative focus on non-native species without considering other factors may be counter-productive. In the case of the Garden Route, the contribution of non-native trees and plantation forestry to the historic heritage (e.g. 'flower gums', Figure 2d) deserves consideration. There are legal mechanisms to guide some of these trade-off situations, such as the 'Champion Tree' project of the Department of Agriculture, Forestry and Fisheries (in terms of Section 12 of the National Forests Act of 1998), which recognises individual specimens or clumps of non-native trees of historic value or exceptional size. Also, the South African National Heritage Resources Act (Act No. 25 of 1999) mandates protection of 'heritage objects', including natural landscapes of cultural significance, and even non-native trees associated with such landscapes, for example, historic arboretums. This emphasises the need for a deliberate process of consultation, informed by history, public participation and science, to agree on acceptable viewsheds, both native and non-native, that contribute to SoP.

Conceptual framework for incorporating viewsheds and sense of place in the management plans of national parks

In South Africa, the NEM:PA makes it obligatory for authorities responsible for PAs to develop a management plan for each such area, to submit this plan for the approval of the Minister responsible for the environment and to manage the PA in accordance with the approved plan. In the case of SANParks, an adaptive planning approach is followed to formulate a hierarchy of objectives that serve as a basis for developing management plans for the various national parks (see Biggs & Rogers 2003; Foxcroft & McGeoch 2011). Adaptive planning takes place in consultation with relevant organs of state, local communities and other affected parties.

In Figure 7 we propose a conceptual framework for linking the concepts of viewshed and SoP to the typical high-level management objectives of a national park. According to our framework, viewsheds (and other sensory features such as soundscapes) can be described in both biophysical and social or cultural terms. SoP experiences can be viewed as an emergent property of the interaction between people and the environment. Such experiences are mediated by factors such as community history, identity and value systems and can be facilitated or hindered by park management actions. Regarding the latter, there is an obvious interplay between management action and visitor experiences: while conservation initiatives have the potential to build on existing, or to create new, SoP associations (Larson et al. 2013), being aware of SoP experiences can be an effective driver of conservation actions (Ardoin 2014).

Several arrows in Figure 7 indicate bi-directional influence. Such two-way feedbacks should be important considerations in designing monitoring and management interventions for incorporating viewsheds and SoP into park management plans.
Concluding thoughts and proposing a research agenda

In this article, we have explored the potential impact of IAPs on viewsheds, within the broader context of SoP as experienced by visitors to the GRNP. The novelty of our study also results in limitations, in that insights derived from focussing on IAPs, the GRNP and park visitors are not necessarily applicable to other landscape transformers, PAs or sectors of society. However, our study also highlights some pertinent points with generic relevance to PAs in South Africa.

Firstly, the mixed information sources considered in this study suggest that viewshed and SoP are important conservation features from both conceptual and legal perspectives. As such, these concepts need to inform conservation action and therefore should be incorporated into park management plans.

Secondly, viewshed and SoP should be considered through both natural and social lenses to facilitate discussions of the 'desired future conditions' of landscapes under conservation from both ecological and social perspectives (Williams & Stewart 1998). To this end, viewshed and SoP can potentially serve as 'boundary concepts' to promote interdisciplinary learning between social and natural scientists as well as communication between science, management and stakeholders (Chapin III & Knapp 2015).

Thirdly, the links and feedbacks between conservation features such as viewshed and SoP, disturbances such as non-native and invasive plants, and various park management objectives are multiple and intricate. These relationships may straddle human history (shifting baselines) and thus environmental and social contexts. Place-specific monitoring will be required to meaningfully incorporate these concepts into park management practice.

Fourthly, the current lack of formal research in South Africa on viewsheds and SoP, especially relating to national parks and their buffer areas, represents a considerable void in our understanding of the relationship between park management and visitor experiences. Some studies on visitors' motivations to visit South African PAs have identified activities such as photography (e.g. Saayman, Saayman & Ferreira 2009), implying recognition of scenic values. However, there is a need to explicitly evaluate the role and value of SoP and natural viewsheds in PAs as well as the potential implications that may result from various threats. Threats are likely to be region- and context-specific (e.g. IAPs in the GRNP, hydraulic fracturing activities and infrastructure in the Karoo parks, and wind farms in coastal areas), further emphasising the breadth of research opportunities.

How do the relationships depicted in Figure 7 play out in the real-world setting of a specific park? We conclude by proposing research questions that could serve as a basis from which to develop a more comprehensive research programme for improved appreciation of viewsheds and SoP as conservation constructs. Firstly, we consider questions related to viewsheds:

• Considering a broad definition of biodiversity, encompassing genetic, species and ecosystem (including habitat) diversity, could viewshed serve as a surrogate feature (similar to an umbrella species) for conservation?
• Is the preservation of natural viewsheds and associated SoP included or provided for in current national legislation relating to PAs?
• Do park management plans recognise the potential impact of IAPs on viewshed and SoP as conservation features?
• Do visitors to PAs value 'natural' viewsheds and perceive the presence of IAPs as a threat to SoP in such areas?
Source: a, c, d and f were provided by Lynne Thompson, George Museum Research Library; b, e and g were taken by Jaco Barendse.

FIGURE 1: Non-native trees and invasive alien plants are examples of 'slow transformers' (compared to other human developments) of landscapes and viewsheds in and around the Garden Route National Park (also see Figure 3), as shown by photographs taken in different years near the iconic 'Kaaimansgat' - where the Kaaimans and Swart Rivers enter the Indian Ocean - located within the western Buffer/Viewshed Protection Zone of the GRNP. The view to the north from the Dolphin Point lookout in 1929 (a) and 2015 (b); the view to the south from next to the N2/Kaaimans Pass, pre-1910 (c), 1927 (d) and 2015 (e) - note the cellular phone tower 'disguised' as a pine tree shown by the arrow and insert; and the 'Map of Africa' Viewpoint, pre-1910 (f) and in 2015 (g).

FIGURE 2: Four scenes, three from around the Garden Route National Park: views from (a) the 'Seven Passes Road' near the Woodville 'Big Tree', looking towards the Upper Touw River Catchment; and (b) Donaghy's Hill (42°11'47.04"S, 145°56'01.97"E) in the Franklin-Gordon Wild Rivers National Park, Tasmania; (c) the area north of Karatara in the Garden Route National Park, looking towards the mountains traversed by the Outeniqua Hiking Trail; (d) the non-invasive Australian red flowering 'gum' trees Corymbia ficifolia in full bloom at Bergplaas forestry station, a common and much-loved sight in the Garden Route; along with related alien Eucalypts (Rejmánek & Richardson 2011; Van Staden 2015), many consider these part of the region's natural-cultural heritage.

FIGURE 3: The location of the Garden Route National Park Wilderness and Knysna Sections, main geographical features and places, the Viewshed Protection Zones defined in Table 1, the relative density of invasive alien plants within the GRNP, and selected points of interest mentioned in the text or illustrated in other figures.
FIGURE 4: Examples of photographs shown to park visitors to determine viewshed preferences and assess awareness about invasive alien plants: views at (a) Fisantehoek Hut and (b) Sinclair Hut, showing 4 and 10 delineated sections, respectively, to illustrate the methodology used; and (c) a composite panoramic photo of the view at Spitskop Viewpoint (11 sections, not shown).

FIGURE 6: Invasive pine trees growing along the Tsitsikamma Hiking Trail may, or may not, impact on the viewshed and sense of place experiences of hikers (Box 3). Source: Photo by Dirk Roux.

FIGURE 7: Schematic showing how viewshed and sense of place relate to a typical set of high-level objectives that serve as a basis for developing park management plans in South African National Parks. Viewshed and sense of place are cross-cutting concepts likely to be influenced via multiple objectives and in turn impact on the Responsible Tourism Objective.

BOX 3: Responses of nine hikers on the Tsitsikamma Hiking Trail to an informal survey about their experience of viewsheds, sense of place and invasive alien plants along the trail. The results suggest varying levels of awareness about invasive alien plants and different appreciation of the natural surroundings. It appears that more obvious signs of human impacts were found more disturbing than pine trees.

Question 1: Have you seen a particularly pleasing view on the trail?
1. 'Looking down at Nature's Valley was a pristine scene; waterfall at Bloukrans [overnight hut]; stream at picnic site'
2. 'View from Bloukrans of mountains; forests, large pool, fynbos flowers'
3. 'View of mountains - sense of space; some evidence of human activity but no noticeable human presence; pools and waterfalls; deep forest and rays of sunlight through trees'

Question 2: Have you noticed alien (that do not naturally belong) plants along the trail? If so, which ones and how did you feel about them?
…
7. '… Tomato plant at overnight facility'
8. 'Yes: pine trees - I like trees so the more trees the merrier'
• Should South Africa be concerned with conserving a representative sample of natural viewsheds (e.g. per bioregion or biome) and to what degree can or should national parks contribute to such a purpose?
• What are 'representative' or 'iconic' viewsheds for specific PAs in terms of historic naturalness and biogeography?
• How should specific sites for representative and iconic viewsheds be identified? Should such viewsheds be restored where they no longer exist, and which methods should be used to reconstruct acceptable baselines (e.g. soliciting park visitors to submit historic photographs of chosen sites)?
• What are the main threats to, and modifiers of, natural or cultural viewsheds, and how do these affect the SoP experiences of visitors?
• What is the role of buffer zones in viewshed conservation?
• Should thresholds of potential concern (TPCs - Biggs et al. 2011) be developed for viewsheds, and how could such TPCs inform monitoring (e.g. through fixed-point photography) for compliance with set objectives?

Questions related to SoP are:

• How should SoP experiences (based on feedback from stakeholders and visitors) be considered in the design, establishment and management of PAs?
• Can we characterise SoP experiences for each national park and surrounding areas?
• How do activities such as guided hikes, animal tracking (e.g. cheetah tracking in Mountain Zebra National Park), trail running and mountain biking (in the GRNP) influence the SoP experiences of participating and other visitors to these PAs?
• How do SoP experiences differ across age groups, cultures and nationalities of visitors, as well as local versus non-local residents, or day versus overnight visitors?
• Are the dynamics of SoP experiences different in open-access PAs to those in fenced-off PAs with distinct boundaries?
• What is the relationship between individual and collective experiences in developing attachment to place?

Implementing a research agenda as suggested here could significantly contribute to people-centred conservation while at the same time promoting South African National Parks' vision of 'connecting to society' (http://www.sanparks.co.za/about/connecting_to_society/).
2019-04-01T13:14:31.307Z
2016-08-05T00:00:00.000
{ "year": 2016, "sha1": "34ea0d1c1941a31848f96119c5b472783c2796bd", "oa_license": "CCBY", "oa_url": "https://koedoe.co.za/index.php/koedoe/article/download/1357/1889", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "34ea0d1c1941a31848f96119c5b472783c2796bd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
252370689
pes2o/s2orc
v3-fos-license
Characteristics of soil microbiota and organic carbon distribution in jackfruit plantation under different fertilization regimes

Manure amendment to improve soil organic carbon (SOC) content is an important strategy to sustain ecosystem health and crop production. Here, we utilize an 8-year field experiment to evaluate the impacts of organic and chemical fertilizers on SOC and its labile fractions as well as soil microbial and nematode communities in different soil depths of jackfruit (Artocarpus heterophyllus Lam.). Three treatments were designed in this study, including control with no amendment (CK), organic manure (OM), and chemical fertilizer (CF). Results showed that OM significantly increased the abundance of total nematodes, bacterivores, bacteria, and fungi as well as the value of nematode channel ratio (NCR) and maturity index (MI), but decreased plant-parasites and Shannon diversity (H′). Soil microbial and nematode communities in three soil depths were significantly altered by fertilizer application. Acidobacteria and Chloroflexi dominated the bacterial communities of OM soil, while Nitrospira was more prevalent in CF treatment. Organic manure application stimulated some functional groups of the bacterial community related to the C cycle and saprotroph-symbiotroph fungi, while some groups related to the nitrogen cycle, pathotroph-saprotroph-symbiotroph and pathotroph-saprotroph fungi predominated in CF treatment. Furthermore, OM enhanced the soil pH, contents of total soil N, P, K, and SOC components, as well as jackfruit yield. Chemical fertilizers significantly affected available N, P, and K contents. The results of network analyses show that more significant co-occurrence relationships between SOC components and nematode feeding groups were found in CK and CF treatments. In contrast, SOC components were more related to microbial communities than to nematodes in OM soils. Partial least-squares-path modeling (PLS-PM) revealed that fertilization had significant effects on jackfruit yield, which was composed of positive direct (73.6%) and indirect effects (fertilization → fungal community → yield). It was found that the long-term manure application strategy improves soil quality by increasing SOM, pH, and nutrient contents, and the increased microbivorous nematode abundance enhanced the grazing pressure on microorganisms and concurrently promoted microbial-derived SOC turnover.
Abbreviations: CK, no fertilization; CF, chemical fertilization; OM, organic manure; SOC, soil organic carbon; MBC, microbial biomass carbon; POC, potassium permanganate-oxidizable carbon; DOC, dissolved organic carbon.

Introduction

Soil is the most active carbon pool in the ecosystem, with an organic carbon stock of 1,500 Gt in the first meter (Balesdent et al., 2018). Soil organic carbon (SOC) is an important indicator for soil quality assessment because it contributes to the modification of biological, physical, and chemical properties of soil. Due to the large variations in environmental conditions (geography and climate) as well as the background of relatively stable soil organic C, it is difficult to detect changes in SOC in the short and medium term (Haynes, 2005). Physicochemical properties and turnover time influence the degree of stabilization of SOC components. Labile organic carbon (LOC) is a small component of SOC, which includes potassium permanganate-oxidizable C (KMnO4-C), microbial biomass carbon (MBC), and dissolved organic carbon (DOC) (Haynes, 2005). The labile C fraction responds to fertilization management more quickly than SOC and exhibits rapid turnover times (Mi et al., 2019). Hence, these components are considered early indicators of soil quality and affect soil function in specific ways (Blanco-Moure et al., 2016). Measurement of a single fraction of LOC does not adequately reflect the changes in soil quality caused by management. Instead, it is necessary to measure several LOC components simultaneously to estimate the effects of management on soil properties.

The application of organic manure (OM) (e.g., residues, compost, and manure) is an effective way to increase soil C storage through direct C inputs and/or an indirect increase in net primary productivity and root litter and exudation, which contributes mostly to soil sequestered or stable C and the composition of the soil microbiota (Sokol and Bradford, 2019; Lazcano et al., 2021). Different feeding habits of nematodes affect the composition and function of soil microbial communities. Generally, the top-down regulation of predators by microfauna positively influences the microbial biomass and community structure (Neher, 2010). The highly complex network between nematodes and microbes in soil plays a vital role in SOC conversion (Jiang et al., 2018). The composition and abundance of microbes that release plant-available nutrients from organic fertilizers was strongly correlated with the mineralization of organic carbon (Zhang H. et al., 2015).
However, it is still difficult to explain the relationship between microbiota and SOC components when soils are amended with exogenous organic resources. The composition of SOC in surface soils and its association with environmental variation has been extensively investigated over the years (Doetterl et al., 2015). In a 26-year application of fertilization strategies, it was reported that OM can increase the concentrations and proportions of labile C as well as the stock of stable C in topsoil. Qaswar et al. (2020) reported that the 34-year application of manure and inorganic fertilizers increased crop yield sustainability and the organic carbon sequestration rate in the top layer. By comparison, soils deeper than 20 cm below ground contain more than half of global SOC pools (Rumpel et al., 2012). Furthermore, microbial community structure, carbon availability, and composition often change with soil depth (Stone et al., 2014). Nevertheless, the composition and preservation of SOC components in deeper layers are poorly understood, especially when it comes to the stability and function of soil biota in tropical agroecosystems.

Over the last 20 years, jackfruit (Artocarpus heterophyllus Lam.) has been widely cultivated in tropical and subtropical regions of China due to its high economic benefits. Widespread and inappropriate fertilization regimes [e.g., excessive chemical fertilizer (CF) inputs] have adverse effects on soil C sequestration due to the acceleration of C mineralization (Brown et al., 2014). Organic amendments and the replacement of CF are increasingly recommended as effective measures to supplement soil C sources in orchards (Maltas et al., 2018). As a deep-rooted fruit tree, jackfruit has its main absorption roots distributed in the 0-60 cm soil layer. In this study, we focused on the effects of long-term OM on soil microbial and nematode communities as well as organic carbon distribution in different soil layers. We aimed (1) to investigate the distribution of total and labile organic C in three soil depths (0-20, 20-40, and 40-60 cm) under different fertilization patterns; (2) to evaluate the impact of different fertilization patterns on the abundance and composition of the soil microbiota; and (3) to explore and describe the relationships among the various components of SOC and the soil microbial and nematode communities.

Materials and methods

Experimental design and sample collection

The long-term fertilization experiment commenced in the town of Gaolong in Wanning City, Hainan Province, China (18.737° N, 110.192° E) with an 8-year jackfruit monoculture, including three treatments with triplicates in a random plot design. The individual plot of each treatment consisted of 20 jackfruit trees within 450 m² (25 m × 18 m). Three treatments, OM, CF, and control (CK, without any amendment), were applied to the field since jackfruit was planted. The CF treatment was adjusted to the same amounts of N, P, and K as OM by applying urea, calcium magnesium phosphate, and potassium chloride, respectively (Table 1). Information about the study site and the characteristics of the manure used has been described in detail in our previous manuscript (Su et al., 2021).
To evaluate the reproducibility of the experiment, a total of fifty-four soil samples (3 treatments × 3 depths × 3 biological replicates × 2 sampling times) from three different depths (0-20, 20-40, and 40-60 cm) were collected from six random sites under the trunk base of each tree in the treatment plots in June 2019 and 2020. Composite samples of the six sites per plot were collected with a shovel. Each sample was collected in an independent sterile plastic bag, sealed, and homogenized thoroughly. Nematodes were identified from about 200 g of fresh soil per sample, chemical analyses were performed on 100 g of soil that had been air-dried and sieved (<1 mm), and soil DNA was extracted from 50 g of soil gently sieved through a 2 mm sieve and stored at −80 °C. The total jackfruit yield in each treatment was determined by weighing all harvested mature fruits in each plot.

Soil nematode determination

A modified cotton-wool filter method was used for nematode extraction. The number of nematodes was expressed as the number of individuals per 100 g dry soil. At least 100 nematodes were randomly selected from each sample and identified into four trophic groups: bacterivores (Ba), plant-parasites (Pp), fungivores (Fu), and omnivores-predators (Op) (Yeates et al., 1993). Where the total number of nematodes in a sample did not reach 100, all nematodes were identified. The guilds were characterized on the colonizer-persister (c-p) scale (1-5) as previously described (Bongers and Bongers, 1998). The ecological indices of soil nematodes have been described in our previous manuscript (Su et al., 2021) and were calculated as follows: maturity index (MI) and Shannon diversity (H′) for genera, and nematode channel ratio (NCR) for detecting the decomposition pathways of soil organic matter. We visualized the potential differences of nematode communities in soil using principal coordinate analysis (PCoA) based on the Bray-Curtis dissimilarity matrix generated from nematode abundances. The effect of fertilization on nematode community structure was studied using a permutational multivariate analysis of variance (PERMANOVA) with 999 permutations by the Adonis function (vegan package) in R (Ginestet, 2011).

DNA extraction, quantification of the total soil microbial biomass and Illumina sequencing

A PowerSoil™ DNA Isolation Kit (MoBio Laboratories Inc., Carlsbad, CA, United States) was used to extract total DNA from 0.25 g of soil, following the manufacturer's instructions. The quality of the DNA was assessed with a spectrophotometer (NanoDrop 2000, United States). The total numbers of soil bacteria and fungi were quantified by real-time PCR performed according to the methods described by Su et al. (2021). The bacterial 16S rRNA gene V4 hypervariable region was amplified with primers 520F and 802R (Claesson et al., 2009) from soil genomic DNA, while the fungal ITS1 region was amplified using primers ITS1F and ITS2R (Mueller et al., 2014). Sequencing was performed on the Illumina MiSeq PE250 platform (Illumina, Inc., CA, United States) at Personal Biotechnology Co., Ltd., Shanghai, China. The sequence data were made available in the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) database under BioProject number PRJNA836735.
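The nematode indices defined in the subsection above are simple functions of the genus count table. The following is a minimal, illustrative R sketch (R being the environment already used for the ordination analyses) using the standard formulations of H′, NCR and MI; the objects 'counts' (a genus-by-sample count matrix) and 'traits' (a matching lookup of each genus's c-p value and trophic group) are hypothetical names of ours, not part of the original analysis pipeline:

shannon_H <- function(x) {              # Shannon diversity H' over genera
  p <- x[x > 0] / sum(x)
  -sum(p * log(p))
}
ncr <- function(x, traits) {            # nematode channel ratio: Ba / (Ba + Fu)
  ba <- sum(x[traits$group == "Ba"])
  fu <- sum(x[traits$group == "Fu"])
  ba / (ba + fu)
}
mi <- function(x, traits) {             # maturity index: abundance-weighted mean c-p value
  keep <- traits$group != "Pp"          # computed over free-living taxa only
  sum(traits$cp[keep] * x[keep]) / sum(x[keep])
}

H   <- apply(counts, 2, shannon_H)              # one value per sample
MI  <- apply(counts, 2, mi,  traits = traits)
NCR <- apply(counts, 2, ncr, traits = traits)

NCR values close to 1 indicate a bacterial-dominated decomposition channel, which is how the index is interpreted in the Results below.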
Bioinformatics analyses

The adaptors and primer sequences were removed, and the raw sequences were demultiplexed according to unique barcodes. Paired-end reads for all samples were run through Trimmomatic (version 0.35) to remove low-quality base pairs using the parameters SLIDINGWINDOW: 50:20 and MINLEN: 50. The trimmed reads were merged using the FLASH program (version 1.2.11) with default parameters (Magoč and Salzberg, 2011). Briefly, low-quality sequences were removed with the screen.seqs command using the following filtering parameters: maxambig = 0, minlength = 100, maxlength = 580, maxhomop = 8. The retained sequences were assigned to operational taxonomic units (OTUs) at a threshold of 97% identity using the UPARSE pipeline (Edgar, 2013). Taxonomic assignment was performed using the SILVA reference database (v12_8) (Quast et al., 2013) for bacteria and the UNITE database (v7.0) (Kõljalg et al., 2013) for fungi, with a confidence score ≥0.6, by the classify.seqs command in mothur (Schloss et al., 2009). The taxonomic information of each OTU (from phylum to species) was classified based on NCBI. Alpha diversity was estimated using the Chao1 richness, Shannon, and phylogenetic diversity indices, which were calculated from neighbor-joining phylogenetic trees generated with mothur and plotted in R. To explore the major similarity and variance components of soil microbial community structures, PCoA based on Bray-Curtis distance was performed on the OTU matrices and sample grouping data in R. PERMANOVA was performed to evaluate the effects of fertilization and soil depth on microbial community structure (Kusstatscher et al., 2020).

To visualize the associations among nematodes, bacteria, fungi, and SOC components in a network interface, a correlation matrix was built from all possible pairwise Spearman's rank correlations. The abundance matrices of nematode genera and of bacterial and fungal phyla were standardized to relative abundances for network construction. A valid co-occurrence was considered a statistically robust correlation between taxa with a Spearman's correlation coefficient (rho) >0.6 and a P-value <0.01 (Shannon et al., 2003). The network analyses and topological characteristics of the networks were computed in R as described by Su et al. (2021). A partial least squares path model (PLS-PM) was carried out with SmartPLS (Sarstedt and Cheah, 2019) to evaluate the direct and indirect effects of fertilization, SOC components (SOC, POC, DOC, and MBC), and microbial and nematode community composition on the jackfruit yield. Microbial and nematode compositions were used as latent variables, reflecting the relative abundance of each phylum and trophic group, respectively. The goodness of fit of the PLS-PM was evaluated by examining the Goodness-of-Fit index and the coefficient of determination (R²) of the latent variables. To investigate the functions of the bacterial and fungal communities, Functional Annotation of Prokaryotic Taxa (FAPROTAX) and FUNGuild, respectively, were used to identify potential functions in the different treatments via default settings based on the taxonomic information of the microorganisms (Louca et al., 2016).
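To make the co-occurrence filtering rule above concrete, the following hedged R sketch keeps only taxon pairs with |rho| > 0.6 and P < 0.01 and turns them into a network; 'abund' is a hypothetical sample-by-taxon matrix of standardized relative abundances, and this illustrates the filtering logic rather than reproducing the exact pipeline of Su et al. (2021):

library(Hmisc)     # rcorr() returns correlation and P-value matrices together
library(igraph)

rc  <- rcorr(as.matrix(abund), type = "spearman")
adj <- rc$r
adj[abs(rc$r) <= 0.6 | rc$P >= 0.01 | is.na(rc$P)] <- 0   # drop weak or non-significant pairs
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)
g <- delete_vertices(g, which(degree(g) == 0))            # remove unconnected taxa
mean(E(g)$weight > 0)                                     # share of positive links (P%)

Topological descriptors such as average path length and average clustering coefficient can then be read off the same graph object (mean_distance(g), transitivity(g)).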
Determination of soil physicochemical properties

A glass electrode meter was used to determine soil pH at a ratio of 1:5 after 30 min of shaking. Soil organic carbon was measured with the potassium dichromate external heating method. Dissolved organic carbon was measured with a Micro 2000 N/C analyzer (Analytik Jena) (Jones and Willett, 2006). Potassium permanganate-oxidizable carbon (POC) was measured as described by Blair et al. (1995). Microbial biomass carbon was analyzed using the fumigation-extraction method (Vance et al., 1987). Total soil nitrogen (TN) and alkali-hydrolyzable nitrogen (AN) contents were measured with Kjeldahl digestion and the alkaline-hydrolyzable diffusion method, respectively. Total phosphorus (TP) was determined by digesting soil samples in acid (HClO4-H2SO4), followed by estimation on a spectrophotometer using the molybdenum blue method. Available phosphorus (AP) was extracted with sodium bicarbonate and then also measured by the molybdenum blue method. Total potassium (TK) was extracted by the sodium hydroxide fusion method and determined with a flame photometer. Readily available potassium (AK) was determined by flame photometry after extraction with ammonium acetate.

Statistical analyses

SPSS version 20.0 statistical software (SPSS Inc., Chicago, IL, United States) was used to perform one-way analysis of variance (ANOVA) and Duncan multiple range tests on all parameters to examine the significance of differences at P < 0.05. The figures were prepared in Origin 2016 and the results are reported as mean ± standard error (SE).

Results

Fertilization affects nematode community assembly in different soil depths of jackfruit

The numbers and abundances of prevalent nematode taxa confirmed the impact of OM on the nematode community (Supplementary Tables 1, 2). The bacterivore genera Cephalobus and Mesorhabditis and the fungivore genus Tylencholaimus were more abundant in the OM treatment in both field experiments. Moreover, significantly higher numbers of bacterivores (Geomonhystera and Acrobeloides) and omnivores-predators (Labronema) were observed in OM soil (P < 0.05), while lower numbers of plant-parasites (Rotylenchulus, Meloidogyne, and Tylenchorhynchus) were detected in the second-year experiment. There was no obvious distinction in variation among different soil depths under the same treatment.

Organic manure significantly increased total nematodes in all soil depths compared with the other treatments (Table 2). Among the trophic groups, bacterivores and fungivores were significantly enriched in the OM treatment at soil depths of 0-20 and 20-40 cm in the first-year experiment, respectively (P < 0.05). Bacterivores, fungivores and omnivores-predators were relatively more abundant in OM soil, and the number of plant-parasites in all soil layers was the lowest, in the second-year experiment. Fertilization had a certain effect on the ecological indices in the first-year experiment, and the values of NCR were higher in the fertilizer treatments. The MI in OM soil was the highest at a depth of 0-40 cm (P < 0.05). Higher values of H′ were found in the CF and OM treatments at a soil depth of 20-60 cm. The values of NCR were higher in the CF treatment at 0-40 cm, as well as in OM soil at a depth of 40-60 cm.
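The treatment comparisons reported in this and the following subsections rest on the ANOVA plus Duncan procedure described under Statistical analyses, which was run in SPSS; a minimal R analogue (illustrative only, with a hypothetical data frame 'dat' holding one response variable, e.g. total nematode abundance, and the treatment factor) would be:

library(agricolae)                            # provides duncan.test()

fit <- aov(abundance ~ treatment, data = dat) # one-way ANOVA per parameter
summary(fit)                                  # overall treatment effect
duncan.test(fit, "treatment", alpha = 0.05)$groups   # letter groupings at P < 0.05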
Fertilization affects the microbial abundance and taxonomic composition in different soil depths of jackfruit

The abundances of bacteria and fungi were significantly higher in the OM treatment than in CF and CK (Supplementary Tables 3, 4). Richness (Chao1), Shannon and phylogenetic diversity indices confirmed the impact of fertilization on the alpha diversity of both bacterial and fungal communities (Supplementary Figure 1). Bacterial alpha diversity was much lower in all soil depths of the OM treatment compared with the CK and CF treatments (P < 0.05, Supplementary Figure 1A). The alpha diversity in deep soil (40-60 cm) treated with OM was significantly lower than that in surface soil (0-20 cm). In addition, the Chao1 value and phylogenetic diversity indices were much lower in the CK and OM treatments (Supplementary Figure 1B), and the Shannon diversity index was significantly lower in CK. Sampling depth in the different treatments had a prominent effect on the alpha diversity of fungal communities, which showed higher values in the surface soil.

Notes to Table 2: Values in a column followed by the same letter in each soil depth are not significantly different at P < 0.05. Standard errors are in parentheses. The abundance of total nematodes is expressed as individuals per 100 g of dry soil. CK, no fertilization; CF, chemical fertilization; OM, organic manure.

A total of 1,029,274 high-quality bacterial reads and 1,330,571 fungal reads were obtained from 27 soil samples in 2019, and 1,969,457 high-quality bacterial reads and 1,955,677 fungal reads were obtained in 2020. After removing the low-quality and plant-derived reads, the remaining reads were clustered into 10,507 bacterial and 3,670 fungal OTUs in 2019, and 11,045 bacterial and 3,091 fungal OTUs in 2020, respectively. Based on the OTU classification results, Actinobacteria, Proteobacteria, Chloroflexi, and Nitrospirae were the dominant bacterial phyla across treatments and soil depths, accounting for 70.0-83.9% of the total sequences. Actinobacteria and Proteobacteria dominated the bacterial communities of OM soil in the first-year experiment (Figure 1A). Moreover, Proteobacteria was more prevalent in CF soil in the second-year experiment (Figure 1B). Compared with CF, Chloroflexi was significantly enriched in the CK and OM treatments. Nitrospirae was significantly abundant in CF soil, followed by CK, in the three soil depths. Ascomycota, Basidiomycota, and Zygomycota were the dominant fungal phyla across treatments and soil depths, accounting for 74.4-95.8% of the total reads (Figures 1C,D). Ascomycota was significantly enriched in the CK soil. Basidiomycota dominated the OM soil fungal communities in the first-year experiment (Figure 1C) and was more prevalent in CF and OM soils in the second-year experiment (P < 0.05, Figure 1D). Zygomycota was significantly abundant in the surface soil (0-20 cm) of OM in both years. The variation of phyla in each treatment across soil depths showed similar trends.

PCoA revealed significant differences in the composition of microbial and nematode communities among treatments amended with different fertilizers (Figure 2). PERMANOVA of the microbial communities was in agreement with the PCoAs in that fertilization had a significant impact on microbial and nematode communities in both years. In addition, the difference in microbial community composition explained by fertilization was greater than that in the nematode community (bacteria: R² = 0.66/0.62, P = 0.001; fungi: R² = 0.89/0.78, P = 0.001; nematodes: R² = 0.29/0.30, P = 0.001). In general, the effect of soil depth on microbial and nematode communities was not as significant as that of fertilization (Supplementary Table 5). Therefore, the later analyses focused on the effect of fertilization on the microbial and nematode communities as well as SOC components.
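A compact, hedged R sketch of the ordination and PERMANOVA workflow behind results of this kind, assuming a hypothetical sample-by-OTU table 'otu' and a metadata frame 'meta' with treatment and depth columns; adonis2() is the current vegan interface to the Adonis function cited in the Methods:

library(vegan)

bray <- vegdist(otu, method = "bray")         # Bray-Curtis dissimilarity
ord  <- cmdscale(bray, k = 2, eig = TRUE)     # principal coordinate analysis
plot(ord$points, col = factor(meta$treatment),
     xlab = "PCoA1", ylab = "PCoA2")

adonis2(bray ~ treatment + depth, data = meta,
        permutations = 999)                   # PERMANOVA R2 and P per factor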
Jackfruit yield, soil physicochemical properties, and their correlation analyses with microbial and nematode communities

Compared with CF and CK, the OM treatment resulted in a significant (>13%) increase in jackfruit yield in two consecutive years (Supplementary Tables 3, 4). Soil pH, total nitrogen, phosphorus and potassium concentrations, and SOC components, especially MBC, were generally higher in the OM treatment in all soil depths in both field experiments (Supplementary Tables 3, 4). Chemical fertilizers significantly affected AN, AP, and AK concentrations.

Network analysis was used to determine the co-occurrence patterns of the microbiome, nematodes, and SOC components based on strong and significant correlations, although the calculated modularity indices were low (Figure 3 and Supplementary Table 6). Overall, the different fertilizer treatments showed a remarkable effect on the association networks of nematodes and the microbiome. The values of average path length (APL) and average clustering coefficient (avgCC) in these empirical networks were higher than those of their respective identically sized Erdös-Rényi random networks (Supplementary Table 6). Furthermore, the number of edges, average connectivity (avgK), avgCC and the percentage of positive links (P%) between bacteria and fungi were greatest in the OM network, whereas APL was smallest in the CF network. The P% of bacteria-nematode links was generally higher in the fertilizer treatments, while SOC components showed more positive co-occurrence relationships with the microbiota in CK soil. Strikingly, more positive co-occurrence relationships (9.68%) between plant-parasites and other members were found in the soil treated with CF. In addition, the soil of the CK treatment showed more negative co-occurrence relationships between the plant-parasite Meloidogyne and others (SOC, DOC, and the fungal phylum Zygomycota, all of which were generally lower in CK soil), and more positive co-occurrence relationships between the omnivore-predator Prionchulus and SOC components (SOC and DOC). The dominant plant-parasite Tylenchorhynchus showed a significantly negative co-occurrence relationship with DOC and a positive co-occurrence relationship with the bacterial phylum Nitrospirae, the preponderant phylum in the CF treatment. There were more positive co-occurrence relationships between SOC components and microbial communities in OM soils. The number of links between microbial taxa and microbivorous nematodes in each treatment was OM (29) > CF (24) > CK (4).

FIGURE 3: Interaction networks among bacterial and fungal phyla, nematode trophic groups, and SOC components present in the different fertilization treatments: CK (A), CF (B), and OM (C). A connection stands for a strong (Spearman's rho > 0.6) and significant (P < 0.01) correlation. For each panel, the node size is proportional to the number of node connections across all samples, and the thickness of each connection between two nodes (that is, edge) is proportional to the value of the Spearman's correlation coefficient. Lines connecting nodes (edges) represent positive (red) or negative (blue) co-occurrence relationships. CK, no fertilization; CF, chemical fertilization; OM, organic manure.

As mentioned above, the correlation analysis displays the possible relationships of the microbial and nematode communities with SOC components. However, it cannot uncover direct (or indirect) causal relationships. Partial least-squares path modeling was therefore employed to quantify the specific causality from a holistic view.
As can be seen, the model explained 79.4% of the variation in jackfruit yield (R² = 0.794, Figure 4). Fertilization had significant effects on jackfruit yield, composed of a positive direct effect (73.6%) and indirect effects (fertilization → fungal community → yield). Likewise, fertilization-induced changes in soil microbial and nematode communities imposed indirect effects on SOC components and jackfruit yield, although these effects were not significant.

FIGURE 4: Partial least squares path model (PLS-PM) displaying the direct and indirect effects of long-term fertilization, SOC components (SOC, POC, DOC, and MBC), and microbial and nematode community compositions on the jackfruit yield (Goodness-of-Fit = 0.484). Microbial and nematode compositions are used as latent variables, reflecting the relative abundance of each phylum and trophic group, respectively. Coefficient of determination (R²) values denote the proportion of variance explained for each variable. Arrow thickness is scaled proportionally to the standardized path coefficients (numbers on arrows). Solid blue and red arrows indicate positive and negative relationships, respectively.

Functional prediction analysis

FAPROTAX analysis was performed to determine the predicted functions of bacterial communities in the different fertilizer treatments in jackfruit orchards. Six ecological function groups related to C cycling, including aerobic chemoheterotrophy, chemoheterotrophy, cellulolysis, phototrophy, photoheterotrophy, and aromatic compound degradation, accounted for an average of 32% of the total abundance in the predictive functional analysis (Figures 5A,B). The proportions of chemoheterotrophy, aerobic chemoheterotrophy and cellulolysis in the OM soils were significantly higher than those observed in the other treatments in the first year, while this trend abated in the second year. Six ecological function groups related to N cycling were classified as aerobic ammonia oxidation, nitrification, aerobic nitrite oxidation, nitrate reduction, nitrogen fixation, and nitrogen respiration, accounting for an average of 41% of the total abundance in the predictive functional analysis in our study. The relative abundances of nitrification, aerobic nitrite oxidation and aerobic ammonia oxidation identified in the CF soils were significantly higher than those observed in the other treatments.

FUNGuild was used to predict the nutritional and functional groups of fungi, and the results showed that symbiotroph, saprotroph-symbiotroph, saprotroph, pathotroph-symbiotroph, pathotroph-saprotroph-symbiotroph, pathotroph-saprotroph, and pathotroph were the major components (Figures 5C,D). The proportions of saprotroph-symbiotroph in the OM soils were significantly higher than those observed in the other treatments in the first-year experiment. On the contrary, the relative abundances of pathotroph-saprotroph and pathotroph-saprotroph-symbiotroph at a soil depth of 0-40 cm of the CF treatment were significantly higher compared with the OM treatment.

Discussion

Effects of fertilization on soil nematode community

The treatments of CF and OM are representative of soil management systems commonly used in jackfruit orchards. The carbon and energy inputs to the soil food web can generally be delivered along the trophic levels and affect the abundances of total nematodes and trophic groups present at different trophic levels (Chen et al., 2021).
In this study, OM had significantly positive effects on the abundances of total nematodes and of all trophic groups except plant-parasites (Table 1). This is consistent with the meta-analysis of Liu et al. (2016), which integrated 54 relevant studies worldwide and showed that organic amendment input improved soil nematode abundance by 37-50%. In addition, manure is more labile for microbial decomposition, providing more energy and carbon to the nematode assemblage, and more nutrients were released after manure decomposition (Elzobair et al., 2016; Liu et al., 2020). This explains the higher percentage of bacterivores found in the manure amendment. Carrillo et al. (2011) reported a similar finding, that microbivorous nematodes positively affected microbial activity during decomposition. As the response of soil nematodes is linked to soil microbial biomass, soil MBC was found to be highly correlated with nematode beta diversity and community composition in this study (Figure 3). As the main decomposers in the soil, microbes first metabolize organic matter and then transfer energy and carbon to higher trophic groups, including nematodes. Therefore, soil MBC content and microbivores increased in the OM treatment compared with the CK and CF treatments. Organic manure amendments strongly stimulated the basal functional guilds of the nematode community, as indicated by the high populations of c-p 1 (Mesorhabditis) and c-p 2 (Cephalobus, Geomonhystera, and Acrobeloides) bacterivores. The current study showed that the predominant trophic group under the CK and CF treatments was plant-parasites, particularly Pratylenchus, Rotylenchulus, and Meloidogyne, which occupied more than 33.6 and 41.4% of the total nematode abundance in the 2-year field experiments, respectively. Liu et al. (2013) also reported that the sole application of mineral fertilizer decreased the physiological resistance of the crop, and the weakened roots would have been easily attacked by pests such as plant-parasitic nematodes. In contrast, previous studies have reported that manure application provides carbon for soil organisms and improves plant resistance, thereby increasing soil bacterial abundance and leading to a transition in the predominant trophic group from plant-parasites to bacterivores (Wu et al., 2016). The higher abundance of nematodes occurred at the 0-20 cm depth, where most roots are distributed and the soil has better aerobic conditions, creating a better environment for nematode survival in this layer (Van Nguyen et al., 2020). The nematode community structure, the decomposition environment, and the dynamics of the soil food web can be evaluated with community indices. In the present study, we observed that OM increased the MI value in soils at 0-40 cm depth, indicating that manure amendment drives the soil food web toward a relatively stable environment for crop productivity (Table 1). The lower H′ in the organic manure treatment indicates that the nematode community was less even, with some genera dominating the community. The higher NCR values in the fertilizer treatments indicate that bacteria dominated the organic matter decomposition pathway.

Effects of fertilization on soil microbial community and functional groups

In the present study, the microbial communities in soils that received long-term manure amendment presented significantly higher microbial abundance than those receiving CF in both layers, which may have been due to the higher soil pH.
A previous study also showed that increased pH may enhance the spore germination, colonization, and reproduction rates of microbes and consequently increase microbial biomass (Chen et al., 2019). Fertilization with OM typically alters the soil microflora, including its richness, diversity, and community composition (Chen et al., 2019). Our results also supported this finding, in that both soil microbial α-diversity and community composition were significantly changed by the OM treatment. Soil bacterial richness and phylogenetic diversity initially decreased in the OM treatment in the first-year experiment, but rebounded to greater values in the second-year experiment. These findings were probably related to the higher relative abundances of the major microbial groups (Acidobacteria and Proteobacteria) found in the soil (St-Pierre and Wright, 2014). Changes in soil fungal α-diversity may have been similar. The bacterial phylum Proteobacteria (Gamma- or Beta-), which is considered a copiotrophic group related to C availability and labile substrate supply, was one of the predominant taxa in the manure amendment in both layers, which may have been due to the nutrient-rich soil (Liu et al., 2019). Most members of Acidobacteria and Chloroflexi have been identified as oligotrophic groups (Fierer et al., 2007), and our study showed higher relative abundances of the Acidobacteria and Chloroflexi phyla in OM and CK soil than in the CF soil. This result is inconsistent with many previous studies and might relate to the diverse nutritional profile of Chloroflexi, which can change depending on environmental conditions. The higher relative abundance of Nitrospira in CF than in the other treatments indicates that Nitrospira, and potentially nitrification, was of greater importance in the CF soil, which might be driven by the greater AN release from CF (Supplementary Tables 3, 4). In addition, some Nitrospirae strains can be dominant nitrite oxidizers or comammox organisms, which convert urea to ammonia and CO2 and may contribute to nitrogen cycling beyond nitrite oxidation (Wang et al., 2019). In our study, compared with the other treatments, the functional groups of the bacterial community related to C cycling (e.g., chemoheterotrophy, aerobic chemoheterotrophy, and cellulolysis) were higher in OM soil, which might be due to the addition of OM. Variation in plant root exudates can directly provide a large number of carbon sources and promote the assimilation and utilization of carbon by microorganisms, thus promoting the increase of chemoheterotrophic microorganisms (Liang et al., 2020). Different vegetation types could also cause changes in bacterial community function in the soil. Conversely, nitrification, aerobic nitrite oxidation, and aerobic ammonia oxidation, all related to the nitrogen cycle, increased in the CF soils. The reason for this difference might be the increased nitrogen content under CF, which stimulated the growth and reproduction of nitrifying microorganisms, significantly increased microbial activity, and altered N cycling (Liang et al., 2020). In the current study, Ascomycota, Basidiomycota, and Zygomycota were the dominant phyla of the fungal community (Figures 1C,D). In alignment with a previous study by Ji et al.
(2020), Ascomycota and Zygomycota exhibited different growth strategies under organic fertilizer application, and the Zygomycota saprotrophs were more sensitive to C sources than the Ascomycota saprotrophs, which consequently resulted in different relative abundances of Ascomycota and Zygomycota in the manure amendment. Moreover, Štursová et al. (2012) found that Zygomycota was an important decomposer for controlling the cycling rate of nutrients and promoting the decomposition of organic compound matrices in agricultural ecosystems. Seven fungal functional groups (i.e., symbiotroph, saprotroph-symbiotroph, saprotroph, pathotroph-symbiotroph, pathotroph-saprotroph-symbiotroph, pathotroph-saprotroph, and pathotroph) were identified according to FUNGuild (Guo et al., 2020). Our results found that saprotroph-symbiotroph fungi were the dominant functional fungi, accounting for more than 50% of the whole community under the OM treatment in the first-year experiment. In contrast, pathotroph-saprotroph-symbiotroph and pathotroph-saprotroph fungi predominated in the CF treatment (Figures 5C,D), which indicated a risk of plant disease (Ji et al., 2020).

Effects of fertilization on β-diversity of soil microbes and nematodes

As for the β-diversity of microbes and nematodes, irrespective of the soil layers, all soil samples clustered into three groups according to fertilizer treatment (Figure 2), which suggested that differential fertilization was the dominant factor shaping the soil microbial and microfaunal communities of jackfruit. Variations in β-diversity can be attributed to fertilizer, since shifts in soil microbial communities generally correlate with changes in soil nutrient availability (Zhang et al., 2018). In addition, the variation of the nematode fauna was closely related to the microbial community structure.

Effects of fertilization on jackfruit yield and soil physicochemical properties

The extra resource input, either chemical or organic fertilizer, enhanced jackfruit yield compared with the no-input control during the 2-year field experiments. Average jackfruit yield increased by 15% in the sole CF treatment and by 32% in the manure amendment relative to the control (Supplementary Tables 3, 4). This result reiterates the necessity of fertilizer additions for increasing crop yields. Organic manure could enhance jackfruit yield not only through the continuous supply of reserve nutrients but also as a result of better soil conditions for crop growth, such as soil aeration, porosity, soil pH, and microflora (Cai et al., 2019). In our study, soil properties also indicated that fertilization inputs influenced nutrient availability (Supplementary Tables 3, 4). Soil pH in both layers was lower in the CF treatment, which might be due to the H+ ions produced during the nitrification of NH4+ (Luo et al., 2015), whereas the addition of manure prevented soil acidification owing to the alkalinity of manure (Rukshana et al., 2014). Organic manure significantly increased soil total nutrient contents and organic carbon components, especially MBC, compared with chemical fertilization alone. This is supported by Tian et al. (2015), who reported in a meta-analysis that continuous manure application increased SOC content and sequestration rates by increasing crop yield and organic matter return from stubble and roots. In addition, the rate of soil mineralization usually remains stable over a short time, so the increasing trend of SOC content in soils could be explained by the C input from organic amendments.
The increased soil MBC in the OM treatment may be due to the additional C sources, which are beneficial for the growth of the soil indigenous microbiota as well as for an increase in soil fertility. In the subsoil layers (40-60 cm), the SOC contents in all treatments were lower than in the top layers. This decrease could be explained by the concentration of roots and microorganisms in the Ap horizon (near the soil surface) (Shahbaz et al., 2017). Labile C (e.g., DOC, MBC, and POC) is sensitive to fertilization management and a good indicator for studying SOC changes on a short-term basis. In the present study, the application of OM had a positive effect on LOC in both layers. LOC in subsurface soil was much lower than in surface soil, which might be attributed to an increase in the recalcitrant fraction of C in subsurface soil (Ghosh et al., 2012).

Correlations between microbial and nematode communities and organic carbon components

Network analyses and PLS-PM also indicated that fertilization altered the soil microbial and nematode communities and thereby indirectly influenced SOC components. Significant co-occurrence relationships between soil microflora and organic carbon components were observed in the present study. The soil amended with OM showed a higher number of positive co-occurrence relationships than the other treatments, which may be linked to a higher community function (Coyte et al., 2015). The soils amended with CF, or left unfertilized, contained more strong co-occurrence relationships between plant-parasites and other taxa, suggesting that the CF and CK treatments may increase the abundance of plant-parasites and of the microorganisms associated with them. By contrast, OM may be associated with a decreased ratio of positive links of plant-parasites. Previous studies have shown that the feeding interrelationships among the soil biota have a strong influence on the flow of resources and energy within the soil food web (Lenoir et al., 2007). The greater number of co-occurrence relationships in OM soils between microbivorous nematodes and microbial taxa positively associated with SOC components is supported by other studies, which reported that predation by microbivorous nematodes changes the microbial communities responsible for the breakdown of organic matter and is thereby linked to soil organic matter decomposition (Freckman, 1988; Jiang et al., 2018). In agroecosystems, nutrient release and dissolved C during the long-term decomposition of fertilizer directly affect soil microbial community composition (Diacono and Montemurro, 2010). Variation in these soil properties (e.g., soil C and pH) resulting from fertilization exerts a significant influence on microbial growth (Hartmann et al., 2015; Sun et al., 2016; Zeng et al., 2016; Zhang et al., 2017). In the current study, a greater addition of OM increased SOC components and pH; these changes increased the niche width and niche differentiation (Dumbrell et al., 2010), and these factors may be important for the diversity and beneficial coexistence of species in the soil habitat. An oligotroph-copiotroph strategy shift of soil bacteria with changes in soil nutrient availability was observed by Fierer et al. (2012), who reported that low nutrient levels caused an increase in slow-growing oligotrophic microorganisms while high nutrient levels promoted copiotrophic organisms.
Differential soil properties affected by fertilizer amendments might exert a direct or indirect influence on the nematode fauna via plant growth or microbial activity (Bulluck and Ristaino, 2002; Buchan et al., 2013). These results were most likely due to the increased C, N, P, and K contents and the higher pH of the soil under OM, and these factors may be important drivers of soil microfauna and crop yield.

Conclusion

An assessment was carried out of the impacts of fertilization amendments on selected soil physicochemical properties and on microbial and nematode communities at three soil depths (0-20, 20-40, and 40-60 cm) over an 8-year period in a mono-cultured jackfruit plantation. In general, OM increased the values of NCR and MI and the abundances of total nematodes and bacterivores, but decreased plant-parasites and H′. The microbial β-diversity and taxonomic composition showed a distinct response to the applied treatments, especially at the phylum level. Higher relative abundances of Proteobacteria, Acidobacteria, and Chloroflexi were observed in the OM treatment, while Nitrospira predominated in the CF treatment. Furthermore, OM enhanced the soil N, P, K, and C contents and raised soil pH, and the variation in these soil properties was an important driver of the soil microbial and nematode communities, functional groups, and crop yield. Our results indicated that the functional groups of the bacterial community related to the C cycle, along with saprotroph-symbiotroph fungi, were higher in OM soil, while some groups related to the nitrogen cycle, and the pathotroph-saprotroph-symbiotroph and pathotroph-saprotroph fungi, predominated in the CF treatment. This research may be beneficial in improving the understanding of the relationships between fertilization amendment, soil quality, and soil microbial and nematode communities, which can contribute to the development of an effective nutrient management system toward sustainability. In future studies, soil microfauna and functional groups should be complemented with the responses of crop roots to enhance our understanding of the mechanisms by which manure affects soil quality for crop production.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI-PRJNA836735; the release date was June 5, 2024.
Study on chemical corrosion properties of titanium alloy in 2A14 aluminum melt

Titanium alloy radiation rods have excellent physical and chemical properties compared to other materials and are commonly used for ultrasonic casting of 2A14 aluminum alloy. However, titanium alloys are chemically corroded in high-temperature aluminum melts over long periods, making it difficult to precisely regulate the elemental composition during casting. In order to better understand the high-temperature chemical corrosion mechanism of titanium alloy radiation rods, this research examines the corrosion morphology, weight loss, surface roughness, and reaction layer. The study's findings suggest that the rate of chemical corrosion of titanium alloy in high-temperature aluminum melt generally increases with the degree of roughness, although not strictly monotonically, and that the roughness itself changes nonlinearly during the corrosion process. The weight loss rates of titanium alloys with roughness Ra0.4 μm, Ra7.2 μm, Ra9.5 μm, and Ra9.8 μm are 0.16 mg per min, 0.25 mg per min, 0.37 mg per min, and 0.29 mg per min, respectively. The corrosion product of the chemical corrosion process is TiAl3, which is granular. Under varying roughness conditions, reactants emerge at the Al/Ti solid-liquid interface after 4 min, and the TiAl3 reaction layer arises after 12 min. Furthermore, the reaction layer formed at small roughness is flat and compact, whereas the reaction layer formed at large roughness is loose and contains many defects. At the same time, the growth rate of the reaction layer decreases slightly over time, and the greater the surface roughness, the faster the TiAl3 reaction layer grows on the titanium alloy matrix.

Introduction

During the ultrasonic casting process of alloys [1-4], the radiation rod is directly immersed in the melt. It is impacted by ultrasonic cavitation [5-8] as well as by the high-temperature aluminum melt, which causes corrosion and damage to the end face and sides of the radiation rod. In the high-temperature aluminum melt, the metal atoms in the radiation rod become active. A portion of the metal atoms react with the aluminum liquid to generate compounds that enter the aluminum melt, while the remaining metal atoms enter the aluminum melt directly. This leads to impurity doping in the aluminum melt, which affects its performance. The study of high-temperature melt corrosion mechanisms has received increasing attention in order to avoid the short lifespan and high cost of metal components caused by high-temperature corrosion damage. Corrosion in high-temperature melts is influenced by various factors such as melting, dissolution, and chemical reactions. The corrosion pattern changes with variations in the temperature of the metal melt, the types of solid/liquid metals, and the external environment [9-11].
Researchers have investigated the application of ultrasonic radiation rods [12-16] over the past few years. Shi et al [17] prepared a 2A14 aluminum alloy using a ceramic-steel structure radiation rod. The grain refinement effect at the edge of the ingot was the best, with smaller secondary phases and more solute elements able to dissolve, and the ability to suppress segregation at the edge of the ingot was stronger. Liang et al [18] employed ultrasonic processing for the casting of 35CrMo steel and examined the impact of radiation rod length on the performance of the liquid steel; when the length was 135 mm, the transmission of ultrasound proved to be the most efficacious. They also investigated the radiator material and found that silicon nitride ceramic radiators can effectively avoid corrosion in molten metal with high stability. Kong et al [19] prepared low-carbon steel employing titanium alloy radiation rods and found that, with ultrasonic treatment, the pearlite was broken, reducing its average length from 550 to 140 μm; the corresponding aspect ratio decreased from 12 to 1. Zhang et al [20] successfully produced an Al-Zr-Ti alloy by employing the ultrasonic casting technique; grain refinement occurs when the undercooling is high enough to activate the primary intermetallic compounds or dispersed inoculants. Dai et al [21] analyzed the thermodynamics and kinetics of the high-temperature oxidation of titanium alloys, and also conducted prospective research on upcoming trends in the high-temperature oxidation modification of titanium alloys. These studies are summarized in table 1 [17-19, 22].

For the use of titanium alloy radiation rods in the casting process, existing research mostly focuses on the corrosion of titanium alloy radiation rods caused by ultrasonic cavitation [23, 24]. Dong [25] studied the high-temperature oxidation performance of Ti-6Al-4V alloy in air. Very few researchers have studied how well ultrasonic radiation rods resist chemical corrosion at high temperatures. As a result, this article conducts a thorough investigation into the corrosion of titanium alloy radiation rods in high-temperature aluminum alloy melts. The titanium alloy (Ti-6Al-4V) used for ultrasonic casting is grade TC4 [23], which belongs to the (α+β) titanium alloy family; table 2 shows its composition. Several sets of titanium alloy samples were taken from ultrasonic radiation rods, processed to different roughness values, and then soaked in high-temperature aluminum melt. The surface morphology and profile, the weight loss and weight loss rate, the roughness, and the growth of the interface reaction layer were investigated. The solid-liquid interface reaction layer was identified using EDS and XRD. The generation time of the corrosion products was additionally examined to reveal the corrosion behavior of titanium alloy radiation rods with various roughness values in aluminum melt.

Sample preparation

The titanium alloy radiation rod was cut into sample blocks of 25 mm × 25 mm × 10 mm. These samples were then processed to obtain different original surface roughnesses. The samples with different roughness are called A, B, C, and D.
Sample A was finely machined on a lathe, while B, C, and D were all rough machined. This created a clear metallic luster on the original surface of each sample. In the experiments, samples with a certain degree of original roughness were used to compare the corrosion behavior of rough and smooth surfaces. To achieve a consistent roughness on each sample, the sample surface was polished with 2000# sandpaper; the samples were then washed with alcohol and acetone and finally dried with warm air. A graphite crucible containing 5.0 kg of 2A14 aluminum alloy ingots was placed in a resistance furnace at 800 °C for melting. Once the aluminum alloy was completely melted, it was stirred to remove the oxide layer from the surface of the aluminum melt. The resistance furnace settings were then adjusted to lower the melt temperature to 700 °C for insulation [26, 27]. Figure 1 shows that the titanium alloy sample block was placed at the bottom of the crucible, with a sample removed every 12 min; this setup was mainly used to study the differences in corrosion weight loss, roughness changes, and reaction layer growth of titanium alloys with different roughness levels in an aluminum melt environment at 700 °C. Each sample was cleaned using physical and chemical methods after the corrosion experiment was complete, and then weighed. Titanium alloy is immune to chemical reactions with hydrochloric acid and sodium hydroxide, whereas Al reacts with both acids and bases. Hence, the samples were thoroughly cleansed with hydrochloric acid and sodium hydroxide, respectively, followed by ultrasonic cleaning. The samples were then dried in warm air and weighed using an electronic scale with a precision of 0.1 micrograms. The weight of each sample was determined by averaging three measurements, and a weight loss rate graph was constructed from these data.

Another set of samples was processed according to the above experimental procedure, without cleaning the aluminum alloy attached to the surface of the titanium alloy, in order to obtain the profile reaction layer of the titanium alloy samples at different stages of the reaction. A wire cutting machine was used to cut the sample profile, which was then ground and polished. Chemical corrosion experiments were conducted in aluminum melt for durations of 1 min, 2 min, 3 min, ..., and 11 min, respectively, with the aim of determining the precise generation time of the Al/Ti interface reactants. The sample preparation and testing were identical to the above.

Microstructural characterization

A super depth-of-field microscope (VHX5000) was used to examine the sample surfaces. The original surface roughness value of each sample was measured with an optical surface profiler (Wyko NT9100). High-resolution scanning electron microscopy (SEM: MIRA3 TESCAN) revealed the morphology of the corroded sample surfaces and of the sectioned reaction layer. Energy-dispersive x-ray spectroscopy (EDS: Oxford X-Max20) and x-ray diffraction (XRD) in a Rigaku 600 x-ray diffractometer were used to perform qualitative analysis and identify the reaction layer components. The XRD was operated at a scanning speed of 0.02° s−1 at 40 kV, using Cu Kα radiation (wavelength λKα = 1.54056 Å).
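The roughness values quoted throughout (Ra) denote the arithmetic mean deviation of the surface profile, as reported by the optical profiler. As a small illustration of that metric (the profile below is synthetic, not Wyko NT9100 data):

```python
# Hedged sketch: arithmetic mean roughness Ra = mean(|z - z_mean|)
# evaluated on a synthetic surface profile; illustrative only.
import numpy as np

x = np.linspace(0.0, 4.0, 2000)  # sampling length, mm
z = 8.0 * np.sin(40.0 * x) + np.random.default_rng(1).normal(0, 1.0, x.size)  # um

ra = np.mean(np.abs(z - z.mean()))       # arithmetic mean deviation
peak_to_valley = z.max() - z.min()       # maximum profile height (Ry-like)
print(f"Ra = {ra:.2f} um, peak-to-valley = {peak_to_valley:.1f} um")
```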
Corrosion micromorphology

Figure 2 shows the microstructure of sample A before and after corrosion. Figure 2(a) shows that the surface of sample A displays obvious machining marks, distinct contour lines, and a smooth surface without apparent pits. Figure 2(b) shows that the deformation of the material surface is very small because the experiment only lasted a short time. Nevertheless, a certain amount of corrosion has occurred and the roughness has increased. Pits of varying sizes are distributed on the corroded surface, especially at the protrusions on the original surface of the material, causing the originally sharp machining marks to become blurred by corrosion.

Corrosion weight loss rate

The weight-loss method was used to characterize the degree of corrosion of the titanium alloy materials in the high-temperature aluminum melt, considering the convenience of the experimental equipment and the accuracy of the experimental results. The initial weight, post-experiment weight, and total weight loss of each group of samples with different roughness are presented in table 3. The weight loss rate of the titanium alloy samples in the 700 °C aluminum melt over time is summarized and plotted in figure 3, to visualize the magnitude of the weight loss rate of the material in the different experimental periods. The weight loss rate was determined as

v_t = (m_0 - m_t) / t,

where v_t, m_t, and t respectively represent the weight loss rate, the sample weight at that time, and the experimental duration, and m_0 is the initial sample weight.

Figure 3 shows that the smoother the surface, the less weight the block sample loses, and vice versa. In all four groups of samples, the weight loss rate remained relatively stable under high-temperature aluminum melt corrosion. The average weight loss rates for samples A, B, C, and D are 0.16 mg per min, 0.25 mg per min, 0.37 mg per min, and 0.29 mg per min, respectively. Overall, the rougher the surface of the sample, the more weight it loses, although the Ra9.8 sample loses slightly less weight than the Ra9.5 sample.

Figures 4(a) and (b) show the surface morphology of the Ra9.5 and Ra9.8 samples, respectively. Although the Ra9.5 sample has the lower Ra value, its maximum surface protrusion exceeds 40 μm, whereas the height of the convex bodies on the surface of the Ra9.8 sample does not exceed 25 μm. Figures 4(c) and (d) depict the morphology curves of the two samples over their respective sampling lengths. The largest pit found on the Ra9.5 sample has a depth of 44.7 μm and a width of 272.8 μm, approximately twice those of the Ra9.8 sample. The greater pit width results in a greater contact area between the pit and the aluminum melt. The aluminum melt rapidly spreads on the surface of the titanium alloy, which leads to higher rates of diffusion, chemical reaction, and recombination of Ti atoms in the aluminum melt. By contrast, a smooth sample has lower surface roughness and fewer pits on the surface, and hence a smaller contact area with the aluminum melt; the diffusion of exposed Ti atoms into the aluminum melt is hindered, resulting in a decrease in the chemical reaction rate and in the weight loss rate. It is important to note that the weight loss rate is also influenced by Rz (micro roughness cross height) and Ry (maximum contour height), but these two parameters are not the main research focus of this article.
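A small illustration of this bookkeeping, per the formula above; the masses below are hypothetical placeholders, chosen only so that the computed rates reproduce the reported averages:

```python
# Hedged sketch: average weight-loss rate v_t = (m0 - m_t) / t for each
# sample. Masses (mg) and times (min) are illustrative placeholders,
# not the values measured in this study.
samples = {
    # name: (initial mass m0 [mg], final mass m_t [mg], duration t [min])
    "A (Ra0.4)": (52000.0, 51992.3, 48),
    "B (Ra7.2)": (52000.0, 51988.0, 48),
    "C (Ra9.5)": (52000.0, 51982.2, 48),
    "D (Ra9.8)": (52000.0, 51986.1, 48),
}

for name, (m0, mt, t) in samples.items():
    v = (m0 - mt) / t  # mg per min
    print(f"{name}: v_t = {v:.2f} mg/min")
```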
Surface roughness

To further investigate the mechanism by which the original surface roughness influences the high-temperature corrosion of titanium alloys, curves of the variation of the different original surface roughnesses over time were plotted from the data measured during the experiments, as shown in figure 5. The samples with initial surface roughnesses of Ra0.4 μm, Ra7.2 μm, Ra9.5 μm, and Ra9.8 μm increased to Ra0.7 μm, Ra7.7 μm, Ra10.1 μm, and Ra10.2 μm, respectively, following a 48 min soak in the high-temperature aluminum melt. From these results it can be seen that the roughness of each sample changed little, increasing only slightly. However, the roughness increase was greater for the samples with higher roughness, and its variation generally follows the same law as the weight loss described above.

Surface roughness reflects the height differences of the surface profile: the rougher the surface, the larger the height differences. On a surface with larger original roughness, the diffusion rate of Ti atoms from the matrix into the Al melt varies more strongly from place to place, because the difference in height between convex and concave features is larger. As the reaction time increases, the contours of the edges of the protrusions and the slope of the surface become more pronounced, which further increases the height difference of the original surface profile.

Figures 6(a) and (b) show the surface contour curves of sample A (Ra = 0.4) at experiment times of 0 min and 48 min, respectively. As shown in the figure, the original surface of the material had small contour fluctuations, and only local concave-convex features were sharp. After 48 min, however, the fluctuation of the material surface had intensified; in particular, the original concave-convex features stood out more clearly on the matrix and appeared sharper. The maximum contour height difference of a concave-convex feature increased from 4 μm to 6 μm.

Microstructure

Figure 7(a) shows the profile microstructure of the corrosion layer of sample A after soaking for 48 min in the 700 °C aluminum melt. The figure shows that the titanium alloy is corroded by the aluminum melt to form a layered material. EDS was used to analyze the components of the layered material, and the energy spectrum analysis showed that the atomic ratio of Al to Ti at point 1 was approximately 3:1. EDS is an elemental analysis method and cannot identify precise compounds, but by comparison with the Al-Ti binary phase diagram and in combination with reference [25], the layered material could be confirmed to be TiAl3.

Figure 8 shows SEM images of the reaction layers of the four groups of samples with different roughness, soaked in the melt for different times. Titanium alloy samples of any roughness react with Al, and the thickness of the reaction layer increases with experiment time. Reactants began to form at the Al/Ti solid-liquid interface at 4 min. After soaking for 12 min, a reaction layer about 1 μm thick appeared at the solid-liquid interface; after 48 min, the thickness of the reaction layer had increased to 5 μm. However, the reaction layers differ somewhat in density: the rougher the sample, the looser the reaction layer.
The morphology of the reaction layer at the solid-liquid interface exhibited slight variations among the groups. Figure 8(c) shows that at 48 min the thickness of the reaction layer formed at the Al/Ti interface of sample A is uniform and without obvious defects, and the interface between the matrix and the reaction layer is smooth. Figures 8(f), (i), and (l) show that the thickness of the reaction layer at the interface is inconsistent and that there are many cracks. Granular TiAl3 is distributed near the reaction layer and floats away from the interface. This happens because sample A is smooth and Ti atoms dissolve from the surface at the same rate everywhere, so the TiAl3 reaction layer grows at similar rates in different places and a dense reaction layer is formed. Samples B, C, and D, by contrast, have relatively large roughness, and the chemical reaction rate varies locally, resulting in a loose texture of the TiAl3 reaction layer. On the rougher surfaces, Ti atoms can diffuse further from the matrix into the aluminum melt, so that TiAl3 can separate from the reaction layer. At the same time, this also explains the change in the weight loss rate of the titanium alloy samples in section 3.2, which is caused by the change in the rate at which reaction-layer particles diffuse into the aluminum melt.

Figure 9 shows the growth curves of the interfacial reaction layer over time for samples with different roughness soaked in the aluminum melt. A complete reaction layer did not appear until 4 min for each sample, so the thickness of the interface reaction layer was set to 0 at this time. The thickness of the reaction layer at the solid-liquid interface of samples A, B, C, and D increased from 0 to 3.2, 4.0, 4.7, and 4.7 μm over the experiment period from 4 to 48 min. It is also found that the growth rate of the reaction layer decreases with time. After the titanium alloy is immersed in the aluminum melt, titanium dissolves and diffuses into the aluminum liquid, where it reacts chemically to form the TiAl3 compound. A dense reaction layer soon forms at the Al/Ti solid-liquid interface, as shown in figure 8(b). After a certain time, Ti in the matrix must cross the TiAl3 layer in order to enter the aluminum liquid and react with Al. As the TiAl3 layer thickens, it becomes harder for Ti to escape from the matrix, so the growth rate of the reaction layer slows down. However, because of the poor density of the reaction layers formed on samples B, C, and D, it is less difficult for Ti atoms to escape, which results in a higher formation rate; the formation rate is consistent with the roughness.

XRD analysis of reaction layer

To further confirm the phase of the reaction layer, the components of the reaction layer generated at 12 min were analyzed by XRD. The XRD examination implies that almost only α-Al and TiAl3 were present in sample A, as shown in figure 10. Some unknown phases occurred, but with very minor fractions. Combined with the EDS spectrum above, the reaction product can therefore be confirmed to be TiAl3.
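The slowing of the layer growth as the TiAl3 layer thickens is characteristic of diffusion-controlled kinetics, which is commonly described by a parabolic law d(t) = k*sqrt(t - t0). The sketch below fits such a law to approximate thickness values read from the text for sample A; the parabolic form and the intermediate data points are our illustrative assumptions, not an analysis performed in this paper:

```python
# Hedged sketch: fitting reaction-layer thickness to a parabolic
# (diffusion-controlled) growth law d(t) = k * sqrt(t - t0), with the
# incubation time t0 fixed at 4 min, since a complete layer only appears
# after about 4 min. Thickness data approximate the reported values for
# sample A; intermediate points are omitted.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([4.0, 12.0, 48.0])   # soak time, min
d = np.array([0.0, 1.0, 3.2])     # layer thickness, um (sample A)

def parabolic(t, k, t0=4.0):
    return k * np.sqrt(np.clip(t - t0, 0.0, None))

(k,), _ = curve_fit(parabolic, t, d, p0=[0.5])  # fits k only; t0 fixed
print(f"rate constant k = {k:.3f} um/min^0.5")
print("predicted d(24 min) =", round(parabolic(24.0, k), 2), "um")
```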
Discussion

At the beginning of contact between the Al liquid and the Ti solid, Ti atoms dissolve and diffuse into the aluminum melt. At the same time, some Al atoms move through the aluminum melt and into the Ti matrix. The diffusion is dominated by Ti atoms diffusing into the liquid aluminum, with a secondary contribution from Al atoms diffusing into the titanium matrix. Based on these diffusion characteristics, the concentration of Ti atoms decreases from the surface of the titanium alloy into the aluminum melt. Therefore, the rapidly saturated Ti atoms near the matrix can immediately react with Al atoms to form the compound TiAl3, which at the beginning is distributed on the matrix surface only as scattered particles. For titanium alloys with small surface roughness, the diffusion rate of Ti atoms remains uniform across the surface as the infiltration time increases, owing to the uniform height of the surface profile. As a result, a uniform and dense reaction layer gradually forms at the Al/Ti liquid-solid interface. The appearance of the reaction layer makes it harder for Ti atoms to escape into the melt, so the concentration near the wall decreases and the growth rate of the TiAl3 reaction layer from the Ti matrix toward the Al melt also decreases. The reaction layer is uniform everywhere, and its growth rate is roughly the same at all points, decreasing slightly with time. It is closely bound to the Ti matrix.

The matrix of a titanium alloy with a rough surface has a more varied surface profile and more cracks than a smooth one. Therefore, the actual Al/Ti contact area is greater than that of a surface with less roughness, and the diffusion rate is also greater. The uneven concentration distribution of Ti atoms diffusing out of the titanium alloy is attributed to the varying contour height of the surface, resulting in a TiAl3 reaction layer of varying thickness and discontinuity. The interior of the reaction layer is also relatively loose, making it susceptible to the flow of the aluminum liquid, so it separates from the interface into the aluminum melt. The loose reaction layer on a rough surface allows TiAl3 particles to break away more easily. The growth rate of the reaction layer on rough surfaces is greater than that on smooth surfaces. As the reaction time increases, the TiAl3 reaction layer grows from the Ti base toward the Al liquid side and its growth rate decreases, but it remains higher than that of the smooth surface.
Conclusions

In this paper, the corrosion properties of titanium alloys with different surface roughness were studied, and the surface morphology and profile, weight loss and weight loss rate, roughness, and growth of the interfacial reaction layer were analyzed. The reaction layer at the solid-liquid interface was identified by EDS and XRD phase analysis, and the formation time of the corrosion products was observed. The corrosion behavior of a titanium alloy radiation rod with different roughness in aluminum melt was thus investigated. The conclusions are as follows:

(1) Chemical corrosion of titanium alloys occurs in aluminum melts. The rougher the surface of the titanium alloy, the higher the weight loss rate, and the weight loss rate varies nonlinearly during the corrosion process. The weight loss rates for titanium alloys with roughness Ra0.4 μm, Ra7.2 μm, Ra9.5 μm, and Ra9.8 μm are 0.16 mg per min, 0.25 mg per min, 0.37 mg per min, and 0.29 mg per min, respectively. The weight loss rate is influenced by surface flatness: a surface with poor flatness and high roughness loses weight faster than one with good flatness and low roughness.

(2) The Al/Ti interfacial material formed by chemical corrosion between the titanium alloy and the aluminum melt is the TiAl3 compound. Under different roughness conditions, reactants appear at the Al/Ti solid-liquid interface at about 4 min, and the TiAl3 reaction layer appears at 12 min.

(3) The reaction layer at the interface with low roughness is flat and dense, while the reaction layer at the interface with high roughness is loose and has many defects. Furthermore, the growth rate of the reaction layer decreases slightly with reaction time. The higher the surface roughness, the higher the growth rate of the TiAl3 reaction layer on the titanium alloy matrix.

Figure 3. The weight loss rate curves.
Figure 4. Surface morphology of samples (a) Ra9.5 and (b) Ra9.8; (c) and (d) show the morphology curves of the two samples over their respective sampling lengths.
Figure 7. (a) Profile topography of the reaction layer, (b) EDS element analysis.
Figure 8. SEM images of reaction layers with different experimental durations.
Figure 9. Reaction layer growth curves of samples with different roughness.
Figure 10. XRD pattern of sample A at 12 min.
Table 1. Summary of different types of radiation rods and alloys.
Table 2. Contents of major solute elements in TC4 (wt%).
Structure-mining: screening structure models by automated fitting to the atomic pair distribution function over large numbers of models

Structure-mining finds and returns the best-fit structures from structural databases given a measured pair distribution function data set. Using databases and heuristics for automation, it has the potential to save experimenters a large amount of time as they explore candidate structures from the literature.

Introduction

The development of science and technology is built on advanced materials, and new materials lie at the heart of technological solutions to major global problems such as sustainable energy (Moskowitz, 2014). However, the discovery of new materials still requires a great deal of labor and time. The idea behind materials genomics (White, 2012) is to develop collaborations between materials scientists, computer scientists, and applied mathematicians to accelerate the development of new materials through the use of advanced computation such as artificial intelligence (AI), for example by predicting undiscovered materials with interesting properties (Simon et al., 2015; Curtarolo et al., 2013). The study of material structure plays a key role in the development of novel materials. Structure solution of well ordered crystals is largely a solved problem, but for real materials, which may be defective or nanostructured, being studied under real conditions, for example in high-throughput in situ and operando diffraction experiments such as in situ synthesis (Cravillon et al., 2011; Jensen et al., 2012; Friščić et al., 2013; Saha et al., 2014; Shoemaker et al., 2014; Katsenis et al., 2015; Olds et al., 2017; Terban et al., 2018), determining structure can be a major challenge that could itself benefit from a genomics-style approach. Here we explore a data-mining methodology for the determination of inorganic materials structures. The approach can rapidly screen large numbers of structures in a manner that is well matched to the kinds of high-throughput experiments being envisaged in the materials genomics arena.

A number of structural databases are available for inorganic materials containing structures solved from experimental data, such as the Inorganic Crystal Structure Database (ICSD) (Bergerhoff et al., 1983; Belsky et al., 2002), the American Mineralogist Crystal Structure Database (AMCSD) (Downs & Hall-Wallace, 2003), the Crystal Structure Database for Minerals (MINCRYST) (Chichagov et al., 2001), and the Crystallography Open Database (COD) (Gražulis et al., 2009). More recently, databases of theoretically predicted structures have begun to become available, such as the Materials Project Database (MPD), the Automatic Flow Library (AFLOWLIB) (Curtarolo et al., 2012), and the Open Quantum Materials Database (OQMD) (Saal et al., 2013; Kirklin et al., 2015). Structural databases such as that of the International Centre for Diffraction Data (ICDD, 2019) have for some time been used for phase identification purposes. In phase identification studies no model fitting is carried out; instead, phases are identified in a powder pattern by matching sets of the strongest Bragg peaks from the database structures to peaks in the measured diffractogram (Hanawalt et al., 1938; Marquart et al., 1979; Gilmore et al., 2004). Our goal is not just phase identification, but the high-throughput automated refinement of structural models fit to measured diffraction data.
In our implementation we fit measured atomic pair distribution function (PDF) data, which has the additional benefit of allowing us to model nanostructured materials on the fly, as well as crystalline materials. PDF analysis of x-ray and neutron powder diffraction datasets has been demonstrated to be an excellent tool for studying the structure of many advanced materials, especially nanostructured materials (Zhang et al.; Toby et al., 1989; Billinge et al., 1996; Billinge & Kanatzidis, 2004; Keen & Goodwin, 2015). The PDF gives the scaled probability of finding two atoms in a material a distance r apart and is related to the density of atom pairs in the material. It does not presume periodicity, so it goes well beyond just well ordered crystals (Egami & Billinge, 2012). The experimental PDF, denoted G(r), is the Qmax-truncated Fourier transform of the total scattering structure function, F(Q) = Q[S(Q) − 1]:

G(r) = (2/π) ∫_{Qmin}^{Qmax} F(Q) sin(Qr) dQ   (Farrow & Billinge, 2009),

where Q is the magnitude of the scattering momentum transfer. The structure function, S(Q), is extracted from the Bragg and diffuse components of x-ray, neutron, or electron powder diffraction intensity. G(r) can be calculated from a given structure model (Egami & Billinge, 2012), and once the experimental PDFs are determined they can be analyzed through modeling. PDF modeling is performed by adjusting the parameters of the structure model, such as the lattice parameters, atom positions, and anisotropic atomic displacement parameters, to maximize the agreement between the PDF calculated from the structure model and the experimental PDF.

A number of PDF structure modeling programs are available for crystalline or nanocrystalline inorganic materials (Cranswick, 2008). Small-box modeling programs use a small number of crystallographic parameters with a periodic structural model (Egami & Billinge, 2012). Three widely used examples are PDFGUI, TOPAS (Coelho, 2018), and DIFFPY-CMI, among others (Petkov & Bakaltchev, 1990; Proffen & Billinge, 1999; Gagin et al., 2014). Big-box modeling programs, which move large numbers of atoms to minimize the difference between the observed and calculated PDFs, usually implement the reverse Monte Carlo (RMC) method (McGreevy & Pusztai, 1988; McGreevy, 2001); examples are RMCProfile (Tucker et al., 2007), DISCUS (Proffen & Neder, 1997; Page et al., 2011), and Full-RMC (Aoun, 2016). Other modeling programs use a hybrid approach in which a large number of atoms are in the box, but the program refines only a small number of parameters, such as EPSR (Soper, 2005). Though powerful for understanding the structure of complex materials, PDF modeling and structure refinement are difficult and present a steep learning curve for new users. There are two major challenges. The first is that PDF structure refinement requires a satisfactory, plausible starting model to achieve a successful result. The second is that the refinement process is a non-linear regression that is highly non-convex and generally requires significant user input to guide it to the best fit whilst avoiding overfitting. A more automated refinement program such as we propose here needs to address both issues. Model selection traditionally requires significant chemical knowledge and experience, but can be quite challenging when unknown impurities or reaction products are present in the sample.
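For orientation, the truncated sine transform in the equation above is straightforward to evaluate numerically. A minimal sketch with a synthetic F(Q) follows; this is illustrative only, and real data reduction uses dedicated programs such as PDFGETX3, discussed below:

```python
# Hedged sketch: numerical evaluation of the Qmax-truncated transform
# G(r) = (2/pi) * integral_{Qmin}^{Qmax} F(Q) sin(Qr) dQ
# using a synthetic F(Q); not an experimental data pipeline.
import numpy as np

qmin, qmax = 0.5, 24.0                       # 1/angstrom
q = np.linspace(qmin, qmax, 4000)
# Synthetic structure function standing in for F(Q) = Q[S(Q) - 1].
f = np.sin(2.8 * q) * np.exp(-0.002 * q**2)

r = np.linspace(0.5, 20.0, 800)              # angstrom
G = np.array([(2.0 / np.pi) * np.trapz(f * np.sin(q * ri), q) for ri in r])
print("G(r) peak near r =", round(r[np.argmax(G)], 2), "angstrom")
```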
To address the problem of phase identification, automated search-match algorithms for identifying phases in powder diffraction patterns have been developed and are widely used (Hanawalt et al., 1938; Marquart et al., 1979; Gilmore et al., 2004). There are also programs for helping to find candidate structures in structural databases (Toby, 2005; Altomare et al., 2008; Degen et al., 2014; Altomare et al., 2015). These search-match programs only work on reciprocal-space diffraction patterns, and in general do not allow for automated refinement of the structures. Some attempts have been made to couple Rietveld refinement programs to structural databases, such as Full Profile Search Match (Boullay et al., 2014), though this is limited to refining structures from the COD database. Alternatively, programs that use scripting, such as TOPAS (Coelho, 2018), have been used to automatically refine large numbers of candidate structures generated by symmetry-mode analysis from a given high-symmetry starting structure (Lewis et al., 2016). Furthermore, a structure screening approach in which large numbers of algorithmically generated small metal nanoparticle models were compared to PDF data was recently demonstrated (Banerjee et al., 2019). This approach, called cluster-mining, was successful at obtaining significantly improved fits over standard approaches to nanoparticle PDF data from simple models with a small number of refinable parameters. It also returned multiple plausible and well performing structures rather than just one best-fit structure, allowing the user to choose a model based on more information than just the PDF data. We would like to combine these approaches (database searching, auto-refinement, and screening of large numbers of structures) for the modeling of PDF data in general.

Here we describe an approach, which we call structure-mining, to automate and manage structure model selection and PDF refinement. To make the whole procedure as high-throughput and automatic as possible, the required user inputs are kept to a minimum: simply the experimental PDF data and the search criterion used to pull structures from databases. When finished, the best-fit candidate structures that were pulled from the data mine are returned to the experimenter for further detailed investigation. structure-mining currently supports both x-ray and neutron PDF datasets. This software enables high-throughput auto-refinement that may be used right after the PDF is obtained at a synchrotron x-ray or neutron beamline, unlike more traditional human-intensive approaches that typically take a large amount of time and effort after the experiment is over. It is designed to lighten the PDF modeling work after an experiment, but could also, in principle, be used for modeling PDF datasets in quasi-real-time during data acquisition at the beamline.

Approach

Structure-mining first obtains a large number of candidate structures from open structural databases. It then computes the PDFs of these structures and carries out structure refinements to obtain the best agreement between the calculated PDFs and the measured PDF under study. The initial implementation pulls from two commonly used open structural databases: the Materials Project Database (MPD) and the Crystallography Open Database (COD) (Gražulis et al., 2009). The structures are pulled directly from the databases using the RESTful API (Ong et al., 2015). There are many rules that could be used for selecting candidate structures to try.
In this initial implementation of structure-mining, we use the following heuristics:

(1) Pulling all the structures that have the same stoichiometric composition as provided by the experimenter.

(2) Pulling all the structures that contain all the elements in the originally provided composition, but not necessarily with the same stoichiometry.

(3) Pulling all the structures that contain all the elements provided in the composition but also additional elements.

(4) Finally, pulling all the structures that contain a subset of the elements in the originally provided composition, and any other elements.

These heuristics go from more restrictive to less restrictive and may be selected as desired; results on representative datasets are presented below, and a sketch of how such a pull can be scripted follows this section.

After pulling the structures from the databases, structure-mining builds a list of candidate structures and loads their cif files from the database into the DIFFPY-CMI PDF structure refinement program. DIFFPY-CMI works by first building a fit recipe, which is the set of information needed to run a model refinement to PDF data, and then executing it. The PDF fit recipe for each pulled structure is generated automatically. The fits are carried out over the range 1.5 < r < 20 Å on the Nyquist-Shannon sampling grid (Farrow et al., 2011). The following phase-related parameters are initialized and refined: a single scale factor with initial value 1.0; lattice parameters, constrained according to the crystal system, using the initial lattice parameter values of the pulled structures; an isotropic atomic displacement parameter (ADP), Uiso, for each element in the pulled structure, with initial value 0.005 Å2; and a spherical particle diameter (SPD) parameter that can be used if the PDF data are from nano-sized objects, with the experimenter specifying an initial value (in units of Å). The instrument resolution parameters, Qdamp and Qbroad, which are the parameters that correct the PDF envelope function for the instrument resolution (Proffen & Billinge, 1999; Farrow et al., 2007), are preferably obtained by measuring a standard calibration material in the same experimental geometry as the measured sample, and are fixed in the subsequent structure refinements of the measured sample PDF. They are applied according to the following strategy. If the experimenter specifies Qdamp and Qbroad values, those values are used and are fixed during the structure refinement. If they are not specified by the experimenter, the program makes a best-effort attempt to allocate meaningful values. This is currently done by storing a table of reasonable values by instrument. So far, we have established reasonable values for the XPD x-ray instrument and the NOMAD and NPDF neutron instruments. If the program cannot find reasonable values in its lookup table for a specified instrument, or if no instrument can be determined, standard global default values are selected. These are Qdamp = 0.04 Å−1 for rapid acquisition x-ray PDF (RAPDF) experiments (Chupas et al., 2003) and 0.02 Å−1 for time-of-flight (TOF) neutron PDFs. Similarly, Qbroad = 0.01 Å−1 and 0.02 Å−1 are the global defaults for RAPDF x-ray and TOF neutron measurements, respectively. In all cases where the user does not specify values for Qdamp and Qbroad, these parameters are allowed to vary in the refinement process.
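As an illustration of how a heuristic-1 pull against the Materials Project can be scripted, a hedged sketch using pymatgen's legacy MPRester is shown below. The method names follow the legacy pymatgen API as we understand it, the API key is a placeholder, and this should be read as a sketch rather than as the structure-mining code itself:

```python
# Hedged sketch: pulling candidate structures with the same composition
# (heuristic-1) from the Materials Project via pymatgen's legacy MPRester.
from pymatgen.ext.matproj import MPRester

composition = "BaTiO3"  # user-supplied composition (heuristic-1)

with MPRester("YOUR_API_KEY") as mpr:          # placeholder API key
    structures = mpr.get_structures(composition)  # final relaxed structures

for i, s in enumerate(structures):
    print(i, s.composition.reduced_formula, s.get_space_group_info())
    # Each pulled structure could then be written to CIF for the PDF
    # refinement engine, e.g. s.to(fmt="cif", filename=f"cand_{i}.cif")
```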
Different regression algorithms may be used to perform the structure refinement minimizing the fit residual, with the goodness-of-fit Rw given by

Rw = sqrt( Σi [Gobs(ri) − Gcalc(ri; P)]^2 / Σi [Gobs(ri)]^2 ),

where Gobs and Gcalc are the observed and calculated PDFs and P is the set of parameters refined in the model. Initially we use the widely applied damped least-squares method (the Levenberg-Marquardt algorithm) (Levenberg, 1944; Marquardt, 1963), as deployed in the Python package Scipy (Jones et al., 2001), to vary the adjustable parameters to achieve the best agreement between the calculated and measured PDFs, since none of the other algorithms for nonlinear least-squares problems, such as the Gauss-Newton method (Gauss, 1809), the modified Marquardt method (Fletcher, 1971), and the conjugate direction method (Powell, 1964), has been proved superior to this standard solution (Young, 1993; Floudas & Pardalos, 2001). However, DIFFPY-CMI supports the use of different minimizers, and implementations with different optimizers will be tested in the future.

During the structure refinement, different types of parameters have quite different characteristic behaviors. A systematic parameter turn-on sequence is important to achieve convergence, because turning on unstable parameters too early can result in divergent fits or trapping at false local minima. To make structure-mining highly automatic, without any human intervention during the whole procedure, we tested an automatic turn-on sequence that was suggested for conventional full-profile Rietveld refinement (Young, 1993), adapted for the differences between PDF and Rietveld refinement procedures. The current structure-mining deploys the following parameter turn-on sequence: initially the scale factor and lattice parameters are allowed to vary for a maximum of 10 iterations or until converged, whichever comes first; then all the isotropic ADPs are additionally turned on for a maximum of 100 iterations or until converged, whichever comes first; if the instrument resolution parameters, Qdamp and Qbroad, are allowed to refine during the fit, they are then additionally turned on for a maximum of 100 iterations or until converged. Finally, if the SPD is specified by the experimenter, it is additionally turned on for a maximum of 100 iterations or until converged, whichever comes first. If the refinement has not converged when the whole procedure is finished, the refinement stops, the latest goodness-of-fit value Rw is recorded, and the program continues with the next pulled structure. If the resulting Rw > 1.0 (an unconverged fit), it is marked as 1.0. This process is repeated for every structure pulled from the databases. When the program has looped over all the pulled structures it returns a plot of the best-fit goodness-of-fit parameter Rw of each model. We call this plot the structure-mining map (see a representative plot later in Fig. 1). The program also returns a detailed formatted table, suitable for inserting into a manuscript, summarizing the results of the structure-mining. The experimenter can also enter one, or multiple, structural result indices to generate a plot of the corresponding calculated and measured PDFs with the difference curves. Selected structural results may also be saved, including the calculated and difference PDF data files and the initial and refined structures in cif format.

Testing methodology

To test the method we selected PDFs of five different materials, testing both x-ray and neutron PDFs, as listed in Table 1. (Table 1 footnotes: b Lewis et al., 2018; c Frandsen et al., 2016b; d Frandsen & Billinge, 2015.)
The total scattering measurements were conducted at one synchrotron x-ray facility, the XPD beamline (28-ID-2) at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory, and at two neutron time-of-flight facilities, the NOMAD beamline (BL-1B) (Neuefeind et al., 2012) at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory, and the NPDF beamline (Proffen et al., 2002) at the Manuel Lujan Jr. Neutron Scattering Center, Los Alamos Neutron Science Center (LANSCE), Los Alamos National Laboratory. All of the datasets are from previously published work, indicated in the table, except for the Ti4O7 data, which are unpublished.

For the XPD beamline, the samples were sealed in 1 mm diameter polyimide capillaries mounted perpendicular to the beam, and the x-ray datasets were collected at room temperature using the rapid acquisition PDF method (RAPDF) (Chupas et al., 2003). A large-area 2D Perkin Elmer detector was mounted behind the samples. The collected data frames were summed, corrected for detector and polarization effects, and masked to remove outlier pixels before being integrated along arcs of constant Q, where Q = 4π sin θ/λ is the magnitude of the momentum transfer on scattering, to produce 1D powder diffraction patterns using the FIT2D program (Hammersley, 2016). Standard corrections and normalizations were applied to the data to obtain the total scattering structure function, F(Q), which was Fourier transformed to obtain the PDF using PDFGETX3 (Juhás et al., 2013) within XPDFSUITE. The incident x-ray wavelengths and the calibrated sample-to-detector distances are listed in the Appendix (Table 6).

For the NOMAD and NPDF beamlines, the samples were sealed in vanadium cans. The NOMAD experiment was carried out at room temperature (Frandsen et al., 2016b) and the data were reduced and transformed to the PDF using the automated data reduction scripts at the NOMAD beamline. For the NPDF beamline, the data were collected at 15 K (Frandsen & Billinge, 2015) and were reduced and transformed to the PDF using the PDFGETN program (Peterson et al., 2000). The full experimental details may be found in Refs. (Lombardi et al., 2019; Lewis et al., 2018; Frandsen et al., 2016b; Frandsen & Billinge, 2015). The maximum range of data used in the Fourier transformation, Q_max, and the instrument resolution parameters, Q_damp and Q_broad, which are relevant parameters for our structure-mining activity, were obtained by calibrating the experimental conditions in each case using a well-crystallized standard sample. The values are reproduced in the Appendix (Table 6).

Results

We first apply this approach to the measured PDF from barium titanate (BTO) nanoparticles, BaTiO3. BTO is one of the best studied perovskite ferroelectric materials (Frazer et al., 1955; Kwei et al., 1993). Heuristic-1 is applied, fetching all structures that have the same composition as the input, BaTiO3. The structure-mining results from the MPD and COD are shown in Fig. 1(a) and (b) and in Tables 2 and 3, respectively.

Figure 1: R_w values for each of the structures pulled from the databases for the BaTiO3 nanoparticle x-ray data using heuristic-1, fetching all the structures with composition BaTiO3 from (a) the MPD (green) and (b) the COD (blue). The R_w parameter represents the goodness-of-fit for each pulled structure.

The best-fit structures from each data mine were MPD structure No. 5 (Shirane et al., 1957) and COD structure No.
20 (Kwei et al., 1993), with R_w = 0.144 and 0.143, respectively. The calculated and measured PDFs are shown in Fig. 2(a) and (b), respectively.

Table 2: Structure-mining results for the BaTiO3 nanoparticle x-ray data using heuristic-1 from the MPD. Here No. refers to the structure index (Fig. 1(a)), which is the order pulled from the database, and s.g. represents the space group of the structure model. The initial isotropic atomic displacement parameter (U_iso) of all atoms in each structure is set to 0.005 Å² to start the structure refinements. The a, b, and c are the lattice parameters of the structure model. The subscript i indicates an initial value before refinement and the subscript r indicates a refined value. DB ID represents the database ID of the structure model. Q_max = 24.0 Å⁻¹, Q_damp = 0.037 Å⁻¹, and Q_broad = 0.017 Å⁻¹ were set and not varied in the refinements (see Section 2 for details).

Table 3: Structure-mining results for the BaTiO3 nanoparticle x-ray data using heuristic-1 from the COD. See the caption of Table 2 for an explanation of the entries.

Unlike the traditional manual PDF structure refinement methodology, the structure-mining approach followed by automated fitting resulted in satisfactory and reasonable fits without any human intervention. These structures may be investigated in more detail using traditional manual fitting approaches.

Figure 2: PDFs from representative satisfactory and unsatisfactory structures from (a, c) the MPD and (b, d) the COD. Blue curves are the measured PDF of the BaTiO3 nanoparticles. Red curves are the calculated PDFs after retrieving structures from the databases using heuristic-1 and automatically fitting them to the data (see Section 2 for details). Offset below in green are the difference curves.

Some structures retrieved from the mine also resulted in very poor fits, as shown in Fig. 2(c) and (d), which are the automatically determined fits of MPD structure No. 4 and COD structure No. 19 (Shirane et al., 1957), respectively. We expect this to occur when the structure pulled from the database differs from that of our sample, and it is exactly this automated screening of database structures to find the most plausible candidates that is the goal of structure-mining. However, we investigate this in more detail below.

The structure of this measured BaTiO3 nanoparticle dataset has been carefully studied before (Lombardi et al., 2019). In that work, it was reported that the structure of this nanoparticle sample was non-centrosymmetric and took one of the ferroelectric forms of the BaTiO3 structure (Kwei et al., 1993), among the distorted structures with space groups Amm2, P4mm, and R3m. All of these structures gave somewhat comparable fits to the data, and it was not possible to distinguish which among them was definitively the correct structure. Nearby centrosymmetric space groups also performed well based on R_w, but could be ruled out by careful consideration of the refined ADPs of the Ti ions. The MPD result, as shown in Table 2, clearly reveals that the top three best-fit structures are exactly the non-centrosymmetric ferroelectric forms of BaTiO3 with space groups Amm2, P4mm, and R3m. In addition, the closely similar centrosymmetric perovskite model with space group P4/mmm (No. 10, ranked 4) (Srilakshmi et al., 2016) gives a slightly worse but comparable R_w.
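The ranking step that produces these tables is itself simple and worth making explicit: sort candidates by R_w and keep those within a window ΔR_w of the best fit for closer manual inspection (a window of about 0.1 is the criterion used in the heuristic-2 discussion below). The record format here is an illustrative stand-in for the program's output table, and the No. 10 and hexagonal R_w values are toy numbers.

```python
def shortlist(results, window=0.1):
    """Sort pulled-structure results by R_w and keep those within
    `window` of the best-fit R_w for manual follow-up."""
    ranked = sorted(results, key=lambda rec: rec["rw"])
    best = ranked[0]["rw"]
    return [rec for rec in ranked if rec["rw"] - best <= window]

# Toy records loosely mimicking Table 2: ferroelectric forms rank at the
# top, a poorly fitting hexagonal model falls outside the window.
results = [
    {"no": 5,  "sg": "Amm2",     "rw": 0.144},
    {"no": 10, "sg": "P4/mmm",   "rw": 0.160},
    {"no": 1,  "sg": "P6_3/mmc", "rw": 0.700},
]
for rec in shortlist(results):
    print(f"No. {rec['no']:3d}  s.g. {rec['sg']:9s}  Rw = {rec['rw']:.3f}")
```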
Heuristic-1 has therefore found the correct candidate structural models from the MPD, as well as returning nearby structures for a more detailed manual comparison. The COD contained many more candidate structures for this composition (Table 3). Again, the structure-mining shows that the three best perovskite models, with space groups Amm2, P4mm, and R3m, are found as expected, along with the similar general barium titanate perovskite models (with slightly worse R_w) with space groups P4/mmm and Pm-3m. The COD result also returned a space group Pmm2 structure (No. 4) (Zeng & Jiang, 1991) with a reasonable fit (R_w = 0.168), which turns out to be a general perovskite structure having two half-occupied Ti sites at (0.5, 0.5, 0.509) and (0.5, 0.5, 0.491), similar to a doubled unit cell of the tetragonal barium titanate perovskite model with space group P4mm, albeit with a small orthorhombic distortion. This illustrates the power of the structure-mining approach, as it does a good job of finding all plausible structures in the database. These can then be considered, and ruled out, by researchers based on other criteria. There is also a hexagonal perovskite structure (space group P6_3/mmc) in the databases for BaTiO3, and this gives a very poor fit to the BaTiO3 nanoparticle data from both the MPD (No. 1) (Akimoto et al., 1994) and the COD (No. 7) (Akimoto et al., 1994), showing that the approach is capable of finding true positive and true negative results.

The structure-mining gives the COD structure No. 19 (space group P4mm) (Shirane et al., 1957) a bad fit because the model is wrong, with the Ti ion sitting at 1b (0.5, 0.5, 0.265) and the O2 ion sitting at 2c (0.5, 0, 0.236), significantly offset from the correct positions, in which the Ti ion is at or near the center of the unit cell. We checked the reference for this database entry (COD ID: 9014273), and it turned out to be correct in the paper but a wrong entry in the database, because the reference reported that the Ti ion was at 1b (0.5, 0.5, 0.515) and the O2 ion was at 2c (0.5, 0, 0.486) (Shirane et al., 1957). This indicates that the structure-mining approach may actually help to find errors in the databases but, at worst, will not return incorrect structures as candidate models.

Interestingly, the mining operation did report one false negative. It missed one of the plausible perovskite structural models in the MPD database, the cubic heterostructure model with space group Pm-3m (MPD No. 4), which was correctly found in the COD database. The reason this did not give a good refinement is that the starting lattice parameters taken from the database were much too large, and the automated refinement could not converge to the correct minimum, resulting in a poor fit. Although we refine the lattice parameters during the process, if the starting value is too far from the correct one, it is possible that the refinement program will not be able to find the right solution in parameter space, resulting in a poor fit and a false-negative result. Strategies for improving convergence could be considered in the future. However, in some respects this is a success of the program, because we actually hope that incorrect models in the database will fit the data poorly, and if the value of the lattice parameter recorded in the database is far from being correct for the measured sample, in some sense this constitutes a bad model. Similar lattice parameter situations occur for MPD Nos.
0 (Xiao et al., 2008), 2 (Donohue et al., 1958), 3 (Xiao et al., 2008), and 8 (Hayward et al., 2005). The entries in the MPD that are taken from the ICSD database have gone through an energy relaxation step using density functional theory (DFT) (Hohenberg & Kohn, 1964; Kohn & Sham, 1965) before the crystal structures are deposited in the MPD. For some reason, the DFT relaxation took some of the lattice parameters rather far from the experimental values in the original structure reports (Xiao et al., 2008; Donohue et al., 1958; Hayward et al., 2005).

Overall, the heuristic-1 approach already returned the correct structures for the BaTiO3 nanoparticles. The complete mining operation took 29.3 seconds for the MPD search and 47.8 seconds for the COD search to complete, using an ordinary laptop.

We next test the more loosely filtered heuristic-2 approach on the BaTiO3 nanoparticle data. The structure-mining results from the MPD and COD, fetching all structures that contain just the Ba, Ti, and O elements with any composition, are shown in Fig. 3(a) and (b).

Figure 3: R_w values for each of the structures pulled from the databases for the BaTiO3 nanoparticle x-ray data using heuristic-2, fetching all the structures with Ba, Ti, and O elements from (a) the MPD (green) and (b) the COD (blue).

Heuristic-2 found all the structures that were found with heuristic-1, as expected. This approach also found a number of additional good structural candidates. The MPD returned three more that were within ΔR_w ≈ 0.1 of the best-fit R_w (approximately 0.14) (Wada et al., 2000), where ΔR_w is the deviation in R_w of a structure from the R_w of the best-fit structure. Close inspection of these models indicates that they have a stoichiometry that is approximately the Ba:Ti:O = 1:1:3 ratio. They are really oxygen-deficient forms of the standard 113 structure that either use fractional occupancies or are expressed in a supercell of the original 113 unit cell. For the nanoparticle data that we mined against, the second-best-fit model from heuristic-2, MPD No. 43 (Ba12Ti12O27), is an oxygen-deficient structure resulting in R_w = 0.146, comparable to the best-fit non-defective 113 model, MPD No. 19 (BaTiO3) (Shirane et al., 1957), with R_w = 0.144. Another oxygen-deficient structure (MPD No. 44) (Woodward et al., 2004) was the third-best-fitting model from the mine. This does not, a priori, indicate that the nanoparticles are oxygen deficient; the proposition has to be assessed by more careful modeling, but the result of structure-mining does suggest that the BaTiO3 nanoparticle sample may have oxygen deficiency. To test this proposition we tried manually fitting the nanoparticle data with a non-defective model, MPD No. 19 (Shirane et al., 1957), but where we allowed the oxygen occupancy to vary. The best-fit structure refined with an oxygen occupancy of 0.91 on each oxygen site, with a corresponding slight reduction of the oxygen ADP from 0.013 Å² to 0.012 Å² and a lower R_w. All in all, this suggests that oxygen is most likely deficient in these nanoparticle samples, something that was not investigated in the original structure refinements (Lombardi et al., 2019) but is flagged by the structure-mining.

The heuristic-2 structure-mining operation also, as expected, returned some structures from the databases for which the atomic composition ratio was not close to 1:1:3.
None of these additional structures gave reasonable fits to the PDF, resulting in poor R_w values larger than 0.4 for the MPD (such as MPD No. 6) and 0.6 for the COD (such as COD No. 34 (Vanderah et al., 2004)). The entire search process took 493.7 seconds for the MPD and 469.5 seconds for the COD.

The heuristic-3 approach was also tested on the BaTiO3 nanoparticle data, by pulling all structures that contain the Ba, Ti, and O elements and one additional element, with any stoichiometry. More details about the results can be found in the supporting information CSV files. It took about 10.3 minutes for the MPD (57 structures pulled in total) and 41.0 minutes for the COD (103 structures in total) to finish. Of the new structures found, most of the best-fit structures have slightly worse R_w (∼0.2) than those from heuristics 1 and 2 (∼0.14). The new structures pulled mostly substitute another element onto the Ba or Ti site and also have an approximate 113 stoichiometry, such as MPD No. 43 (Ba3Sr5Ti8O24) and COD No. 22 (Ba0.93Ti0.79Mg0.21O2.97) (Wada et al., 2000), which agrees with what was found with heuristic-2.

Finally, we tested the very loose heuristic-4 approach. Here the experimenter can freely choose any search criteria, such as Ba-Ti-*, Ba-*-O, or even *-*-*, where * represents an arbitrary element. In our case we set the search to require that the structure contain Ba and two other arbitrary elements with any stoichiometry, i.e. Ba-*-*. The structure-mining map plot is shown in Fig. 4. This search took much longer: 174.3 and 205.2 minutes on a single CPU core for the MPD and COD, respectively. This may be sped up by running on more cores. In total, 1833 structures were pulled from the MPD and 1046 structures from the COD. More details about the results are available in the supporting information CSV files. The less restrictive heuristic-4 found all the structures that were found with heuristics 1 and 2, as expected. The normal BaTiO3 perovskite structures are still ranked at the top. Following those, it additionally returns some perovskite structures that have Ti replaced by other species with x-ray scattering power similar to that of Ti, such as MPD No. 1660 (BaVO3) (Nishimura et al., 2014), MPD No. 1268 (BaMnO3), and COD No. 683 (BaFeO3) (Erchak et al., 1946). These gave agreements of R_w ≈ 0.2, compared to 0.14 for the best-fit structures (BaTiO3). So the structure-mining is able to distinguish these nearby but incorrect structures from those with the correct atomic species. The perovskite structures with the B-site element replaced by one with a significantly different x-ray scattering power than Ti resulted in significantly poorer R_w, away from the best-fit structures by ΔR_w ∼ 0.15, such as MPD No. 1482 (BaRhO3) (Balachandran et al., 2017) and COD No. 431 (BaNbO3) (Grin et al., 2014). Overall, we achieved a satisfactory result for the barium titanate nanoparticle dataset using all four structure-mining heuristics.

Figure 4: R_w values for each of the structures pulled from the databases for the BaTiO3 nanoparticle x-ray data using heuristic-4, fetching all the structures with Ba and two other arbitrary elements from (a) the MPD (green) and (b) the COD (blue).

We now test structure-mining on some different structures, for example the low-symmetry Ti4O7 system. Its published room-temperature crystal structure is a triclinic model (space group P-1) with all the atoms sitting on (x, y, z) general positions (Marezio & Dernier, 1971).
We used the structure-mining heuristic-2 approach, pulling all the structures that contain the Ti and O elements with any stoichiometry. The structure-mining map plot is shown in Fig. 5 and the detailed results are available in the supporting information CSV files. The top seven structure-mining results are also summarized in Table 4.

Table 4: The top seven structure-mining results for the Ti4O7 experimental x-ray PDF using heuristic-2 on data from the MPD and COD. See the caption of Table 2 for an explanation of the entries. The full table can be found in the supporting information CSV files. The initial lattice parameters and refined ADPs are listed. The refined lattice parameters are not listed because they are close to the initial values.

The titanium oxides have many different structures, largely depending on the stoichiometry (98 structures were pulled by structure-mining from the MPD and 77 structures from the COD), but structure-mining returned the published structure for Ti4O7 at the top, i.e. COD No. 20 (Marezio & Dernier, 1971). This is a challenging problem because there are similar structures belonging to the Ti_nO_{2n-1} Magnéli homologous series (Andersson & Magnéli, 1956; Andersson et al., 1957). Among the top 7 entries, the other 4 Ti4O7 structures are very similar to COD No. 20. COD 20 is reported in a different structural setting than the other 4 (Setyawan & Curtarolo, 2010), which explains the rather different values of the lattice parameters, but the only real structural difference between COD 20 and the other Ti4O7 structures reported in Table 4 is that one oxygen position is shifted by about 0.7 Å along the b-axis compared to the other four. This is a significant structural difference that nevertheless does not result in a very large difference in R_w, so differentiating these two structures probably deserves some additional consideration by the experimenter. Atomic positions are not refined independently during the structure-mining process, and it is possible that this discrepancy would be resolved by a full refinement of the best-performing models; the result also suggests to the user the oxygen b-axis position as a possibly relevant variable. Structure-mining also returned some results with slightly different stoichiometry and similar R_w values, for example MPD No. 38 (Ti5O9) (Marezio et al., 1977), which belongs to a different variant of the Magnéli series. The Magnéli phases are constructed from similar TiO6 octahedral motifs, containing rutile-like slabs extending infinitely in the a-b plane, but the TiO6 octahedra are stacked along the c-axis in slabs of different widths depending on the composition (Andersson & Magnéli, 1956; Andersson et al., 1957; Marezio et al., 1977). In Ti4O7, every oxygen atom connects four octahedra, but in Ti5O9 (MPD 38) oxygen atoms link three octahedra. Despite these differences, the MPD 38 model performs similarly to, albeit somewhat worse than, some of the well-performing Ti4O7 models, suggesting that it at least warrants being explicitly ruled out as a candidate in more careful modeling. This illustrates how the structure-mining approach, beyond just automatically finding the "right" structure, can add value by suggesting alternative nearby models to the experimenter. We also note that, from Table 4, COD No.
36 (Ti5O9, s.g. P1) (Andersson, 1960) performs worse (R_w > 0.2), and it is the first model that has a significantly different structure, in which some Ti atoms are tetrahedrally rather than octahedrally coordinated by oxygen. This model can probably be ruled out on the basis of structure-mining alone.

Now let us turn to a challenging dataset: nanowire bundles of a pyroxene compound with a generic composition XYSi2O6 (where X and Y refer to metallic elements such as, but not limited to, Co, Na, and Fe). This example is particularly challenging because the samples formed as nanowires that were reported to be ∼3 nm in width (Lewis et al., 2018). In that work, a series of candidate structures were tried manually, and the best-fit model was found to be monoclinic NaFeSi2O6 with space group C2/c (Clark et al., 1969). The structure-mining heuristic-1 approach is tested first. The MPD found one structure (Clark et al., 1969) and the COD found six non-duplicated structures (Sueno et al., 1973; Thompson & Downs, 2004; Redhammer et al., 2000; Redhammer et al., 2006; Nestola et al., 2007b; McCarthy et al., 2008), all having a quite similar structure, NaFeSi2O6 (s.g. C2/c). The returned structure-mining results have R_w ≈ 0.35. These are poor fits overall, but comparable to the fits reported in the prior work (Lewis et al., 2018). Although the R_w is not ideal, possibly due to the sample's complicated geometry, structural heterogeneity, and defects, the structure-mining approach still seems to be working. The heuristic-2 (Na-Fe-Si-O) and heuristic-3 (Na-Fe-Si-O-*) approaches found similar results, with heuristic-3 finding some Ca- and Li-doped compounds, albeit with the same structure.

The least restrictive heuristic-4 approach was also tried. Here we show the result of fetching all the structures that contain the Si and O elements and two other arbitrary elements with any stoichiometry, i.e. *-*-Si-O (Fig. 6). The mining operation took about 12 hours for the MPD (1700 structures pulled in total) and 122 hours for the COD (3187 structures in total) to finish. The COD search is significantly more time-consuming because many of the structures pulled from the COD contain large numbers of hydrogen atoms, which could be neglected in the x-ray PDF calculation to shorten the running time in future work. More details about the results are available in the supporting information CSV files; the top ten entries across the MPD and COD are listed for convenience in Table 5. The returned NaGaSi2O6 entries (s.g. C2/c) (Ohashi et al., 1983; Ohashi et al., 1995; Nestola et al., 2007a) have a structure similar to NaFeSi2O6 (s.g. C2/c). Both fit the experimental data comparably well, with NaGaSi2O6 slightly preferred. The NaGaSi2O6 solution can be ruled out on the basis that no Ga was present in the synthesis.

Table 5: The top ten structure-mining results for the NaFeSi2O6 nanowire experimental x-ray PDF using heuristic-4 on data from the MPD and COD, pulling all the structures that contain the Si and O elements and two other arbitrary elements with any stoichiometry, i.e. *1-*2-Si-O, where *1 and *2 represent the first and second atoms in the formula, respectively. See the caption of Table 2 for an explanation of the entries. The full table can be found in the supporting information CSV files. The refined lattice parameters and ADPs are listed; the initial lattice parameters are not listed because they are close to the refined values, and the refined lattice parameters are mostly slightly larger than the initial values.
The x-ray scattering powers of Fe and Ga are similar, with that of Ga slightly higher (Z(Fe) = 26, Z(Ga) = 31). The fact that structure-mining prefers to put an element of slightly higher atomic number, Z, at this position suggests that we have the right structure, but that some details of the refinement need to be worked out by the experimenter. Structure-mining also indicates that the refined lattice parameters are mostly slightly larger than the initial values. This example illustrates how careful interrogation of the fits to the pulled structures, compared to the original parameters, can highlight possible defects or impurities and guide the experimenter towards what to search for. The MPD also returned some computed theoretical structures with space group C2, MPD No. 377 (Ca0.5NiSi2O6, s.g. C2) and MPD No. 294 (Ca0.5CoSi2O6, s.g. C2). These perform slightly less well than the fully stoichiometric NaGaSi2O6 and NaFeSi2O6 structures. Inspection of these structures indicates that they are very similar in nature, but with a lowered symmetry due to missing Ca ions, and they can probably be ruled out, though the fact that structure-mining finds them may suggest trying sub-stoichiometric models on the alkali-metal site. Overall, heuristic-4 returned a number of isostructural models with different compositions. For this system, it is possible that the ground-truth answer is not limited to the pure NaFeSi2O6 (s.g. C2/c) stoichiometry, and substitution by impurity ions or atomic deficiencies may occur in such a complicated synthesis (Lewis et al., 2018). The candidate structures found by structure-mining are valuable for resolving this ambiguity. Furthermore, the structure-mining approach yields different but similarly well-fitting models, which can also give meaningful information about uncertainty estimates on refined parameters such as the metal or oxygen ion positions. This test again shows the huge potential of structure-mining on PDF data to make experimenters aware of possible structural solutions that were overlooked or not considered in the traditional workflow.

Figure 7: R_w values for each of the structures pulled from the databases for the Ba0.8K0.2(Zn0.85Mn0.15)2As2 neutron data, fetching (a) Ba-Zn-As-K-Mn, (b) Ba-Zn-As-*-*, (c) Ba-Zn-As-*, and (d) Ba-Zn-As from the MPD (green) and the COD (blue). The best-fit model, MPD No. 1 (BaZn2As2) in (d), is marked by a red circle.

Next, we test structure-mining on a complicated doped material, Ba1-xKx(Zn1-yMny)2As2. We used the neutron PDF data with composition (x, y) = (0.2, 0.15), which has both A-site and B-site doping. Its published room-temperature crystal structure is tetragonal with space group I4/mmm (Frandsen et al., 2016b). First we applied heuristic-2, specifying all the elements including the dopants, i.e. fetching Ba-Zn-As-K-Mn structures regardless of stoichiometry. This returned no structures from either the MPD or the COD. We next tested a heuristic-4 approach with Ba-Zn-As-*-*. This did result in two structures being returned, but both were incorrect compounds, Ba2MnZn2(AsO)2 (Ozawa et al., 1998) and BaZn2As3HO11, with R_w values close to 1, as shown in Fig. 7(b).
Additionally, the heuristic-4 approach was tested looking for a sample with doping on only one site (Ba-Zn-As-*), but it still found only incorrect structures, as shown in Fig. 7(c). Finally, we resorted to a heuristic-2 approach giving only the composition of the undoped end-member, Ba-Zn-As. This did find the correct structure, the tetragonal phase MPD No. 1 (BaZn2As2, s.g. I4/mmm) (Hellmann et al., 2007), marked by the red circle in Fig. 7(d), even though we were fitting the doped data. This suggests a good strategy for doped systems that are not represented in the databases: search for the parent undoped structure, on the basis that the doped structure may still be close to its parent phase, regardless of possible local structural distortions introduced by doping (Frandsen et al., 2016b). Starting from this success, the experimenter can then easily change the occupancy of the A-site or B-site, which is also how structural analysis was performed on this doped material previously (Zhao et al., 2013; Rotter et al., 2008). Structure-mining has thus proved to work well even for a complicated doped system.

Figure 8: The neutron PDF of the MnO data (blue curve) measured at 15 K, with the best-fit calculated atomic PDF (red) for MPD No. 41, the rhombohedral MnO model from heuristic-2. The difference curve is shown offset below (green). Notice the strong magnetic PDF signal in the difference curve, which did not confuse structure-mining.

Finally, we test the robustness of the structure-mining approach when the structural data also include non-structural signals, such as the magnetic PDF (mPDF) signal (Frandsen et al., 2014; Frandsen & Billinge, 2015; Frandsen et al., 2016a) in a neutron diffraction experiment on a magnetic material. To test this we consider the MnO neutron PDF data, measured at 15 K, which contain a strong mPDF signal. Early neutron diffraction studies reported that MnO has a cubic structure in space group Fm-3m at high temperature and undergoes an antiferromagnetic transition at a Néel temperature of T_N = 118 K, which results in a rhombohedral structure in space group R-3m (Shull et al., 1951; Roth, 1958). More recently it has been suggested that, at low temperature, the local structure has even lower symmetry, e.g., monoclinic in s.g. C2 (Goodwin et al., 2006; Frandsen & Billinge, 2015). Here we see which of these structural results are returned by the structure-mining process. The heuristic-2 approach is applied, i.e. fetching all the atomic structures with the Mn and O elements. The rhombohedral MnO model is the best-performing model (MPD No. 41 with R_w = 0.236, Fig. 8). The second-best fit is the cubic MnO model (COD No. 56 (Zhang, 1999) with R_w = 0.310). This correctly reflects the fact that at 15 K the material is expected to be in the rhombohedral phase. The monoclinic s.g. C2 model was not returned by structure-mining, but only because it is not present in any of the databases. The fit agreements are similar to those reported in (Frandsen & Billinge, 2015) when the magnetic model is not included in the fit (as is the case here). Therefore, even in the presence of significant magnetic scattering, structure-mining is able to find the correct solution. Interestingly, the cubic model was not present in the MPD and the rhombohedral model was not present in the COD, so the full picture was only obtained by mining multiple databases.
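A practical note on run time: the heuristic-3 and heuristic-4 searches above took from minutes up to roughly 122 hours on a single CPU core and, as remarked earlier, the per-structure refinements are independent, so they parallelize naturally over the pulled structures. A minimal sketch of that idea follows, where fit_one is a hypothetical stand-in for the automated single-structure refinement, with a toy scoring so the example runs:

```python
from multiprocessing import Pool

def fit_one(structure):
    """Refine one pulled structure against the measured PDF and return
    (identifier, R_w).  The toy scoring stands in for the real fit."""
    return structure["no"], min(structure["toy_rw"], 1.0)

def mine(structures, processes=4):
    """Fit all pulled structures in parallel and rank them by R_w."""
    with Pool(processes) as pool:
        scores = pool.map(fit_one, structures)
    return sorted(scores, key=lambda item: item[1])

if __name__ == "__main__":
    pulled = [{"no": i, "toy_rw": 0.1 + 0.05 * i} for i in range(8)]
    for no, score in mine(pulled):
        print(f"structure {no}: Rw = {score:.3f}")
```

Because each fit touches only its own structure, the expected speed-up is close to linear in the number of cores, up to I/O limits when loading cif files.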
Conclusion

In this paper we have demonstrated a new approach, called structure-mining, for the automated screening of large numbers of candidate structures against atomic pair distribution function (PDF) data, by automatically pulling candidate structures from modern structural databases and automatically performing PDF structure refinements to obtain the best agreement between the calculated PDFs of the pulled structures and the measured PDF under study. The approach has been successfully tested on the PDFs of a variety of challenging materials, including complex oxide nanoparticles and nanowires, low-symmetry structures, and complicated doped and magnetic materials. This approach could greatly speed up and extend the traditional structure-searching workflow and enable the possibility of highly automated and high-throughput real-time PDF analysis experiments in the future.

Table footnotes: a (Lombardi et al., 2019); b (Lewis et al., 2018); c (Frandsen et al., 2016b); d (Frandsen & Billinge, 2015).
2019-05-07T16:30:53.000Z
2019-05-07T00:00:00.000
{ "year": 2020, "sha1": "8cfcd7078e8b5dcf5bd1bec8caee251655720656", "oa_license": "CCBY", "oa_url": "https://journals.iucr.org/a/issues/2020/03/00/vk5039/vk5039.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "db51eaa4087ec189ed9cf81cd932b17272af4c3a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science", "Physics", "Computer Science" ] }
11921889
pes2o/s2orc
v3-fos-license
Clinico-Genetic Study of Nail-Patella Syndrome

Nail-patella syndrome (NPS) is an autosomal dominant disease that typically involves the nails, knees and elbows, together with the presence of iliac horns. In addition, some patients develop glomerulopathy or adult-onset glaucoma. NPS is caused by loss-of-function mutations in the LMX1B gene. In this study, the phenotype-genotype correlation was analyzed in 9 unrelated Korean children with NPS and their affected family members. The probands included 5 boys and 4 girls who were confirmed to have NPS, as well as 6 of their affected parents. All of the patients (100%) had dysplastic nails, while 13 patients (86.7%) had patellar anomalies, 8 (53.3%) had iliac horns, 6 (40.0%) had elbow contracture, and 4 (26.7%) had nephropathy, including one patient who developed end-stage renal disease at age 4.2. The genetic study revealed 8 different LMX1B mutations (5 missense mutations, 1 frame-shifting deletion and 2 abnormal splicing mutations), 6 of which were novel. A genotype-phenotype correlation was not identified, but inter- and intrafamilial phenotypic variability was observed. Overall, these findings are similar to the results of previously conducted studies, and the mechanisms underlying the phenotypic variations and the predisposing factors for the development and progression of nephropathy in NPS patients are still unknown.

of the patients by physical examination and radiologic studies. The presence of glaucoma in the adult family members of the patients was evaluated based only on a review of the patient history. Mutational analysis of the LMX1B gene was conducted for all 9 patients and their available family members. Genomic DNA was prepared from peripheral blood nucleated cells. The 8 coding exons of the LMX1B gene were then amplified by polymerase chain reaction (PCR) and directly sequenced. The sequences of the PCR primers are shown in Table 1. A recent study revealed that the coding sequence of the human LMX1B gene is longer than previously reported and includes an additional 23 amino acids at the N-terminus; therefore, the numbering of LMX1B mutations has been adjusted (10). However, in our study the older numbering system was used to make comparison with previously reported mutations easier. This study was approved by the Ethics Committee of Seoul National University Hospital, Seoul, Korea, and informed consent for the genetic analysis was obtained from all patients and/or their parents.

RESULTS

The probands included 5 boys and 4 girls, with a median age at the time of clinical diagnosis of NPS of 6.8 yr (range 1.2-13.6 yr). The presenting manifestations included symptoms associated with knee abnormalities (habitual patellar dislocation, pain, locking or clicking) in 6 patients, elbow contracture in 1 patient, nail hypoplasia in 1 patient, and generalized edema/nephrotic syndrome in 1 patient. The phenotypes and genotypes of the patients and their affected family members are summarized in Table 2. Analysis of the LMX1B gene revealed 8 different mutations, including 5 missense mutations, 1 frame-shifting deletion (c.680delA) and 2 abnormal splicing mutations (IVS1-1G>C and IVS1+5A>G). Two of the missense mutations were located in the LIM-B domain of LMX1B (p.His114Gln and p.Leu127Pro), and 3 were located in the homeodomain (p.Arg200Gln, p.Arg200Trp and p.Ala213Pro). The p.Arg200Gln mutation was found in 2 unrelated patients.
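The percentages quoted in the abstract are simple counts over the 15 affected individuals studied (9 probands plus 6 affected parents); a quick arithmetic check reproduces them:

```python
# Counts over the 15 affected individuals (9 probands + 6 affected parents)
# as reported in the text; the percentages match the abstract.
counts = {"dysplastic nails": 15, "patellar anomalies": 13,
          "iliac horns": 8, "elbow contracture": 6, "nephropathy": 4}
for finding, n in counts.items():
    print(f"{finding}: {n}/15 = {100 * n / 15:.1f}%")
```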
Three novel missense mutations (p.H114Q, p.L127P and p.R200W) that were identified in the probands were not detected in 100 control Korean subjects (200 alleles). In 6 of the families, one of the parents was affected both clinically and genetically; however, in two of the families no parental mutation was found, and these two patients (Table 2) were confirmed to have a de novo mutation. Finally, the patient in family 4 was an adopted son; therefore, his family history was unavailable.

Table 2: The phenotypes and genotypes of 9 patients with nail-patella syndrome and their affected parents. (Table footnote: the only phenotype observed in this patient was the absence of skin creases overlying the distal interphalangeal joints. ESRD, end-stage renal disease; PU/mHU, proteinuria and microscopic hematuria; HD, homeodomain; LIM-B, LIM-B domain.)

The clinical features were analyzed in the 9 index cases and 6 affected parents. All patients and affected parents were found to have dysplastic thumb nails, with or without involvement of the other finger nails. In addition, dysplastic toe nails were detected in 10 patients. Patellar anomalies were noted in all index cases and in 4 of the affected parents. These anomalies included aplasia in 2 patients and hypoplasia in 11. Elbow contractures (cubitus valgus), with or without radial head dislocation, were detected in 4 patients and 2 parents, and iliac horns were detected in 5 patients and 3 parents. Renal involvement was detected in 2 patients and their affected parents (families 1 and 2 in Table 2). In the patient from family 1, the renal involvement presented as full-blown nephrotic syndrome at the age of 2.2 yr, which did not respond to conventional oral steroid treatment and progressed to end-stage renal disease within 2 yr. A renal biopsy performed when she was 2.5 yr old revealed focal segmental glomerulosclerosis (segmental sclerosis and global sclerosis in 56% and 28% of the glomeruli, respectively). However, electron microscopic examination was unavailable due to the lack of glomeruli in the specimen. Her mother had suffered from asymptomatic proteinuria since the age of 18 and had progressed to end-stage renal disease at age 28. She had also developed a vestibular schwannoma at age 26. The patient in family 2 was diagnosed with NPS at age 6, at which time mild proteinuria and microscopic hematuria were detected. Renal disease in this patient remained stable up to the last follow-up, at age 9. Her father, who had the same mutation and clinical features, developed end-stage renal disease and underwent renal transplantation at age 35. Neither glaucoma nor hearing difficulty was detected in any of the patients or affected family members. The range and severity of the clinical manifestations differed between and within families. As an extreme example, while the patient in family 7 had the entire clinical tetrad, the only phenotype shown by his mother was the absence of skin creases overlying the distal interphalangeal (DIP) joints.

DISCUSSION

LMX1B is required for a wide range of developmental processes, including dorso-ventral patterning of the limb, differentiation of dopaminergic and serotonergic neurons, patterning of the skull, and normal development of the kidney and eye (1,5,7,9,14). Accordingly, NPS shows a variable phenotype with multi-organ involvement. Besides the classic clinical tetrad (dysplasia of the patellae, nails and elbows, and the presence of iliac horns), other components of the musculoskeletal system, such as muscles, tendons, and ligaments, can be affected.
In addition, other organs such as the kidneys, eyes and possibly the ears, nervous system and gastrointestinal tract can be affected as part of the syndrome (1,4). Although NPS is a highly penetrant hereditary disorder, it shows marked inter- and intrafamilial phenotypic variability (4). In this study, autosomal dominant inheritance was confirmed both phenotypically and genetically in 7 of the 9 families, while the remaining two patients had de novo mutations. In addition, although we did not identify a genotype-phenotype correlation, we did observe inter- and intrafamilial phenotypic variability.

Changes in the nails, which were detected in all of the patients in this study, are the most constant clinical feature of NPS and are detected in almost all patients. These features may include only triangular lunulae, which are one of the pathognomonic signs of NPS (15). Another common and sensitive sign of digital involvement is the loss of DIP skin creases (4), which was the only abnormality detected in the mother of Patient 7 in this study. Knee involvement, including typical patellar hypoplasia or aplasia, is also very common, and 6 of the 9 patients in this study visited the hospital because of symptoms associated with their knee joints. Elbow abnormalities, including limitation of joint motion, hypoplasia of the radial head, and subluxation or dislocation of the radial heads, can also occur, with or without antecubital pterygia. Iliac horns, bilateral conical bony processes that project postero-laterally from the central part of the iliac bones, are considered pathognomonic of NPS. A previously conducted review of NPS reported the following frequencies of the disease-associated findings: nail anomalies 95.1%, patellar involvement 92.7%, elbow dysplasia 92.5%, and iliac horns 70-80% (15). Additionally, a British study of 123 NPS patients from 43 families reported nail changes in 98% of the patients, knee symptoms in 74%, elbow symptoms in 33%, and iliac horns in 68% (4). The frequencies of the abnormalities observed in our study were similar to those observed in the British study.

The incidence of renal involvement in patients with NPS has been reported to be 12-62% (4,16), which is comparable to the result observed in the present study (27%). The earliest sign of renal involvement in NPS is proteinuria, with or without hematuria, which may remit spontaneously or progress to overt nephritis. Approximately 5-15% of all patients develop chronic renal failure (4,16). Thus, renal involvement is one of the major prognostic factors in NPS. In spite of this, the factors responsible for the development and progression of nephropathy in patients with NPS are largely unknown. However, a recent study drew the following conclusions regarding renal involvement in patients with NPS: 1) quantitative urinalysis revealed the presence of proteinuria in 21.3% of the patients, and microalbuminuria was detected in 21.7% of the patients without overt proteinuria; 2) proteinuria and microalbuminuria occurred significantly more frequently in females; 3) patients with an LMX1B mutation located in the homeodomain had a significantly greater occurrence and higher values of proteinuria than those carrying mutations in the LIM domains; and 4) a positive family history of nephropathy and the presence of radial head hypoplasia were associated with an increased individual risk of developing renal disease (17).
In our study, renal involvement was detected in 2 patients and their affected parents, three of whom were female. However, a mutation located in the homeodomain and radial head hypoplasia were detected in only one of the families. The most common renal pathologic findings characteristic of NPS are focal or diffuse, irregular thickening of the glomerular basement membrane (GBM) with patchy electron-lucent areas (the so-called 'moth-eaten' appearance) and irregular deposition of bundles of fibrillar collagen (type III collagen) within the GBM and the mesangial matrix. These characteristic ultrastructural features of the GBM have been observed in all biopsied NPS patients. However, the severity of these changes does not correlate well with the patient's age, the severity of proteinuria, the degree of impaired renal function, or even the presence of nephropathy (15,16,18-20). In our study, a renal biopsy was performed in only one patient (family 1), who had developed full-blown nephrotic syndrome at the age of 2.2 yr and end-stage renal disease at age 4. Her clinical course was rather unusual, because the progression to end-stage renal disease in patients with NPS is usually slow (15,16). Her renal biopsy, performed at age 2.5, revealed focal segmental glomerulosclerosis. However, the GBM changes could not be evaluated due to the lack of glomeruli in the specimen.

Open-angle glaucoma or ocular hypertension is a recently recognized phenotype of NPS (4,6,21). These lesions, which usually develop during adulthood, are treatable; therefore, regular ophthalmologic screening of patients with NPS should be strongly encouraged. In our study, no patients or adult family members with glaucoma were detected, but its presence was evaluated solely on the basis of patient history.

In conclusion, the phenotypic and genotypic features of the patients evaluated in this study were similar to those observed in previously conducted studies. In addition, inter- and intrafamilial variability of the phenotypes was observed, but no genotype-phenotype correlation was identified. The mechanisms underlying the phenotypic variations and the predisposing factors for the development and progression of nephropathy in NPS patients remain unknown.
2014-10-01T00:00:00.000Z
2008-01-01T00:00:00.000
{ "year": 2009, "sha1": "6a3fd382d004f71e058a8733250900dc60d748b4", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3346/jkms.2009.24.s1.s82", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6a3fd382d004f71e058a8733250900dc60d748b4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
815582
pes2o/s2orc
v3-fos-license
Multi-parameter approach to R-parity violating SUSY couplings

We introduce and implement a new, extended approach to placing bounds on trilinear R-parity violating couplings. We focus on a limited set of leptonic and semi-leptonic processes involving neutrinos, combining multidimensional plotting and cross-checking constraints from different experiments. This allows us to explore new regions of parameter space and to relax a number of bounds given in the literature. We look for qualitatively different results compared to those obtained previously under the assumption that a single coupling dominates the R-parity violating contributions to a process (SCD). By combining results from several experiments, we identify regions in parameter space where two or more parameters approach their maximally allowed values. In the same vein, we show a circumstance where consistency between independent bounds on the same combinations of trilinear coupling parameters implies mass constraints among slepton or squark masses. Though our new bounds are in most cases weaker than the SCD bounds, the largest deviations we find on individual parameters are factors of two, indicating that a conservative, order-of-magnitude bound on an individual coupling is reliably estimated by making the SCD assumption.

Introduction

It is well known that the standard model (SM) admits some "accidental" symmetries, such as the separate conservation of baryon (B) and lepton (L) number. In other words, the requirement of gauge invariance and renormalizability of the operators that appear in the Lagrangian does not allow the presence of terms that violate baryon or lepton number conservation. In the framework of the Minimal Supersymmetric Standard Model (MSSM) this is no longer true. In this model, operators built from the superpartners, which carry the same baryon and lepton numbers as the standard model fields but different spin or mass dimension, can violate B and L conservation; their conservation can instead be enforced by hand with the introduction of R-parity. This additional discrete symmetry of the spinorial charges allows the lightest supersymmetric particle (LSP) to remain stable, and is defined as [1]

R = (-1)^{3B+L+2S} ,     (1)

with S being the spin quantum number. All the standard model particles have R = 1, while their superpartners have R = -1. The phenomenological signatures of an unstable LSP have been investigated extensively in a variety of papers, at both lepton [2,3] and hadron [4] colliders. Generally, the signatures are consequences of the new interaction terms that arise in the superpotential or in the soft supersymmetry (SUSY) breaking part of the Lagrangian when the assumption of R-parity is lifted. Wide attention has been given to extracting bounds on these new couplings from precision tests of the standard model and from cosmological constraints. The extent of the literature on the subject is daunting: we refer the reader to Ref. [5] and references therein for a comprehensive review. Given the impact that a precise determination of the coupling sizes has on the phenomenological consequences, we think it is important to obtain them in the greatest possible generality. In this paper we do so by relaxing some of the assumptions that are commonly used in the literature. After reviewing the form of the R-breaking couplings and deriving the effective Lagrangians of interest, in Sec. 2 we describe the assumptions commonly used in the literature and introduce our extended approach. In Sec.
3 we derive new bounds on the R-breaking couplings from leptonic processes, while the bounds from semi-leptonic processes are treated in Sec. 4. In Sec. 5 we summarize our results and conclusions. We use the new PDG2008 [6] data to obtain bounds at 2σ that are, in the cases where new data have become available, more stringent than the existing ones under the standard SCD assumptions.

R-parity violating couplings and low energy effective Lagrangians

There is no theoretical argument that prevents the superpotential from having the following bilinear or trilinear terms (here and throughout we follow the conventions and notations of Ref. [7]):

W_{trilinear} = \frac{1}{2}\lambda_{ijk}\,\epsilon_{ab}\,\hat{L}^a_i \hat{L}^b_j \hat{E}_k + \lambda'_{ijk}\,\epsilon_{ab}\,\hat{L}^a_i \hat{Q}^{b\,l}_j \hat{D}_{k\,l} + \frac{1}{2}\lambda''_{ijk}\,\epsilon_{lmn}\,\hat{U}^l_i \hat{D}^m_j \hat{D}^n_k ,     (2)

W_{bilinear} = \kappa_i\,\epsilon_{ab}\,\hat{L}^a_i \hat{H}^b_2 ,     (3)

where the carets label the superfields corresponding to the standard model fields, the indices i, j, k = 1, 2, 3 label the fermionic generations, a, b = 1, 2 are SU(2)-doublet indices, and l, m, n = 1, 2, 3 are SU(3)-triplet indices. The λ_ijk couplings are antisymmetric in i, j due to the antisymmetry in a, b imposed by SU(2), while the λ''_ijk are antisymmetric in j, k due to the complete antisymmetry of ε_lmn required by SU(3). One can see that the first and second terms in Eq. (2), together with Eq. (3), violate L conservation, while the third term in Eq. (2) violates B conservation. On the other hand, phenomenological considerations show that the trilinear terms in λ_ijk and λ''_ijk cannot be simultaneously present with values large enough to affect the processes we study here, otherwise squark exchange would lead to unacceptable rates for proton decay [8,9].

Along with the superpotential terms, B and L can also be violated by 51 additional soft SUSY-breaking terms in the Lagrangian. Since they are not pertinent to the following discussion, we will not write them explicitly here. They can be found in Ref. [5], along with a discussion of the choice of basis in which the bilinear term in the R-parity violating superpotential, Eq. (3), is rotated away by an SU(4) transformation, so that the sneutrinos acquire a vacuum expectation value under electroweak symmetry breaking [10,11,12]. Consistent with the existing literature on trilinear R-parity violating bounds, as reviewed in [5], we choose here to work in the mass basis, assume that all bilinear R-parity violating terms in the tree-level Lagrangian are absent, and base our analysis solely on the trilinear terms.

Since R-parity violating terms are neither forbidden by gauge invariance nor by renormalizability, but rather constrained by phenomenological consistency, one can wonder to what extent R-parity could be broken, i.e. how big the couplings appearing in Eqs. (2) and (3) can be. Restricting our discussion to Eq. (2), determinations of the couplings' size are generally obtained in the literature by comparing an effective Lagrangian expressed in terms of the λ, λ' and λ'' couplings with the neutral and charged current interaction effective Lagrangian that describes fundamental tests of the standard model. We largely confine ourselves in this paper to flavor-conserving cases, to keep the presentation focused. The most general effective Lagrangian for the fermion-fermion neutral current interaction l l̄ → f f̄ at low energies reads

\mathcal{L}_{NC} = -\frac{4 G_F}{\sqrt{2}}\; \bar{l}\gamma^\mu L\, l \;\, \bar{f}\gamma_\mu \left[ (g_L + \epsilon_L) L + (g_R + \epsilon_R) R \right] f ,     (4)

where G_F is the Fermi coupling constant, L = (1 - γ_5)/2 and R = (1 + γ_5)/2 are the chiral projectors, g_L and g_R are the couplings to the chiral components of the fundamental spinors, and the ε's describe the "non-standard" part of the interactions. One requires that the R-breaking contributions do not exceed the limit imposed by the precision of the experimental measurements, thus obtaining bounds on the couplings.
As we have mentioned above, the simultaneous presence of leptonic and hadronic R-parity violating couplings is tightly constrained experimentally by the stability of the proton. One may then choose to consider either the λ_ijk couplings or the λ''_ijk couplings to be negligible. In this paper we deal strictly with processes that involve λ and λ', as their corresponding experimental signatures are clearer and the reported uncertainties are smaller.

An effective four-fermion Lagrangian, applicable to processes at energies small compared to the weak scale, can be obtained from the superpotential of Eq. (2) through

\mathcal{L} = -\frac{1}{2} \sum_{r,s} \bar{\psi}_r \left( \frac{\partial^2 W}{\partial \hat{S}_r\, \partial \hat{S}_s} \right) P_L \psi_s + \mathrm{h.c.} ,     (5)

where r, s span the superfields Ŝ of the superpotential, and ψ_{r,s} are the Majorana fermion fields entering the supermultiplets. The part involving semi-leptonic interactions is given by the second term of Eq. (2); application of Eq. (5) to this term yields the corresponding Yukawa couplings. The vertices can be obtained by defining Dirac spinors out of the Majorana fields, so that one gets, for the interaction part of the Lagrangian,

\mathcal{L}_{\lambda'} = \lambda'_{ijk} \left[ \tilde{\nu}^i_L \bar{d}^k_R d^j_L + \tilde{d}^j_L \bar{d}^k_R \nu^i_L + (\tilde{d}^k_R)^* (\bar{\nu}^i_L)^c d^j_L - \tilde{e}^i_L \bar{d}^k_R u^j_L - \tilde{u}^j_L \bar{d}^k_R e^i_L - (\tilde{d}^k_R)^* (\bar{e}^i_L)^c u^j_L \right] + \mathrm{h.c.}     (10)

The effective Lagrangian for the scalar-mediated four-fermion interactions, Eq. (11), can be obtained by combining the vertices of Eq. (10) and applying Fierz identities to the result. The effective Lagrangian of Eq. (11) introduces 135 independent parameters: 9 combinations in any two of the indices, times 3 combinations in the remaining index, which runs through the families of the 5 possible exchanged sparticles. The leptonic interaction effective Lagrangian, Eq. (13), is obtained by applying the same procedure to the first term in Eq. (2) [3]; in Eq. (13), i < j is understood. The same antisymmetry in the i and j indices of the λ couplings reduces the number of effectively independent couplings encompassed in Eq. (13).

The limits in the literature are obtained under the assumption that a single coupling dominates the R-parity violating contributions to a process (SCD). This assumption rests on the premise that some hierarchy exists between the leptonic, semi-leptonic and hadronic couplings, or between the different fermionic families. Besides, the couplings often enter as sums of squares, so that one might guess that the most conservative bounds follow from this hypothesis. We found that in most cases this is not so. It is an open question whether such a hierarchy does indeed exist. In the absence of a theoretical guide, we apply a "multi-parameter" approach to placing bounds, to explore new regions of parameter space. In Sections 3 and 4 we give examples of our approach and contrast the results with those of the SCD simplification.

Notation and conventions

We think it is important at this point to clarify our notation, as the originality of our contribution rests on making explicit use of some properties of the R-parity violating couplings that are often overlooked in the literature, partly because the established notation bears some elements of ambiguity. As far as SCD is concerned, the concept was originally formulated by Dimopoulos and Hall [2]. As generally applied, one assumes that a single coupling (or a single product of couplings) is much larger than the others, which can therefore be neglected when placing bounds. In most of the corrections to the SM that involve R-parity violating couplings, more than one coupling is present, and often this simultaneous presence is not clear in the notation. For example, when the process at hand involves the four-fermion interactions described by Eqs.
(11) and (13), the initial and final states of the scattering or decay are supposed to be completely known, whereas the exchanged sparticle, whether a squark or a slepton, can be of any generation. Thus, since this flavor is unknown, one always has to sum over the families of the sparticles compatible with the relevant vertices. So, it is important to understand that a bound that reads, for example, |λ_12k| ≤ 0.15 (ẽ_Rk) can be taken to mean either

|λ_{12k}| \left( \frac{100\ \mathrm{GeV}}{m_{\tilde{e}_{Rk}}} \right) \le 0.15 \quad \text{for each } k = 1, 2, 3,     (14)

or

\left[ \sum_k |λ_{12k}|^2 \left( \frac{100\ \mathrm{GeV}}{m_{\tilde{e}_{Rk}}} \right)^2 \right]^{1/2} \le 0.15 .     (15)

As in Eqs. (14) and (15), we adopt the standard 100 GeV scaling of the sfermion masses throughout. One way of implementing the SCD convention consists in setting all but one λ_12k in Eq. (15) to zero, thus effectively obtaining Eq. (14). The strong version produces bounds that are obviously more conservative than the weak one, so we will display the form (14) every time we place a new bound on a coupling, with the caveat that the reader can interpret it in the form of Eq. (15). We will state explicitly when we make an exception to this rule.

If the initial and final states of the four-fermion process involve the same vertices, the R-breaking couplings enter the process only through their modulus squared. In the literature it is then customary to express the corrections to the SM as functions of simplified quantities: r_ijk(l̃_i) (but also r_ijk(l̃_j), r_ijk(l̃_k)) or r'_ijk(f̃_i) (but also r'_ijk(f̃_j), r'_ijk(f̃_k)). In light of what we have explained above, we want to make clear that these are symbols that stand in full for

r_{ijk}(\tilde{f}) = \frac{1}{4\sqrt{2}\, G_F}\, \frac{|λ_{ijk}|^2}{m^2_{\tilde{f}}} ,     (16)

where the scaling factor 4√2 G_F comes from the general form, Eq. (4). Thus, they admit a sum over the flavors of the exchanged sfermion f̃_i (or f̃_j, f̃_k) which, depending on the case, can be a slepton (l̃_i) or a squark (q̃_i). It is also clear that the value of the mass of the exchanged sparticle is always left unknown. If, instead, the SUSY process involves different vertices, then the correction to the standard model is expressed as a function of a product of couplings, of the kind λ_ijk · λ_rsk (equivalently, λ'_ijk · λ'_rsk or λ_ijk · λ'_rsk). In these cases too, a sum over k needs to be considered. Analyses that return one product as dominant are a common extension of the SCD.

We have decided to label the fermion (sfermion) generations by a number index i (or j or k) = 1, 2, 3 whenever the families are summed over, as in Eqs. (15) and (16), or when one of the indices is free to take any value, as in Eq. (14). But, for clarity's sake, if the bound involves just one single particular coupling we will label the generation by name, so that, for example, ẽ_R1 ↔ ẽ_R, ν̃_L2 ↔ ν̃_μL, ũ_L3 ↔ t̃_L, and so on.

We have mentioned above that implementation (14) of the SCD produces bounds that are more conservative. In most cases, though, a physical process cannot be expressed in terms of only one combination of couplings such as (16). The bounds from experiment are generally placed on a function of several such combinations,

F = F\left( r_{i_1 j_1 k_1}, r_{i_2 j_2 k_2}, \ldots \right) .     (17)

It is a common approach in the literature to set all the r's of Eq. (17) but one to zero, so as to place bounds on the surviving combination of couplings. Moreover, one term at a time of the combination is then assumed to dominate. It is this particular implementation of the SCD that we find excessively severe, as it reduces the dimensionality of the allowed regions of parameter space, thus missing any information on the combined action of different couplings embedded in the function F.
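The counting that underlies this discussion is easy to verify by brute enumeration: antisymmetry in i, j leaves 9 independent λ_ijk, the λ'_ijk number 27, and attaching one of the 5 possible exchanged sparticle families to each λ' combination gives the 135 parameters of Eq. (11). The tuple representation below is purely illustrative:

```python
from itertools import product

GEN = (1, 2, 3)

# Independent lambda'_{ijk}: all three indices free -> 27 couplings.
lam_prime = [(i, j, k) for i, j, k in product(GEN, repeat=3)]

# Independent lambda_{ijk}: antisymmetric in (i, j), so keep i < j -> 9.
lam = [(i, j, k) for i, j, k in product(GEN, repeat=3) if i < j]

# Each lambda' four-fermion operator of Eq. (11) comes with one of the
# 5 possible exchanged sparticles, giving 27 * 5 = 135 parameters.
N_SPARTICLES = 5
print(len(lam_prime), len(lam), len(lam_prime) * N_SPARTICLES)  # 27 9 135
```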
In the next two sections we show that allowing the full dependence on F does indeed give more information and in some cases also extends the allowed bounds on the couplings.

Leptonic case

In order to show how our approach works, we start with some classical examples in the leptonic case [3,5]. We begin with constraints required by universality in muon and tau decays, then take up the constraints from ν_µe, ν_ee, and ν̄_ee elastic scattering cross section measurements.

Muon and tau decays

Let us consider the two ratios R_τµ and R_τ, Eqs. (18) and (19), which are sensitive to violation of lepton universality. By comparing the tree-level effective Lagrangian of Eq. (13) with the SM, one can derive bounds on some of the λ-couplings. The SUSY processes that contribute to the decay (18) are shown in Figure 1. Those for (19) can be obtained by replacing j = 2 → 3 in Figure 1b. Besides, one also needs to consider the λ-dependence of the Fermi coupling constant G_F [2]. As is well known, G_F is experimentally determined from measurements of the muon lifetime. Therefore, when dealing with R-parity violating SUSY, G_F receives a correction from the SUSY processes that contribute to µ-decay, Figure 1b. The correction is given in Eq. (20) [3], where a sum over repeated indices is intended, as explained in Sec. 2.1; we have also used the notation introduced in Eq. (16). Taking into account the processes of Fig. 1, Eqs. (18) and (19) can be expressed in terms of their SM expressions and yield, to first order in the R-breaking couplings, Eqs. (21) and (22) [3], again with the conventions of Eq. (16). As explained in the discussion preceding and following Eq. (17), if we were to use the SCD at this point, we would consider one r-combination at a time and obtain a bound on each of them when the remaining couplings are put to zero. By using the measured values of R_τµ and R_τ [6] and the standard model values after radiative corrections ([13,14] and references therein) for R^SM_τµ and R^SM_τ, we would obtain at 2σ: |λ_{23k}| ≤ 0.063 (ẽ_{Rk}) and |λ_{12k}| ≤ 0.045 (ẽ_{Rk}) from R_τµ, and |λ_{23k}| ≤ 0.051 (ẽ_{Rk}) and |λ_{13k}| ≤ 0.048 (ẽ_{Rk}) from R_τ, where the dominant uncertainty is the one on the τ lifetime, and we have used the conventions introduced in the discussion preceding Eq. (14). In principle, though reasonably well motivated and not inconsistent, there is no theoretical justification for considering one coupling at a time, or one sum over families at a time. As Figure 2 shows, the full dependence on the couplings presents a richer structure. Figure 2a shows that Eq. (21) admits degeneracies in the couplings. When taken together, |λ_{23k}| and |λ_{12k}| can be taken arbitrarily large (Footnote 2), since they cancel each other. A similar picture holds for R_τ, as Eq. (22) has the same form as Eq. (21). Thus, an approach extended beyond SCD consists in trying to limit and reduce those degeneracies by combining different experiments that involve the same couplings. In particular, the measurement of the muon lifetime can be used to determine a first bound on the sum of couplings λ_{12k} when a right-handed charged slepton is exchanged, Eq. (20). The result is dependent on radiative corrections and on the renormalization scheme. Expressions for λ²_{12k}/m²_{ẽRk} can be derived in the on-shell, MS-bar and Novikov-Okun-Vysotsky (NOV) renormalization schemes [15]; they are given in Eq. (24), where the scheme dependence of ρ̂, sin²θ_W and ∆r can be found in Table 1.
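For orientation, the structure just described can be summarized as follows (a compact restatement consistent with the definitions in Eqs. (16) and (20) as described in the text, not a verbatim reproduction of the paper's equations):

```latex
% Muon decay fixes G_F, so the measured Fermi constant absorbs the
% slepton-exchange contribution (sum over k understood, as in Eq. (16)):
G_F^{\rm exp} \simeq G_F^{\rm SM}\,\bigl[\,1 + r_{12k}(\tilde e_{Rk})\,\bigr],
\qquad
r_{12k}(\tilde e_{Rk}) \equiv \sum_k
\frac{|\lambda_{12k}|^2}{4\sqrt{2}\,G_F\, m^2_{\tilde e_{Rk}}} .
```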
Depending on the renormalization scheme, we find the 1σ bounds on λ_{12k} obtained from the muon lifetime given in Eq. (25), and at 2σ those of Eq. (28).

Table 1: Analytic expressions for sin²θ_W, ∆r and ρ̂ in the on-shell, MS-bar and NOV renormalization schemes. We use the following average values at 1σ [6]: … (14); ∆r|_MS-bar = 0.06962(12). The ellipsis indicates non-leading-order terms that can be found in [16].

We can therefore decide to use this bound to limit the degeneracies present in R_τµ and R_τ. Figure 2b shows the 2σ-allowed region of parameter space in λ_{23k}, λ_{12k} and λ_{13k} when the PDG2008 data for R_τµ, R_τ and the muon lifetime are combined. We use the MS-bar bound, Eq. (28), on λ_{12k}. One can see that the λ-parameters undergo an extension of up to a factor of two with respect to the values obtained using the SCD. When scaled to the masses of the exchanged sleptons, the 2σ region shown in Fig. 2b can be enclosed in a box whose size is given in Eq. (29). The 2σ bounds are given in Eqs. (30) and (31). The other schemes yield similar values. Note that the new bound on λ_{13k} does not include λ_{133}, since this coupling is separately and more severely bounded by the ν_e mass [2]. Similarly, there exist strong bounds on many pair-wise products of the couplings above, coming from experimental bounds on decays disallowed in the SM [5,17]. However, there are always combinations of λ_{12k}, λ_{13k} and λ_{23k} that are still unconstrained by the bounds on products. This comment applies to all the cases that we are considering. To list the detailed conditions would take us beyond the aim of this paper, so we leave them implicit.

Neutrino-electron scattering

We now turn to the flavor-diagonal neutrino-electron scattering processes ν_µ + e → ν_µ + e and ν_e + e → ν_e + e. In the ν_µe and ν_ee examples, the energy is always large enough to neglect the electron mass in the kinematics, while in the ν̄_ee case, the neutrino energies are in the MeV range, which requires us to keep the electron mass effects in the kinematics. The individual left- and right-handed couplings or, equivalently, axial and vector couplings, have been extracted individually in the experiments on the ν_µe case [18], making the analysis of bounds on R-parity violating parameters quite straightforward; we begin with this process. Neglecting the terms proportional to the electron mass, the total cross sections for ν_µ + e → ν_µ + e and ν̄_µ + e → ν̄_µ + e can be written as

  σ(ν_µ e) = (G_F² s/π) [g_L² + g_R²/3],    (32)

  σ(ν̄_µ e) = (G_F² s/π) [g_L²/3 + g_R²],    (33)

where the direct-channel Mandelstam variable s = 2m_e E_ν in the target-electron rest frame. We can write g_L and g_R in terms of the weak angle and the R-parity violating parameters (Figure 3) as in Eqs. (34) and (35), where x_W ≡ sin²θ_W, and g_L^SM = x_W − 1/2 and g_R^SM = x_W are the SM expressions for the L and R couplings. Since the experimental averages, with errors, are reported by the Particle Data Group [6] for g_A = g_L − g_R and g_V = g_L + g_R, we use these forms to obtain the bounds on the R-parity violating couplings, Eqs. (36) and (37). Including λ_{12k} (ẽ_{Rk}), Eq. (28), in the 2σ joint bounds, we find the corresponding upper bound, Eq. (38). To put Eq. (38), a bound on the sum of squares of couplings divided by scaled masses, in the context of other bounds, we can use the bound from [17], updated to 2008 data [6], Eq. (39). The bound in Eq. (39) combines the experimental bound on the decay rate for τ → eee with its representation in the "double coupling dominance convention" for R-parity violating trilinear couplings [17].
The representation of the decay involves the sum of squares of five coupling products, and the convention, in this case, serves to place the weakest bound on each product by assuming all the others are effectively zero. This example, though not in line with our restriction to flavor-conserving processes, allows us to discuss the implications for sfermion masses that follow from R/ bounds. Because three unknown masses appear in Eqs. (38) and (39), what one can say about the implications of the bounds for the λ parameters is limited, even if the individual R/ couplings entering the two equations are the same. Concisely put, one can say that whenever the sneutrino mass satisfies the condition of Eq. (40), the bound of Eq. (38) is more restrictive than that of Eq. (39), which then becomes irrelevant. If, instead, Eq. (40) is not satisfied, the above bounds have to be considered together, because the hyperbola described by Eq. (39) will cut through the elliptical region defined by Eq. (38), and part of the region allowed by Eq. (38) will be prohibited by Eq. (39). Unless we invoke some theoretical prejudice about the relative mass scales, we cannot conclude more than that. Only if one of the inequalities includes a lower bound does the combination of bounds lead to a general condition on the masses. We will see an illustration of this situation below, when considering the combined bounds on ν_ee and ν̄_ee scattering at 1σ. Turning to the implications of data on the scattering processes ν_e + e → ν_e + e and ν̄_e + e → ν̄_e + e [19,20,21], we must consider both high energy data, E_ν ≫ m_e, and low energy data, E_ν ∼ m_e. General, model-independent analyses of bounds on non-standard interactions from these and related neutrino and electron data have recently been carried out for both non-universal and flavor-changing new physics interactions [22,23,24]. We focus here on the bounds on R/ trilinear coupling parameters provided by flavor-diagonal elastic ν_ee accelerator data at tens of MeV [20] and elastic ν̄_ee reactor data at several MeV [19]. The LSND Collaboration provides a measurement of the total cross section for elastic scattering of electron neutrinos off electrons. Assuming that the final-state neutrinos are also electron-type, we can use their reported value and the general expression for left-handed neutrinos scattering off unpolarized electrons to set a limit. For a (V, A) four-fermion interaction, the differential cross section reads

  dσ/dT = (2G_F² m_e/π) [g_L² + g_R²(1 − T/E_ν)² − g_L g_R m_e T/E_ν²],    (41)

and the total cross section, Eq. (42), follows by integrating over the allowed range of T, where E_ν is the neutrino energy in the rest frame of the target electron, and T is the kinetic energy of the recoil electron (Footnote 3). The ν̄_ee cross sections follow by interchanging g_L and g_R in Eqs. (41) and (42). When E_ν ≫ m_e, as in the case of the LSND experiment, m_e/E_ν is negligible, and the expression for the cross section simplifies to the familiar high-energy form. Including the R/ trilinear parameters in the expressions for the g_L and g_R coupling coefficients, we find Eqs. (43) and (44), where we have considered a SUSY process like the one depicted in Figure 3b, in which ν_2 ≡ ν_µ has to be replaced by ν_1 ≡ ν_e, λ_{2j1} → λ_{1j1}, with j = 2, 3, while the λ_{12k}-dependence is given by the correction to G_F, Eq. (20). In our study of the bound on r_{12k}(ẽ_{Rk}) that follows from the precision measurement of muon decay and the renormalized expression for the muon decay formula, we found the bounds of Eqs. (25) and (28), respectively, at the 1σ and 2σ C.L.
The corresponding values of r_{12k}(ẽ_{Rk}) are so small that this term can be dropped from further discussion. The coupling coefficient g_L then has its SM value, and g_R is modified from the SM value by the terms that depend on λ_{121} and λ_{131}. Referring to Eqs. (42) and (44), we find the bound on the region of trilinear couplings we are after, Eq. (45), at 2σ. Before discussing the tie-in of Eq. (45) with other limits, we look next at the independent limits set by the results for ν̄_e + e → ν̄_e + e from reactor data. In this case, the electron mass-dependent terms are important and must be kept. The cross section expression in Eq. (42) is modified by the interchange of g_L and g_R for application to the ν̄_ee case.

ν̄_e + e → ν̄_e + e

The highest-statistics experiment on ν̄_e + e → ν̄_e + e is still that of Reines, Gurr and Sobel [19]. The results are presented as dimensionless factors times the SM charged-current, V − A expression. The cross section thus calculated is a function of the R/ parameters, which enter through the coupling g_R^{νee}, Eq. (44). The theoretical expressions for electron-neutrino and antineutrino scattering involve the same R/ couplings. Thus, in the spirit of our multi-parameter, multi-experiment approach, we can combine data from the LSND [20] and Irvine [19] results for both ∆T bins in a way similar to what we did for R_τµ and R_τ in Section 3.1. The resulting constraint at the 2σ level is given in Eq. (46). At first glance it is surprising that the ν̄_e data, with larger uncertainty, produce tighter constraints than the ν_e data. The source of the added resolving power is the g_L g_R term in the cross section expressions, which plays a significant role in the low-energy analysis and increases the sensitivity to the variation with respect to the R/ parameters. Though the bound of Eq. (46) is consistent with zero at 2σ, at the 1σ level, given current values for g_L and g_R, it is not. This in itself is not of special significance, but it affords the opportunity to illustrate the added implications when "new R/ physics" is needed to fill a gap between the SM and experiment. The joint bound from LSND and Irvine at 1σ is given in Eq. (47); projected onto each parameter one finds Eq. (48). The smaller one mass becomes, the larger the other must be to satisfy the inequalities. This mass information can only be obtained if the strict SCD approach is relaxed, as we have done here. The preceding discussion, summarized in Fig. 4, is offered to illustrate the added power that multi-parameter analysis provides to probe R/ parameters. Experiments delivering high-statistics data at energies of an MeV or so to study ν̄_ee scattering would sharpen the picture, clarifying the possible role of R/ SUSY in this sector of neutrino physics. Here we are considering only low-energy processes, where the four-fermion effective interactions apply, but at high energies the nonlocal effects of the exchanged particle must be included, directly probing the sfermion masses. This possibility is afforded by e⁺e⁻ → νν̄γ results from LEP [28] and, in the future, possibly 100 GeV-range ν_µe → ν_µe and ν_µe → ν_eµ scattering experiments such as those proposed by NuSOnG [29]. This concludes our exploration of the multi-parameter effects in purely leptonic processes. Next we consider some important constraints from semi-leptonic physics.

Semi-leptonic case

When R-parity violating interactions are taken into account, charged-current and neutral-current interactions generally involve more than one coupling at a time, and in some cases these couplings can be large and cancel each other.
The lesson we take from the leptonic case is that such degeneracies can be removed by considering a subset of experiments characterized by the same R-parity couplings. One then bounds the couplings by considering the experimental uncertainties on this subset altogether. This is even more evident when we analyze processes that involve the semi-leptonic couplings λ′_{ijk} of Eq. (11). Contrary to the leptonic and hadronic cases, the couplings λ′_{ijk} are not required by gauge invariance to have any symmetry in their indices. As a consequence, the number of effective couplings entering the Lagrangian is much greater than those appearing in Eq. (13), as we have mentioned in Section 2. There are thus more processes that must be used simultaneously to bound the couplings. What this also means is that, due to the amazing overall accuracy of the SM predictions and the great number of tests, there are many more ways to cut down the allowed regions of parameter space. As we will see in the following standard examples, when the number of experiments from which we can draw bounds on a particular coupling increases, the bound on the coupling tends to approach the one obtained under the SCD.

Universality in pion and tau decay

In the case of semi-leptonic couplings, we can obtain behavior similar in nature to the one depicted in Fig. 2a. The ratio of Eq. (49) would give, in the SCD, the 2σ bounds |λ′_{31k}| ≤ 0.092 (d̃_{Rk}) and |λ′_{21k}| ≤ 0.032 (d̃_{Rk}). Here, again, the uncertainty on the τ lifetime is comparable in magnitude to the one on the branching fraction to pions, and has to be taken into account. As in the leptonic case, the simultaneous presence of both couplings introduces a two-fold degeneracy. Such a degeneracy can be removed by considering the ratio R_π of Eq. (50) [3]. For the purpose of illustrating our multidimensional approach, it is convenient in this case to follow the restriction mentioned in [3], so we use Eq. (50) to effectively place two alternative 2σ bounds (Footnote 5), Eqs. (51) and (52), shown in Fig. 5a. The allowed region, rescaled to the masses of the exchanged squarks, can be enclosed in a box of size {|λ′_{31k}|(d̃_{Rk}), |λ′_{21k}|(d̃_{Rk})} ≤ {0.098, 0.045}. The resulting 2σ bound on λ′_{31k}, Eq. (53), is exactly equal to the one obtained by SCD.

Unitarity of the CKM matrix and forward-backward asymmetry

The Cabibbo-Kobayashi-Maskawa (CKM) matrix elements are experimentally determined by comparing the rates of decays that involve quarks in the initial state to the rate of muon decay. In general, nuclear beta decay is used to determine the value of |V_ud|, while the rates for s → ulν_l and b → ulν_l in K and charmless B decays are used to determine |V_us| and |V_ub|. The R-breaking processes involved in these decays are shown in Figure 6. The unitarity constraint can be imposed on the CKM matrix elements, together with the effective Lagrangian of Eq. (11) and a similar one, constructed from Eq. (10), involving the product of different couplings. One gets Eq. (54) [5], which at leading order in R-parity breaking becomes Eq. (55), where cos(∆θ_us^k) ≡ cos(θ_us + θ_{12k} − θ_{11k}) and cos(∆θ_ub^k) ≡ cos(θ_ub + θ_{13k} − θ_{11k}) are the relative phases between the CKM matrix elements and the complex R-parity violating couplings. Using Eq. (55) we can place bounds on the λ′ couplings involved by separating the right- and left-hand sides. One can substitute the most recent experimental determination of the central values of the CKM matrix elements on the right, and use the errors on the unitarity bound on the left at the desired level of precision.
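Only as an orientation to the structure just described, the leading-order relation can be sketched as follows; the paper's actual Eq. (55) is not reproduced in this extraction, so the coefficients and signs below are schematic placeholders, not the published result:

```latex
% Schematic structure of the first-row unitarity test with R-parity violation
% (O(1) coefficients c_i and signs are schematic; cf. Eq. (55) of the text;
%  a sum over k is understood throughout):
|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 \;\simeq\; 1
  \;+\; c_1\, r'_{11k}(\tilde d_{Rk}) \;+\; c_2\, r_{12k}(\tilde e_{Rk})
  \;+\; c_3\, |V_{us}|\,
      \frac{|\lambda'^{*}_{11k}\lambda'_{12k}|}{4\sqrt{2}\,G_F\,m^2_{\tilde d_{Rk}}}
      \cos\Delta\theta_{us}^{k}
  \;+\; c_4\, |V_{ub}|\,
      \frac{|\lambda'^{*}_{11k}\lambda'_{13k}|}{4\sqrt{2}\,G_F\,m^2_{\tilde d_{Rk}}}
      \cos\Delta\theta_{ub}^{k} .
```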
In the literature, Eq. (55) is treated in the SCD, with the additional constraint that the R-parity couplings and CKM matrix elements are treated as real. With these assumptions we find that the most recent data [6] result in the following bounds at 2σ: |λ′_{11k}| ≤ 0.027 (d̃_{Rk}) and |λ_{12k}| ≤ 0.028 (ẽ_{Rk}). In Eqs. (54) and (55) the notation of Eq. (16), expressing the sums of moduli squared, has been used, together with the correction to G_F from the muon lifetime, Eq. (20). As can be seen, the full dependence on the CKM and R-parity violating phases is also indicated. We have adopted the Wolfenstein parametrization [31] to express the CKM matrix elements. In this parametrization V_us is real, while V_ub is not. Nonetheless, measurements of the absolute values of the CKM elements give |V_ub| ∼ 0.004, approximately two orders of magnitude smaller than |V_ud| ∼ 0.974 and |V_us| ∼ 0.226. Thus, the behavior of Eq. (55) is almost independent of λ′_{13k}, as |V_ub| can be neglected. Taking into account the fact that V_us is real and |V_ub| is tiny, and neglecting for the moment the SUSY correction to G_F, Eq. (55) implies 2σ bounds on a three-dimensional parameter space spanned by |λ′_{11k}|, |λ′_{12k}| and cos(θ_{12k} − θ_{11k}). λ′_{11k} can be bounded by π-decay, Eq. (50). λ′_{12k} can be bounded by the forward-backward (FB) asymmetry in fermion pair production reactions e⁻e⁺ → f f̄, which we treat in detail in the next subsection. The 2σ-bound region is shown in Fig. 5b. Note that, contrary to the other cases in this paper, here the index k has to be common to the three axes in the picture. One can see that, in spite of the fact that the phases are allowed to take on any values, the λ′ parameters are allowed a slightly larger region when cos(∆θ_us^k) = −1. We come back to this point at the end of this section.

Forward-backward asymmetry

The forward-backward asymmetry in fermion pair production has been studied at PEP, PETRA, TRISTAN, LEP, and SLC. In order to bound λ′_{12k} (d̃_{Rk}) we need charm production, e⁻e⁺ → cc̄. The SUSY diagram that contributes to this process is depicted in Figure 7a, with u_1 → u_2, λ′_{11k} → λ′_{12k}. We assume that the right-handed down squark mass is far enough above the Z-pole that we can retain our effective Lagrangians, Eqs. (11) and (13), and use the data in [6], dominated by Z-pole measurements. The SM expression for the charm FB asymmetry is given in Eq. (56) [32], where g_{L,R} are the usual chiral couplings, Q_c = 2/3 is the charge of the charm quark, and the remaining quantity defined there parametrizes the γ−Z interference. The R-parity contribution is obtained by a shift of the chiral couplings, so the correction to the SM reads, at lowest order, as in Eq. (60), where r has to be calculated at the Z-pole. By using the standard SU(2) × U(1) expressions for g_L and g_R, and adopting the MS-bar scheme value of sin²θ_W for definiteness, one gets the values: g_L^e = −0.2688, g_R^e = 0.2312, g_L^c = 0.3459, g_R^c = −0.1541. We obtain the bound at 2σ given in Eq. (61). As mentioned above, Fig. 5b shows that allowing the λ′_{11k} and λ′_{12k} couplings to have opposite complex phases (cos ∆θ_us^k = −1) slightly extends the allowed regions of parameter space with respect to the SCD. Furthermore, such an extension becomes significant when we also introduce the leptonic coupling λ_{12k}, bounded by the experimental limits on the muon lifetime in the MS-bar scheme, Eq. (28).
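As a quick numerical cross-check of the chiral couplings quoted above, the following sketch uses only the tree-level SU(2) × U(1) relations g_L = T₃ − Q x_W and g_R = −Q x_W, with the x_W value implied by g_R^e in the text (this is an aside for the reader, not code from the paper):

```python
# Tree-level chiral couplings g_L = T3 - Q*xW, g_R = -Q*xW,
# with xW = sin^2(theta_W) = 0.2312 (the MS-bar value implied by g_R^e above).
xW = 0.2312

def chiral_couplings(T3, Q, xW=xW):
    """Return (g_L, g_R) for a fermion of weak isospin T3 and electric charge Q."""
    return T3 - Q * xW, -Q * xW

gLe, gRe = chiral_couplings(T3=-0.5, Q=-1.0)   # electron
gLc, gRc = chiral_couplings(T3=+0.5, Q=+2/3)   # charm quark

print(f"g_L^e = {gLe:+.4f}, g_R^e = {gRe:+.4f}")  # -0.2688, +0.2312
print(f"g_L^c = {gLc:+.4f}, g_R^c = {gRc:+.4f}")  # +0.3459, -0.1541
```

The output reproduces the four values quoted in the text to the stated precision.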
The 2σ-allowed region in λ′_{11k}, λ_{12k} and λ′_{12k} obtained by the simultaneous combination of the data from CKM unitarity, the FB asymmetry in charm production and muon decay in the MS-bar renormalization scheme is shown in Fig. 5c. It is enclosed in a box whose size is given in Eq. (63). This striking situation, where three parameters are all allowed to be non-zero and larger than their SCD values, is obscured when only one parameter at a time is considered, i.e. when SCD is assumed uniformly. In principle the second row of the CKM matrix could be used in a similar fashion to bound |λ′_{21k}| and |λ′_{22k}|, Eq. (65), where cos(∆θ_cd^k) ∼ cos(θ_{21k} − θ_{22k}) in the Wolfenstein parametrization. |λ′_{21k}| (d̃_{Rk}) can be bounded by pion decay, Eq. (50). The dependence on λ_{12k} (ẽ_{Rk}) comes from the bounds on the universality of the Fermi constant in muon decay. We use, again, the MS-bar bound at 2σ, Eq. (28). The weakest bound consistent with both these constraints is obtained when cos(∆θ_cd^k) = −1 and is given, at 2σ, in Eq. (66) (Footnote 6: As in Footnote 5, λ′_{111} is tightly bounded by neutrinoless double-beta decay [30]). A caveat is necessary at this point, in the sense that Eq. (65) is derived for processes involving the production of charmed particles in deep inelastic ν_µ-nucleon scattering, with the assumption of lepton flavor conservation. This is the standard textbook process used for the determination of the CKM couplings |V_cd| and |V_cs| [33]. Such a choice is reflected by the i = 2 index of the λ′_{ijk} couplings entering Eq. (65). The use of recent PDG2008 data for the uncertainty affecting the unitarity constraint and for the central values of the CKM matrix elements is not fully consistent with this idealized picture. The most recent and precise values given in [6] are obtained through a weighted average of different processes, some of which involve external particles of the first or third lepton generation. It is clear that the robustness of the bound given in Eq. (66) depends strongly on the amount and nature of the weighting involved. Because such detailed knowledge and extensive analysis in this regard goes beyond the purposes of this paper, we limit ourselves to presenting the bound above, recommending caution in its interpretation. As we will see in Section 4.4, D⁰ decay alone places bounds on the same (sums of) couplings. We consider those bounds more robust. Finally, Eq. (55) has the nice feature that it involves the phases of the R-breaking couplings. In general such phases are associated with CP-violating effects. So we can envisage a strategy that would combine additional experiments in the CP-violating sector with those that can place bounds on the moduli of R/ couplings, like the two above, so that a more thorough restriction of parameter space takes place. However, we did not find in the literature [5], nor were we able to create, a specific example that would help us bound the phases of the couplings involved in this case, namely the product λ′*_{11k} λ′_{12k} (d̃_{Rk}), in terms of CP-violating processes. Some asymmetries in fermion pair production at leptonic colliders (l⁺l⁻ → f_J f̄_{J′}) on and above the Z-pole [34] can be expressed in terms of non-trivial combinations of R-breaking phases, Eq. (67), with obvious summation over dummy indices. A detailed and comprehensive study of such processes would probably shed light on the phenomenological constraints on CP-violating phases.
Nonetheless, due to the great number of couplings involved, such a study would have to take into account a large number of interactions, many of which cannot be treated as "low energy" processes. This clearly exceeds the purposes of this paper, requiring an extensive, separate investigation. As anticipated above, and shown in Fig. 5b, we have tried to constrain the phase difference cos(θ_{11k} − θ_{12k}) by using Eq. (55), where all the absolute values are bounded by other experiments. We have also tried to constrain the phase cos(θ_{21k} − θ_{22k}) with the second row, Eq. (65). We found no handle to constrain these phases, as any possible values are allowed by CKM unitarity.

Atomic parity violation

We can follow the same technique, and use the bounds on λ′_{11k} obtained from CKM unitarity and the bounds on λ_{12k} obtained in the MS-bar renormalization scheme to place bounds on λ′_{1j1} from atomic parity violation (APV). In the SM, Z-exchange between the electrons and the atomic nucleus leads to parity-violating transitions between particular atomic levels. This has been observed, for example, in the 6S → 7S transitions of ¹³³₅₅Cs [35,36]. The SM contributions are encapsulated in the weak charge Q_W^SM, defined in Eq. (68) [6], where Z is the atomic number, A the atomic mass number, and the coefficients C_1(i) are given at tree level by

  C_1(u) = −1/2 + (4/3) x_W,  C_1(d) = 1/2 − (2/3) x_W.    (69)

The corresponding experimental quantities can be expressed in terms of the SM contributions and the R/ processes depicted in Figure 7, Eq. (70) [3], where we have assumed the R-parity correction to the Fermi constant, Eq. (20). The most recent determination of the difference δQ_W = Q_W^exp − Q_W^SM for cesium can be found in [6], and its expression in terms of R/ couplings is given in Eq. (71). Again, we can first determine the 2σ bounds on the semi-leptonic couplings that one can obtain by use of the SCD: |λ′_{11k}| ≤ 0.051 (d̃_{Rk}) and |λ′_{1j1}| ≤ 0.024 (ũ_{Lj}). When both semi-leptonic couplings are considered, the region of parameter space that is bounded is two-dimensional and its shape is similar to that of Fig. 2a. The dependence of δQ_W on the leptonic coupling λ_{12k} (ẽ_{Rk}) due to the G_F correction introduces an additional direction in parameter space, which becomes three-dimensional. µ-decay in one of the renormalization schemes described in Sec. 3 can be used to place bounds on λ_{12k}, while pion decay, Eq. (50), can be used to place bounds on λ′_{11k}. The weakest bound is obtained in the MS-bar scheme, Eq. (28). As we have explained extensively in Section 4.1, by simultaneously considering these three processes we can delimit a 2σ-bounded region of parameter space, which we present in Fig. 8a.

Figure 8: a) 2σ bound region on λ′_{1j1} (ũ_{Lj}), λ_{12k} (ẽ_{Rk}) and λ′_{11k} (d̃_{Rk}) from APV, µ-decay in the MS-bar renormalization scheme and R_π combined.

The allowed region can be enclosed in a box of size {0.055, 0.043, 0.059}, thus allowing only a marginal extension with respect to the SCD for the λ′_{11k} and λ_{12k} parameters, but roughly a factor of two for the λ′_{1j1} parameter. The 2σ bound on λ′_{1j1} we gather from the combined analysis is given in Eq. (72) (Footnote 7: See Footnote 5).

D decays

For our last examples, let us now consider D- and D_s-meson decays.
We can implement our procedure of taking processes that involve one or more of the couplings we have bounded in the previous cases, together with others which are at the moment unbounded, and then use the known bounds to restrict the boundaries of the allowed multidimensional parameter space, obtaining bounds on the remaining couplings. Again we use the averages of the experimental data as reported in PDG2008, and present bounds at the 2σ level. Turning to the D_s⁻ → ℓ⁻ + ν̄_ℓ decays for further constraints, we can bound λ′_{32k} in the same way, starting from the corresponding ratio, where R^SM_{D_s⁻} = 9.76 accounts for the phase-space suppression. One would get, by SCD use, |λ′_{22k}| ≤ 0.27 (d̃_{Rk}) and |λ′_{32k}| ≤ 0.34 (d̃_{Rk}). The combined analysis of D⁰ decay, Eq. (73), and Eqs. (75) and (61) yields the 2σ region depicted in Fig. 8b, whose margins are given by the box {0.140, 0.034, 0.359}, with only a slight extension beyond the bounds obtained by assuming SCD. This translates to the 2σ bound of Eq. (76). This bound is, again, roughly ten percent stronger than that obtained by SCD [38].

Summary and Conclusions

In this work, we limited our attention to experimental results from a set of standard leptonic and semi-leptonic processes and allowed the R/ parameters to vary together, constrained by data at the 2σ level, to place bounds on their values. We compared the resulting bounds with those obtained from the long-standing procedure of allowing only one parameter to be non-zero at a time, which has produced a long, useful list of bounds in the literature over the past twenty years or so. Using our different approach, we showed that a joint analysis of different experiments involving the same subset of couplings can explore regions of parameter space where the bounds are weakened compared to the values set by the SCD procedure. More importantly, the 2σ bounds on individual couplings obtained by the combined approach are generally different from those obtained by strict SCD. This is due to the fact that almost all processes can be expressed in terms of more than one parameter, thus introducing correlations between the couplings and degeneracies in the allowed regions of parameter space. The combined-experiments approach helps eliminate these degeneracies and at the same time maintains the full parameter-space structure. These features provide qualitatively different information from that available in the literature, whose results are almost exclusively limited to isolating parameters and considering them one at a time. New bounds obtained with our approach are given in Table 2, where we present a summary of the results described in the preceding sections. In the ν̄_ee case, we found that the requirement that certain trilinear couplings be non-zero, combined with simultaneous constraints involving the same couplings but different sfermion masses, allowed us to extract hierarchical relationships among these masses. We illustrated this situation in Fig. 4, where the 1σ-allowed area in the space of "mass ratios" is displayed, and in the paragraphs following Eq. (48), where individual 1σ and 1.65σ mass bounds are shown. To the best of our knowledge, this is the first effort, in the context of purely phenomenological bounds on R/ parameters, to find constraints among the sfermion masses.
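To make the mass-relation mechanism concrete, here is a schematic rendering of the situation described in Section 3 for ν̄_ee (the functional forms mimic Eqs. (38)-(40) as described in the text; the constants a₁, a₂, b and the specific index assignments are placeholders, not the published values):

```latex
% An "elliptical" scattering bound that develops a non-zero lower edge a_1 > 0
% at 1 sigma, scaled by the selectron mass:
a_1 \;\le\; \sum_{j}\,|\lambda_{1j1}|^2
      \Bigl(\frac{100\,\text{GeV}}{m_{\tilde e_{Rj}}}\Bigr)^{2} \;\le\; a_2 ,
\qquad
% and a "hyperbolic" rare-decay bound on products of the same couplings,
% scaled by a different (sneutrino) mass:
|\lambda_{1j1}\,\lambda_{1j'1}|\,
      \Bigl(\frac{100\,\text{GeV}}{m_{\tilde\nu}}\Bigr)^{2} \;\le\; b .
% A non-zero lower edge a_1 then forces m_{\tilde\nu} upward relative to
% m_{\tilde e_{Rj}}: the smaller one mass becomes, the larger the other must be.
```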
In conclusion, we can say that, overall, a richer, more complex picture of parameter space and, in most cases, weaker bounds on R/ parameters result from a multi-parameter, multi-process analysis, compared to the analysis of each parameter in isolation. This conclusion is non-trivial when, as is the case in the expressions we consider, parameters enter as sums of squares, suggesting that dropping all parameters but one provides the most conservative limit on each. Nonetheless, since we found that the allowed ranges of parameters were larger by at most a factor of two, we conclude that the SCD approach is a reliable order-of-magnitude estimate of the upper bounds on the individual parameters. At the same time, we conclude that fuller analyses, as exemplified here, are needed to search for hints that data are showing R-parity violation in a region of parameter space where several parameters are non-zero. Finally, such analyses are needed to explore relations among sparticle masses, which requires disentangling couplings and masses by comparing theory with data.

Table 2: Summary of constraints on λ values, with their corresponding mass scale in parentheses. The "Experiment" column gives the measured quantities that are the source of the multi-variable bound, the "Bound" column. The "Corr. λ" column gives the most directly correlated λ determining the constraint, while the final column, "SCD bound", gives the value of the bound when all the relevant λ couplings but the one in the first column are set to zero. The note "none" in a column means that only one coupling appears in the relevant expression to compare to experiment. The note "NA" means that there is no other coupling to set to zero for the case in this row.

Acknowledgements

… Azar Mustafayev for discussions about models of SUSY mass patterns. We acknowledge the use of the program JaxoDraw [39].
2009-07-17T20:10:22.000Z
2009-03-01T00:00:00.000
{ "year": 2009, "sha1": "435ef7107396237903cf1125ac55d430a994f15e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0903.0118", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d85e0554ff3720e720a3e23ad6bfcd716b3f1268", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
268022859
pes2o/s2orc
v3-fos-license
Bactericidal Biodegradable Linear Polyamidoamines Obtained with the Use of Endogenous Polyamines

The work presents the synthesis of a series of linear polyamidoamines by polycondensation of sebacoyl dichloride with endogenous polyamines: putrescine, spermidine, spermine, and norspermidine (a biogenic polyamine not found in the human body). During the synthesis, carried out via an interfacial reaction, hydrophilic, semi-crystalline polymers with an average viscosity molecular weight of approximately 20,000 g/mol and a melting point of approx. 130 °C were obtained. The structure and composition of the synthesized polymers were confirmed based on NMR and FTIR studies. The cytotoxicity tests performed on human fibroblasts and keratinocytes showed that the polymers obtained with spermine and norspermidine were strongly cytotoxic, but only at high concentrations. All the other examined polymers did not show cytotoxicity even at concentrations of 2000 µg/mL. Simultaneously, the antibacterial activity of the obtained polyamides was confirmed. These polymers are particularly active against E. coli, and virtually all the polymers obtained demonstrated a strong inhibitory effect on the growth of cells of this strain. Antimicrobial activity of the tested polymers was also found against strains such as Staphylococcus aureus, Staphylococcus epidermidis, and Pseudomonas aeruginosa. The broadest spectrum of bactericidal action was demonstrated by the polyamidoamines obtained from spermine, which contains two amino groups in the repeating unit of the chain. The obtained polymers can be used as a material for forming carriers of drugs and other biologically active compounds in the form of micro- and nanoparticles, especially as a component of bactericidal creams and ointments used in dermatology or cosmetology.
Introduction

The threat of contamination with pathogenic microorganisms is a serious problem not only in the medical and healthcare industry but also in other branches of social activity related to the production, transport, and storage of food. Unfortunately, this problem is growing year by year, mainly due to the excessive widespread use of biocides and the related increasing presence of bacteria resistant to currently used antibiotics and antiseptics [1,2]. The World Health Organization (WHO) has declared that antimicrobial resistance (AMR) is one of the top 10 global public health threats facing humanity [3,4]. Post-implantation bacterial infections are considered a huge problem in modern surgery. Of the total number of clinical complications related to the implantation procedure, the main role is assigned to infections related to microbiological contamination of the implanted biomaterial [5,6]. The threats associated with this phenomenon, intensified by the emergence of AMR and the limited ability of antibiotics to eradicate biofilms, force us to undertake comprehensive efforts regarding the use of new alternative therapies, antibacterial biomaterials, and biomaterial-assisted delivery of non-antibiotic therapeutics, such as bacteriophages, antimicrobial peptides, and antimicrobial enzymes [7,8]. Synthetic polyamines containing amide groups, with a higher molecular weight than the natural ones, are particularly interesting due to their wider possibilities of biomedical application, including as substances with strong anti-cancer properties [9]. These compounds are mainly obtained by Michael-type polyaddition with bisacrylamides [10]. Polyamidoamines are particularly interesting in this respect due to their biodegradability and useful physicochemical properties. They are extensively used as carriers in drug and gene delivery. These polymers are synthesized in the form of a linear polymer or as a polyamidoamine (PAMAM) dendrimer [11], which is used as a carrier of many drugs in anti-cancer and targeted drug therapies and as a carrier of contrast and marking substances used in computed tomography (CT) and magnetic resonance imaging (MRI) techniques. Unfortunately, these highly cationic polymers exhibit quite strong toxicity, limiting their applications. Significantly less toxic linear polyamidoamines (PAAs) are the aza-Michael polyaddition products of primary monoamines or bis-sec-amines with bisacrylamides [12,13]. These polymers are particularly useful as anti-metastatic drugs [14] and as intracellular non-viral carriers of DNA [15]. The PAAs obtained so far by polyaddition of amines and bisacrylamides are mostly polymers that are soluble in water or which form hydrogels. Due to the limited possibility of selecting monomers, their chains are always built of repeating units containing bis-amide derivatives and tertiary amines. Due to such a structure, these polymers are most often difficult to degrade under the conditions of the human body, and some of the degradation products are not biocompatible [12]. For this reason, various attempts have been made to overcome this problem. Monsalve et al.
(2010) synthesized several polyamidoamines with very low molecular weight in the reaction of ethyl acrylate with a diaminopropane derivative using lipase as a catalyst [16]. In another study, the use of hydroxyproline was found to be an interesting method for the synthesis of PAAs [17]. However, improvement of the biocompatibility and biodegradation efficiency of this type of polymer may mostly be obtained by using, for their synthesis, endogenous polyamines (spermine and spermidine) and derivatives of organic acids originating from the human body. This is the main goal of the presented study. A new class of polyamidoamines, containing repeating units in the chain composed of derivatives of endogenous amines and sebacic acid (a substance normally occurring in the form of esters in lipids found in sebum and other skin secretions), is presented in this study. Sebacic acid is also an active component of many biochemical pathways in humans [18,19].

Initial Attempts at the Synthesis of Polyamidoamines

According to the assumptions, a series of linear PAAs with an average molecular weight of not less than 5000 g/mol were obtained using the polycondensation reaction of endogenous polyamines with selected dicarboxylic acid derivatives. In the initial phase of the research, based on the experiments conducted, the optimal composition of the reaction mixture and the method of carrying out the planned reaction were selected. To obtain a linear polymer, prior protection of the secondary amine groups in the polyamines used as monomers is required. Due to the relatively high susceptibility to thermal decomposition of the polyamines selected for the reaction, especially their derivatives with protected secondary amine groups, as well as the difficulties in selecting a universal solvent for the monomers and the products obtained, it was necessary to carry out the polycondensation reaction according to the interfacial polymerization technique [20]. Moreover, based on preliminary tests, it was observed that the dicarboxylic acid derivative used in the polymerization should contain a fairly long aliphatic chain with at least five or six methylene groups. As a consequence, such PAAs are soluble in most traditional organic solvents and may be melted at a temperature much lower than their decomposition temperature.
Preparation of Monomers: Synthesis of Polyamines with Protected Secondary Amine Groups

To obtain diamines with blocked secondary amine groups, a three-stage modification process of the selected polyamines, norspermidine (N1-(3-aminopropyl)propane-1,3-diamine), spermidine (N1-(3-aminopropyl)butane-1,4-diamine), and spermine (N1,N4-bis(3-aminopropyl)butane-1,4-diamine), was carried out. In the first step, the primary amino groups in these compounds were blocked by reacting them with benzaldehyde to form imine bonds (R3R2C=NR1), which are reversible and easily hydrolyzed under acidic conditions. As a result of the condensation of the selected polyamines with benzaldehyde, the disappearance of the original signal coming from the protons of -H2NCH2CH2- and the formation of a new signal coming from the protons of the newly formed imine Bz-CH=NCH2 were observed in the 1H spectra. This is illustrated by the following: for norspermidine, Figure 1(A1) (signal a at 1.51 ppm) and, after the reaction, Figure 1(A2) (a signal at 3.1-3.7 ppm); for spermine, Figure 1(B1) (a signal at 2.76 ppm) and Figure 1(B2) (a signal at 2.5-3.9 ppm); and, for spermidine, Figure S1a (a signal at 2.76 ppm) and Figure S1b (a signal at 3.0-3.6 ppm). Moreover, all these spectra show signals in the range of 7-8 ppm coming from the attached benzyl protecting group. A proton signal from unreacted benzaldehyde, which should occur around δ = 10 ppm, was not observed. At the same time, a new signal of the protected norspermidine and spermidine appeared around δ = 4 ppm, which was attributed to the protons of the group resulting from the side reaction of intramolecular cyclization of the amine chain to form a six-membered hexahydropyrimidine derivative (signal H, Figures 1(A2) and S1b).
In the next stage of the synthesis, the obtained compounds had to be subj selective deprotection of primary amino groups.Hydrolysis of imine bonds w formed in the presence of dichloroacetic acid.The deprotection efficiency was es using the 1 H NMR spectra.In the example spectrum of norspermidine after depr (Figure 2b), a new signal was observed around δ = 10 ppm coming from the proto released benzaldehyde HOBz, and a series of signals originating from its benzy (7.54-7.89ppm) were also observed.These signals were shifted compared to the of the benzyl ring present in the derivative with protected primary amino groups 2a, 7.39-8.27ppm).After the deprotection of these groups, the remaining a, b, an nals shifted slightly, and the g signal of -CH3 protons of the secondary amine-pr group (Figure 2b, 1.44 ppm) remained unchanged.The process of deprotection mary amine groups in the remaining polyamines proceeded similarly.There was no signal H attributed to the presence of a proton coming from a hexahydropyrimidine derivative formed during the side reaction of their intermolecular cyclization on the spectra of polyamines with all amino groups protected (Figures 2a and S2). In the next stage of the synthesis, the obtained compounds had to be subjected to selective deprotection of primary amino groups.Hydrolysis of imine bonds was performed in the presence of dichloroacetic acid.The deprotection efficiency was estimated using the 1 H NMR spectra.In the example spectrum of norspermidine after deprotection (Figure 2b), a new signal was observed around δ = 10 ppm coming from the proton of the released benzaldehyde HOBz, and a series of signals originating from its benzyl group (7.54-7.89ppm) were also observed.These signals were shifted compared to the signals of the benzyl ring present in the derivative with protected primary amino groups (Figure 2a, 7.39-8.27ppm).After the deprotection of these groups, the remaining a, b, and c signals shifted slightly, and the g signal of -CH 3 protons of the secondary amine-protecting group (Figure 2b, 1.44 ppm) remained unchanged.The process of deprotection of primary amine groups in the remaining polyamines proceeded similarly. Synthesis and Properties of Linear Polyamidoamines The previously obtained and purified derivatives of the polyamines with protected secondary amine groups, as well as putrescine, were used for a polycondensation reaction with sebacoyl dichloride.The interfacial reaction was carried out at the phase boundary: water/chloroform at room temperature (Scheme 1).The optimal reaction conditions were determined by modifying data reported in the literature [20,22,23] with the help of additional experimental tests.The reaction was carried out for 1 h, regardless of the type of amine, at room temperature. 1H NMR and FTIR spectra of the obtained polymers are shown in Figures 3, 4 and S3-S5. 
The FTIR spectra of the polyamide obtained in the reaction of putrescine with sebacoyl dichloride (Figure S4) and of the polyamidoamines with a protected amino group (Figures 4(Ia,IIa) and S5a) show characteristic bands typical of polyamides: absorption bands at 3298 cm−1 related to stretching vibrations of the NH group; amide I bands, with C=O deformation vibrations occurring around 1694 cm−1; amide II bands at 1537 cm−1 related to N-H deformation vibrations coupled with C-N stretching vibrations; and amide III bands at around 1300 cm−1 corresponding to the coupled deformation vibrations of the NH bond, stretching vibrations of the C-N bond, and stretching vibrations of C-C=O. They confirm the formation of amide bonds and polyamide chain structures, as do the 1H NMR spectra (signal d), which also show the presence of the chain sequences originating from both sebacic acid (signals A, F, and E) and the polyamines used (signals a, b, and c). Under the polymerization conditions, the protection of the secondary amino group was generally stable. However, a detailed analysis of the NMR spectra of the polyamidoamines obtained with norspermidine showed that, in this case, a certain part of the amino groups undergoes self-deprotection. Analyzing the relative intensity of the signals associated with the protons of the methylene groups adjacent to the blocked amino group (Figure 3(A1); signals b and c), their intensity was lower than theoretically expected, and at the same time we observed a slightly higher intensity of the F signal associated with the methyl groups of the sebacic acid derivative. This effect is probably caused by the presence of methylene groups in the vicinity of the deblocked amino group (Figure 3).
Unfortunately, attempts to use the GPC chromatography technique to determine average molecular weights did not give reliable results, due to difficulties in selecting the solvent and measurement conditions. Based on the determined viscosities of the polymer solutions in THF, it was only possible to estimate the average viscosity molecular weights (Mv) of the obtained polymers, using previously published data. The parameters of the Mark-Houwink-Sakurada equation determined for N-trifluoroacetylated nylon 6 (a polyamide with a structure somewhat similar to the object of our research) in THF solution were used in the calculations [24]. The calculated Mv values in Table 1 should therefore be treated as indicative; they amount to approximately 20,000 g/mol. Table 1 also includes the glass transition temperatures (second run) and the melting of the semi-crystalline phase (first run) of the synthesized polymers, determined using DSC measurements (Figures S6-S9). The obtained DSC thermograms of the PAAs with protected amino groups were essentially very similar to the corresponding long-chain aliphatic polyamide thermograms [25,26]. The Tg of all the polyamidoamines with blocked amino groups was relatively low, around −25 °C to −35 °C. For polymers with no blocking groups, as in the case of the polyamide obtained with putrescine, this temperature was higher (Table 1).

Table 1. Properties of PAAs with protected secondary amine groups and after deprotection, obtained by polycondensation of sebacoyl dichloride with selected polyamines. Synthesis was carried out in an interfacial polycondensation reaction at a temperature of 25 °C for 1 h. Mv: estimated average viscosity molecular weight determined from the Mark-Houwink-Sakurada equation (the parameters determined for a solution of nylon 6 in THF were adopted [24]); ηinh: inherent viscosity; Tg: glass transition temperature; Tm: melting point of the crystalline phase; ∆H: heat of fusion of the crystalline phase.

All samples showed one endothermic peak with a broad shoulder pointing towards the lower temperature. The appearance of double melting peaks for the polyamidoamines obtained with Boc-spermidine and Boc2-spermine can be attributed to the existence of two crystalline phases, similar to the polyamide obtained in the reaction of octadecanedioic acid with diaminodecane [25].
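For readers who want to reproduce the viscosity-based estimate, the following minimal sketch applies the Mark-Houwink-Sakurada relation [η] = K·Mv^a; the K and a values below are placeholders, since the actual parameters for N-trifluoroacetylated nylon 6 in THF come from [24] and are not quoted in the text, and the intrinsic viscosity input is likewise illustrative:

```python
# Mark-Houwink-Sakurada estimate of the viscosity-average molecular weight:
#   [eta] = K * Mv**a   =>   Mv = ([eta] / K) ** (1 / a)
# K and a below are PLACEHOLDERS for the N-trifluoroacetylated nylon 6 / THF
# parameters of ref. [24]; eta_intr is likewise an illustrative input.

K = 5.0e-4   # dL/g, placeholder
a = 0.70     # placeholder exponent

def mhs_molecular_weight(eta_intr, K=K, a=a):
    """Return Mv (g/mol) from the intrinsic viscosity [eta] (dL/g)."""
    return (eta_intr / K) ** (1.0 / a)

print(f"Mv ~ {mhs_molecular_weight(0.52):,.0f} g/mol")  # ~20,000 with these inputs
```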
To examine the degree of hydrophilicity of their surfaces, thin films were produced from the obtained polyamidoamines with protected secondary amines. All the materials obtained were hydrophobic, showing a contact angle of approximately 80° (Figures S7a-S9a, Table 1). The last stage of the synthesis was the deprotection of the secondary amino groups in the final polyamides, by hydrolysis of the carbamate bonds carried out in chloroform solution in the presence of hydrochloric acid. The efficiency of the deblocking reaction was estimated using the obtained 1H NMR spectra. The structure of the obtained polyamidoamines was also confirmed using FTIR measurements. Figure 3 shows the changes that occurred in the 1H NMR spectra as a result of this process in samples of the polyamidoamines obtained in the reactions of Boc-norspermidine and Boc2-spermine. The practical disappearance, after removing the protection of the secondary amine groups in the polymers, of the g signals of the methyl protons (1.47 ppm) present in the protecting t-butyl group (Figure 3(A1,B1)) was the crucial evidence of this reaction. The spectra of polyamidoamine samples taken before purification show g′ signals (about 1.1 ppm) coming from the methyl groups of pivalaldehyde released in the deprotection of the secondary amino groups. Before and after deprotection, the presence of a d signal at approximately 5.98 ppm, originating from the proton of the amide group -NH-CO-, was observed. Weak e signals related to the amino groups, previously assigned to the protons of the secondary amino groups in the protected polyamines, also appeared (Figures 1(A1,B1) and S1a). When testing the polymers after deprotection of the amino groups, it was difficult to find an appropriate solvent for the NMR measurements. The synthesized polyamidoamines (PAAs) contain hydrophobic aliphatic segments and hydrophilic segments in the chain. When these polymers are dissolved in a non-polar solvent (chloroform), they probably form micellar structures, which causes a strong suppression of the 1H NMR signals related to the protons of the hydrophilic segments (Figure 3(B2)). In turn, in polar solvents such as DMSO + H2O, the intensity of the proton signals connected with the hydrophilic segments increases in the NMR spectrum of the same polymer, but some of the proton signals of the hydrophobic segments weaken. A similar phenomenon was previously observed and described for amphiphilic polymers with the ability to self-assemble into micellar structures [27,28]. This phenomenon is particularly visible when the hydrophilic segment is the longest, i.e. for the polymers containing repeating units of spermine derivatives with two amino groups in the chain. Moreover, in the 1H NMR spectra, very weak g signals at approximately 1.4 ppm, assigned to protons of t-butyl groups, can still be observed, which indicates that some of these groups have not been deprotected. However, in all the polymers, the number of blocked groups does not exceed 10% of the total number of secondary amino groups.
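A simple way to quantify the residual blocking reported above is to compare integrals in the 1H NMR spectrum; the sketch below does this for the residual t-butyl signal (g, ~1.4 ppm, 9H per Boc group) against the amide NH reference signal (d, ~5.98 ppm, 1H per amide). The integral values used are hypothetical placeholders, not data from the paper:

```python
# Estimate the fraction of secondary amines still Boc-protected from 1H NMR
# integrals. Integral inputs below are HYPOTHETICAL placeholders, not data.

def residual_boc_fraction(I_tbu, I_amide, H_tbu=9, H_amide=1, amides_per_amine=2):
    """
    I_tbu   : integral of the residual t-butyl signal (g, ~1.4 ppm)
    I_amide : integral of the amide NH reference signal (d, ~5.98 ppm)
    H_tbu   : protons per Boc group (9 methyl H)
    H_amide : protons per amide NH (1)
    amides_per_amine : amide NH per secondary amine in the repeat unit
                       (2:1 for the spermidine/norspermidine polymers)
    """
    boc_per_amide_h = (I_tbu / H_tbu) / (I_amide / H_amide)
    return boc_per_amide_h * amides_per_amine

# e.g. a residual t-butyl integral of 0.40 against an amide integral of 1.00:
print(f"blocked amines: {100 * residual_boc_fraction(0.40, 1.00):.0f}%")  # ~9%
```

With these illustrative integrals the estimate lands just under the 10% residual blocking stated in the text.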
The effectiveness of unblocking the amino groups in the obtained polyamidoamines was also confirmed by FT-IR measurements, by analyzing the changes in the absorption bands before and after deprotection. Figure 4I shows the spectra of the polymer obtained with spermine before and after deprotection of the amino groups; they confirm the effectiveness of the unblocking. After deprotection, a characteristic strong band was observed at 3440 cm−1, caused by N-H stretching vibrations, which confirms the presence of secondary amino groups in the obtained PAAs. In the case of the other polyamidoamines (Figure 4II), these bands are less distinct, because this range also contains bands related to the stretching vibrations of the amide N-H bonds at 3320 cm−1.

The ratio of amide to amino groups in polymers containing repeating units derived from spermidine or norspermidine is 2:1 (for comparison, in the polymer with units derived from spermine it is 2:2); therefore, the bands associated with the amino groups are less visible (Figure 4(IIb)). The successful unblocking of the amino groups is also evidenced by the following:
- the presence of a signal at about 2779 cm−1, associated with stretching vibrations characteristic of secondary and tertiary aliphatic amines;
- the presence of the amide I band (from the secondary amide groups), with C=O deformation vibrations occurring around 1670 cm−1;
- the decay of the 1663 cm−1 signal from the carbamate groups after unblocking of the amino groups;
- the formation of a band at 1543 cm−1 originating from amines, caused by N-H deformation vibrations; this is a low-intensity band that overlaps the amide II band, and its increase in intensity after unblocking is most visible for the polyamidoamines composed of spermine derivatives, which contain two amine groups in the repeating unit;
- the presence of a signal at 1188 cm−1, with C-N stretching vibrations characteristic of secondary aliphatic amines. The band observed at 1650 cm−1 is probably due to associated amines.

Most striking was the change in the wettability of the surface of the obtained PAAs after deprotection of the secondary amine groups present in the chain (Table 1). This effect was evident in the contact-angle measurements (Figures S10-S12). The polyamidoamines with blocked amino groups had a contact angle of approximately 78° to 80°, i.e., their surface was hydrophobic. After deprotection, however, the contact angle for the polymers obtained with norspermidine or spermidine was approximately 20°, so the surface of the samples became strongly hydrophilic. In the case of the PAA sample obtained with spermine (Figure S12), the exceptionally strong wettability of the sample surface made accurate measurement of the contact angle after deprotection impossible: a water droplet spread immediately after falling on the surface (initial wetting angle below 10°). In this case, the presence of two amino groups in the repeating unit of this polymer was responsible for such a strong effect.
After removal of the amino-protecting groups, changes in the thermal properties of these polymers were also noted (Table 1). There was an increase in the glass transition temperature (Tg), largest (from −35 °C to 2 °C) for the polymer obtained with spermine, which contains two amino groups in the repeating unit. This observation confirms that the t-butyl groups were responsible for the low Tg of the PAAs with blocked amino groups. First-run DSC thermograms of all tested polyamidoamines after removal of the blocking groups showed an increase in the melting temperature of the crystalline phase to approximately 130 °C and the presence of only one strong and fairly narrow melting endotherm (Figures S6-S9). A significant increase in the heat of melting was also noted, indicating an increased share of the crystalline phase for all samples except the polymer obtained with spermine. The thermograms of these PAAs were very similar to those of typical aliphatic polyamides [25,29], including the polyamide obtained with putrescine (Figure S6). It should be noted, however, that the melting point of the crystalline phase (Tm) of the polyamidoamines was approximately 100 °C lower than that of the polyamides.

Assessment of Cytotoxicity of the Obtained Polyamidoamines towards Skin Cells

The next stage of the research was to assess the suitability of the obtained PAAs for forming carriers of biologically active substances used as ingredients of dermatological or cosmetic antibacterial creams and ointments. For this reason, human skin cell lines were selected for the cytotoxicity assessment: fibroblasts (WI-38) and keratinocytes (HaCaTs). Figure 5 shows the effect of extracts of the obtained polymers on the proliferation of human fibroblasts. Cells cultured under standard conditions in the presence of the extract of the polyamide obtained with putrescine (PA) in the concentration range of 0.16-2000 µg/mL, and of the polymer obtained with spermidine (PAA2) in the range of 0.16-1000 µg/mL, showed proliferation comparable to the control group and therefore no toxicity. Interestingly, a cell-growth-stimulating effect over practically the entire range of tested concentrations was demonstrated by the polymer containing spermidine-derived units (PAA2). However, for the polymer obtained using spermine (PAA3), cell viability dropped below 60% of the control at concentrations of 62.5-2000 µg/mL, and likewise for the polymer obtained using norspermidine (PAA1) in the concentration range of 500-2000 µg/mL.
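The viability percentages above (e.g., "below 60% of the control") follow from normalizing the CCK-8 absorbances to the untreated control. A minimal sketch is shown below; the absorbance values are hypothetical, not measured data from this work.

```python
import numpy as np

# Minimal sketch: normalize CCK-8 absorbances (A450 with a 650 nm
# reference, see the Methods) to the untreated negative control to obtain
# % viability. All absorbance values below are hypothetical.
def viability_percent(a_sample: np.ndarray, a_control: np.ndarray,
                      a_background: float) -> np.ndarray:
    return 100.0 * (a_sample - a_background) / (a_control.mean() - a_background)

a_control = np.array([1.21, 1.18, 1.25])   # untreated wells (hypothetical)
a_paa3 = np.array([0.71, 0.66, 0.69])      # e.g. PAA3-treated wells
print(viability_percent(a_paa3, a_control, a_background=0.08))  # ~51-56 %
```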
Figure 6 shows the results of tests on changes in the viability of keratinocytes cultured in the presence of extracts of the same polymers. For the polyamide synthesized using putrescine (PA), no cytotoxic effect was observed regardless of the concentration. In several cases, a reduction in cell proliferation was observed after incubation in a medium containing high concentrations of the polymer extracts obtained with norspermidine (PAA1) (500-2000 µg/mL), spermidine (PAA2) (1000-2000 µg/mL), and spermine (PAA3) (250-2000 µg/mL). A growth-stimulating effect on keratinocytes occurred only in the case of the polyamide obtained with putrescine, and only at high concentrations.
To sum up, the polymers obtained with spermine (PAA3) and norspermidine (PAA1) showed strong cytotoxicity (especially the one obtained with the norspermidine derivative, the only non-endogenous polyamine used), but only at high concentrations. It is worth highlighting that, quite unexpectedly, the polyamidoamines showed greater toxicity towards fibroblasts than keratinocytes.
Preliminary Assessment of the Antibacterial and Antifungal Activities of the Obtained Polyamidoamines

According to the assumptions of this research, the presented group of synthesized polymers containing amine and amide groups in the chain, obtained with the use of endogenous amines, should demonstrate not only good biocompatibility but also strong bactericidal activity against a wide spectrum of strains. It is known from previous research that polyamines such as spermine or spermidine, and especially synthetic linear polyamines of high molecular weight, show antibacterial activity, for example against antibiotic-resistant S. aureus clones [30].

Analyses of the antibacterial and antifungal activity of the examined PAAs against selected strains were carried out at a polymer concentration of 0.1 mg/mL for 24 and 48 h. Figure 7 shows the growth of the Pseudomonas aeruginosa strain. The presence of all polymers resulted in a decrease in the counts of this bacterium. The greatest growth inhibition was observed for the polyamidoamine obtained using spermine (PAA3), whose chain unit contains two amino groups, after 48 h (the strain count decreased from 7.35 to 4.68 log10 cfu/mL) and, unexpectedly, for the polyamide (PA) obtained using putrescine, for which the count decreased to 4.62 log10 cfu/mL.
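The inhibition figures above convert directly into log reductions and percent kills. A minimal sketch follows, using the P. aeruginosa/PAA3 values quoted above.

```python
# Minimal sketch: convert the log10 cfu/mL counts quoted above into a
# log reduction and a percent kill relative to the control. For the
# P. aeruginosa / PAA3 case: control 7.35, treated 4.68 log10 cfu/mL.
def log_reduction(log_control: float, log_treated: float) -> tuple[float, float]:
    dlog = log_control - log_treated
    percent_kill = 100.0 * (1.0 - 10.0 ** (-dlog))
    return dlog, percent_kill

dlog, kill = log_reduction(7.35, 4.68)
print(f"{dlog:.2f}-log reduction ≈ {kill:.2f}% of cells killed")  # ~2.67 log, ~99.79 %
```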
Results for the antibacterial activity of the obtained polyamides against Staphylococcus aureus are illustrated in Figure 8. The strongest inhibition of the growth of this strain was observed after 24 h of contact with the polyamide prepared with putrescine (PA): the cell count was only 3.8 log10 cfu/mL. After 48 h, this value increased significantly to 6.87 log10 cfu/mL but did not exceed the level of the control sample (7.8 log10 cfu/mL). The polyamidoamine synthesized with spermine (PAA3) also had a strong antibacterial effect, regardless of the incubation time (a decrease from 7.8 to 4.63 log10 cfu/mL). After 48 h of incubation, only the polyamidoamine obtained with norspermidine (PAA1) showed almost no antibacterial activity. Figure 9 presents the results of the antibacterial activity of the synthesized polymers against the Staphylococcus epidermidis strain.
In this case, the polyamide obtained with putrescine (PA) was inactive. All the other polyamidoamines showed a strong effect, with the polymer containing a spermine derivative (PAA3) being particularly active: after 48 h, a decrease in the cell concentration from 9.0 to 3.3 log10 cfu/mL was recorded.

Figure 10 shows the antibacterial activity of the tested polymers against the E. coli strain. Virtually all samples showed high growth-inhibiting activity, regardless of the exposure time. The strongest growth-inhibition effect was determined for the PAA3 polymer obtained with spermine after 24 h: the number of E. coli cells decreased to 4.86 log10 cfu/mL, and after 48 h there was a slight further decrease to 4.68 log10 cfu/mL.

The antifungal activity of the obtained polyamidoamines against two selected strains, largely responsible for the most common clinical cases of candidiasis [31] and aspergillosis [32], is demonstrated in Figures 11 and 12. Figure 11 illustrates the decrease in the number of Candida albicans cells after 24 h and 48 h of contact with the tested polymers.
The growth inhibition of this strain was the strongest in the culture carried out in contact with the polyamidoamine obtained with norspermidine (PAA1): after 48 h, a decrease to 4.52 log10 cfu/mL was recorded. In the remaining samples, a temporary inhibition of fungal growth was noted after 24 h, while after another 24 h the cells multiplied, even exceeding the cell count of the control sample. The next graph (Figure 12) shows the results of activity tests against the Aspergillus brasiliensis strain. In this case, the greatest decrease in the cell count was observed after 24 h of contact with the PAA3 polymer obtained with spermine (2.59 log10 cfu/mL); after another 24 h, a slight increase was noted (3.56 log10 cfu/mL). For the remaining samples, the obtained values exceeded those of the control sample.
To sum up, the antibacterial effect of the obtained polyamides was in many cases strong. This was particularly visible for E. coli, where practically all polymers strongly inhibited cell growth. The broadest spectrum of activity was demonstrated by the polyamidoamine PAA3 obtained with spermine and, surprisingly, by the polyamide PA obtained with putrescine against S. aureus. The polymers obtained with spermine and putrescine also showed fungicidal properties; however, this activity was much weaker than the antibacterial effect. The particular activity of the polyamidoamine obtained from spermine can be explained by the greater number of amino groups in this polymer. The activity of the polyamide obtained with putrescine, i.e., one containing only amide groups and no amino groups, is difficult to explain at this stage of the research. It does show, however, that the amide groups in the polymer are not neutral and enhance the antibacterial activity of the amines, similarly to the previously described composite of polyamide 11 and a guanidine derivative [33], or fibers formed from polyamides grafted with ammonium derivatives [34].
Discussion

Linear polyamidoamines can be obtained by polycondensation of endogenous polyamines with sebacoyl dichloride via an interfacial reaction. Some difficulties arise in the preparation of the monomers and the protection of the secondary amine groups of the polyamines, which is necessary before polymerization. The polymerization itself, using the method described in this work, is relatively easy and quick, and the protection of the secondary amine groups of the monomers was stable throughout the polymerization. The described synthesis procedure yields linear, soluble polymers with a melting point below the decomposition temperature, and no noticeable amounts of by-products are found in the polycondensation products. However, during polymerization there is practically no control over chain growth, which makes it unlikely that high-molecular-weight polyamidoamines can be obtained by this route. For this reason, the obtained polymers will not be suitable for forming implants or scaffolds for tissue culture, but they can serve as an interesting material for biodegradable antibacterial drug carriers in the form of micro- and nanoparticles, or for temporary antibacterial coatings.

We confirmed that the polyamidoamines synthesized using endogenous amines demonstrate relatively low toxicity. Comparing the cytotoxicity of the polymers obtained from norspermidine and spermidine, which are very similar in chain structure and properties, we noticed that the spermidine-based PAA, obtained from an endogenous polyamine, is much less cytotoxic than the other. A cytotoxic effect on the tested human skin cell lines occurred only at high concentrations of the polymer containing norspermidine-derived repeating units and of the polymer containing spermine-derived units. In the latter case the reason is clear, since this polyamidoamine contains two amino groups in the repeating unit of the chain, while the others contain only one. Evidently, the toxicity of this type of polymer is determined by the presence of amines, as evidenced by the lack of cytotoxicity of the polyamide obtained from putrescine. The toxic effect also depends on the cell type: human keratinocytes were significantly more resistant to contact with the obtained polymers than fibroblasts.

The performed tests demonstrated the bactericidal activity of the synthesized polymers. The polyamidoamine obtained from spermine exhibited particular activity, which can be explained by the larger number of amino groups in its chain. Unlike the polyesteramines described in our previous report [35], the polymers obtained from spermine and putrescine also presented fungicidal activity, though this was much weaker than their antibacterial activity. The quite unexpected strong bactericidal action against Staphylococcus aureus of the polyamide obtained with putrescine, i.e., a polymer without amino groups, is difficult to explain at this stage and requires a much more comprehensive study. It does show, however, that the amide groups in the polymer are not neutral and enhance the antibacterial activity of the amines, similarly to the composite of polyamide 11 and a guanidine derivative [33], or fibers formed from polyamides grafted with ammonium derivatives [34].
In our opinion, further, more detailed studies of biogenic-amine-based polymers, covering optimization of the synthesis conditions, biodegradation rates, and amphiphilic properties, should be performed, because these materials are very promising from the cosmetology and dermatology points of view. They could be used, for example, as micro- or nanocarriers of biologically active substances in antibacterial creams or ointments, as well as in dressings for non-healing wounds. The use of the polymers described in this work as nanocarriers of selected antibiotics may, owing to a synergistic effect, also be an effective weapon in the fight against drug-resistant bacteria [36-38].

Procedures for the Protection of Primary Amine Groups in the Polyamines

Selective protection of the primary amino groups in norspermidine, spermine, and spermidine was carried out via reaction with benzaldehyde, converting the amines to imines (Schiff bases), according to a modified literature method [21,39].

In a three-neck flask with a capacity of 25 mL, equipped with a Dean-Stark trap, a reflux condenser, and a magnetic stirrer, norspermidine (0.038 mol, V = 5 mL) was placed and dissolved in 1 mL of anhydrous toluene. After dissolution, benzaldehyde (0.076 mol, V = 8.06 mL) was added; the molar ratio of norspermidine to aldehyde used was 1:2.1. The reaction mixture was heated at 100 °C, and the synthesis was continued until water ceased to separate in the Dean-Stark trap; the reaction time was approximately 3 h. Selective protection of the primary amino groups occurred under these azeotropic conditions. After cooling the reaction mixture, the solvent was evaporated and the product was dried, giving a viscous yellow liquid.

The protection of the primary amino groups in spermidine and spermine was carried out analogously.
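A minimal reagent calculator for the Schiff-base protection step is sketched below. The roughly twofold aldehyde excess follows the ratio quoted above, while the molecular weight and density of benzaldehyde are standard literature values, not taken from this work.

```python
# Minimal sketch: reagent amounts for the Schiff-base protection step.
# The ~1:2.1 amine:aldehyde molar ratio follows the procedure above;
# the MW of benzaldehyde (106.12 g/mol) and its density (1.044 g/mL) are
# standard literature values, used here as assumptions.
def benzaldehyde_volume_mL(n_amine_mol: float, n_primary_NH2: int = 2,
                           excess_per_NH2: float = 1.05) -> float:
    n_ald = n_amine_mol * n_primary_NH2 * excess_per_NH2  # ~1:2.1 overall
    return n_ald * 106.12 / 1.044

# For 0.038 mol norspermidine (two terminal NH2 groups):
print(f"benzaldehyde: {benzaldehyde_volume_mL(0.038):.2f} mL")  # ~8.1 mL
```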
Procedures for the Protection of Secondary Amine Groups in the Polyamines

After protecting the primary amino groups in norspermidine, spermidine, and spermine, it was possible to protect their secondary amino groups so as to obtain blocking groups more stable than those on the terminal primary amines. Following a modified procedure described previously [40], this was performed by reaction with di-tert-butyl dicarbonate, (Boc)2O (Scheme 2).

The reaction set consisted of a three-neck reactor with a capacity of 250 mL, a reflux condenser, and a dropping funnel; the reactions were carried out under azeotropic conditions. In the reactor, bis-benzylideneimino-norspermidine (m = 10 g, n = 0.0338 mol, M = 295 g/mol) was dissolved in 40 mL of anhydrous THF. After dissolution of this norspermidine derivative, di-tert-butyl dicarbonate (DDC) (M = 228 g/mol, n = 0.0338 mol, V = 7.716 mL), dissolved in 5 mL of anhydrous THF, was added. The DDC/THF solution was added dropwise slowly under intensive stirring, and the reaction solution was heated to 40-50 °C. During the reaction, gas bubbles were visible and a precipitate formed, which dissolved over time. The reaction took approximately 5 h. After cooling the reaction mixture, the solvent was distilled off on a rotary evaporator; the reaction produced an oily liquid. A similar procedure was followed when blocking the spermidine derivative (2b): it was dissolved (m = 3.5 g, n = 0.00966 mol, M = 362 g/mol) in 14 mL of anhydrous THF, and then a solution of DDC (m = 4.41 g, n = 0.01932 mol) in 2 mL of THF was added dropwise. In the protection of the secondary amino groups of the spermine derivative (2c), the molar ratio of DDC to polyamine was 2:1.

Scheme 2. Protection of the secondary amino groups in the derivatives of (2a) norspermidine, (2b) spermidine, and (2c) spermine by reaction with di-tert-butyl dicarbonate, yielding the amino-blocked analogues of (3a) norspermidine, (3b) spermidine, and (3c) spermine.
Selective Deprotection of Primary Amine Groups in the Polyamines

To prepare polyamine derivatives with free primary and protected secondary amines (Scheme 3, (3a)-(3c)) for the polycondensation reaction, the primary-amine blocking groups were removed with dichloroacetic acid (DCA) in an aqueous medium. The reaction was carried out under conditions chosen so as to maintain the protection of the secondary amino groups. Compound (3a) (m = 5 g, n = 0.0381 mol, M = 131.22 g/mol) was dissolved in ethyl acetate (AcOEt, 1.4 mL). Then, a DCA/H2O mixture (2.8 mL, 50/50% v/v) was added and the mixture was stirred for 1 h at room temperature. All organic extracts were repeatedly washed, in turn, with half their volumes of 1 M aqueous KHSO4 and then 1 M aqueous NaHCO3. After washing, the extract was dried over anhydrous MgSO4. Finally, the solvents and volatile post-reaction by-products were evaporated from the final extract using a rotary evaporator. Compounds (3b) and (3c) were treated similarly.

Scheme 3. Deprotection of the primary amino groups in compounds (3a), (3b), and (3c) to produce the (4a) norspermidine, (4b) spermidine, and (4c) spermine analogues with protected secondary amine groups.
Synthesis of Polyamidoamines

A series of polyamidoamines was obtained by interfacial polymerization using the previously modified polyamines containing active terminal primary amine groups and protected secondary groups (Scheme 1). The interfacial reaction was carried out following earlier descriptions [20,22], using two mutually immiscible solvents: water and chloroform (CHCl3) as the organic phase. A measured amount of polyamine (0.05 mol) was placed in a 250 mL round-bottom reactor equipped with a magnetic stirrer and dissolved in water (100 mL). After the amine had completely dissolved, Na2CO3 (0.1 mol) was added with continued stirring. Separately, an organic phase consisting of a solution of sebacoyl chloride (0.05 mol) in chloroform (100 mL) was prepared in a separating funnel. The contents of the funnel were quickly poured into the reactor containing the aqueous phase with the dissolved amine. The synthesis of the polyamidoamines was carried out at room temperature, with the contents of the reactor stirred at maximum rotation speed. During the reaction, the formation of a layer between the organic and aqueous phases was visible, and salt precipitated as a white solid. The process took about 1 h. The mixture was purified by separating the aqueous and solid (salt) phases from the organic phase. The synthesized polymer, dissolved in the organic phase, was then precipitated in cold diethyl ether. The resulting product was washed with water and dried in a vacuum oven.

Procedures for Deprotecting Amino Groups in Polyamidoamines

Polymers containing carbamate-protected amine groups (2.5 g) were dissolved in dichloromethane (20 mL) and placed in a glass reactor equipped with a stirrer. After complete dissolution, 40 mL of a 50:50 (v/v) HCl/H2O mixture was added and the mixture was stirred. The reaction was carried out at room temperature for 2 h. The solvents and volatile components of the reaction mixture were evaporated on a rotary evaporator. The obtained product was precipitated in cold diethyl ether or hexane and dried to a solid mass.

Measurements

4.5.1. Nuclear Magnetic Resonance (NMR) Spectroscopy

The composition of the polymers was determined by NMR measurements. The 1H NMR spectra of the copolymers were recorded at 600 MHz with an Avance II Bruker Ultrashield Plus spectrometer (Billerica, MA, USA) using a 5 mm sample tube. Deuterated DMSO-d6 or chloroform was used as the solvent, with tetramethylsilane as the internal standard. All 1H NMR spectra were obtained with 32 scans, a 2.65 s acquisition time, and an 11 µs pulse at 26 °C. The assignment of signals in the obtained spectra was based on assignments described earlier [41-43].
Thermal Properties

Thermal properties, such as the glass transition temperatures and the heats of melting and crystallization of the obtained copolymers, were examined by differential scanning calorimetry (DSC), using a DuPont 1090B apparatus calibrated with gallium and indium. The glass transition temperature was determined at a heating and cooling rate of 20 °C/min in the range between −100 and 220 °C, according to the ASTM E 1356-08 standard [44].

Fourier Transform Infrared (FTIR) Spectroscopy

The spectra were recorded in KBr discs in the range of 4000-400 cm−1, averaging 64 scans per sample, using a JASCO FT/IR-6700 spectrophotometer (Easton, MD, USA) with a resolution of 2 cm−1.

Wettability

Wettability tests were performed on polymeric films prepared by dissolving 0.5 g of each polymer in 10 mL of dichloromethane (DCM), pouring the solution into a 9 cm diameter glass Petri dish, and leaving the solvent to evaporate for 24 h. Tests were performed with a drop shape analysis system (DSA 25, Kruss, Germany) by the sessile drop method, using ultra-high-quality water (UHQ water produced in a UHQ PS apparatus, Elga). The surface free energy (SFE) was calculated according to the Owens-Wendt equation, using water and diiodomethane (Sigma Aldrich, Germany) as the polar and non-polar liquids, respectively. In each case, 10 drops (0.5 µL in volume) were deposited on the surface of the samples and the contact angle was measured automatically.

Determination of Intrinsic Viscosity and Estimated Viscosity-Average Molecular Mass

The intrinsic viscosity [η] of the products in THF was determined at 25 °C using an automatic Ubbelohde viscometer. The viscosity-average molecular mass (Mv) was estimated with the Mark-Houwink-Sakurada equation, [η] = K·Mv^a, where the calculations used previously determined parameters for nylon 6 in THF at room temperature: K = 1.66 × 10−2 mL/g and a = 0.7 [24].

Assessment of Cytotoxicity of the Obtained Polyamidoamines

Cytotoxicity testing was performed following the ISO 10993-5 standard [45]. Human WI-38 fibroblasts (CCL-75), obtained from the ATCC, were cultured in DMEM supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 µg/mL streptomycin. Human keratinocytes (HaCaTs) were purchased from the Cell Line Service (CLS) and cultured in DMEM supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, 100 µg/mL streptomycin, and 2 mM L-glutamine. HEPES (10 mM, pH 7.3) was also added to the experimental cultures. The cells were incubated at 37 °C under 5% CO2. Before cell culture, the materials were sterilized with a UV lamp. Each sample was placed in a vial and DMEM was added to obtain a concentration of 1000 µg/mL. The samples were incubated at 37 °C for 24 h. After this time, dilutions of the extract were prepared in the concentration range of 0.78-1000 µg/mL. To test for cytotoxicity, 100 µL of the cell suspension, containing 4 × 10^3 cells, was transferred to the wells of 96-well plates and cultured in standard medium for 24 h to ensure cell adhesion. After 24 h, the medium was replaced with a medium containing the extract of the tested material. Cells were incubated with the tested extracts for 72 h. Untreated cells were used as a negative control (K−), and cells treated with 5% DMSO were used as a positive control (K+). Cell viability was assessed using the Cell Counting Kit-8. Absorbance was read at 450 nm (reference: 650 nm). Statistical analysis was performed with the Statistica 10.0 program, using one-way ANOVA; differences at the significance level of p < 0.05 were considered statistically significant.
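The Owens-Wendt calculation mentioned in the Wettability subsection above amounts to solving two linear equations for the square roots of the dispersive and polar components of the solid surface energy. A minimal sketch follows; the liquid surface-tension components are typical literature values and the contact angles are hypothetical inputs, not data from this work.

```python
import numpy as np

# Minimal sketch of the Owens-Wendt surface-free-energy calculation:
# gamma_L*(1 + cos(theta)) = 2*(sqrt(gS_d*gL_d) + sqrt(gS_p*gL_p)).
# Liquid components (mN/m) are typical literature values; the contact
# angles below are hypothetical, not measured data from this work.
liquids = {  # name: (gamma_total, dispersive, polar)
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
theta_deg = {"water": 80.0, "diiodomethane": 45.0}

# Two liquids -> two linear equations in x = sqrt(gS_d), y = sqrt(gS_p)
A, b = [], []
for name, (gL, gLd, gLp) in liquids.items():
    A.append([2.0 * np.sqrt(gLd), 2.0 * np.sqrt(gLp)])
    b.append(gL * (1.0 + np.cos(np.radians(theta_deg[name]))))
x, y = np.linalg.solve(np.array(A), np.array(b))
print(f"SFE = {x**2 + y**2:.1f} mN/m (dispersive {x**2:.1f}, polar {y**2:.1f})")
```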
Assessment of Antibacterial and Antifungal Properties

The antibacterial and antifungal activity of the tested samples and the inhibitory concentrations were estimated by the microtiter broth dilution method, according to the recommendations of the Clinical and Laboratory Standards Institute (1055 Westlakes Drive, Suite 300, Berwyn, PA 19312, USA) [46]. Samples of each polymer were prepared at concentrations of 20, 10, 1, and 0.1 mg/mL as an aqueous solution (in the case of water-soluble samples) or an aqueous suspension and were tested promptly. The most interesting results, obtained at the lowest polymer concentration tested, are presented in this work.

Tubes without test compounds were used as positive growth controls. A diluted microbial suspension was added to each tube to obtain a final concentration of 5 × 10^5-5 × 10^6 colony-forming units (cfu)/mL, as confirmed by the number of viable cells (determined by turbidimetry). Sterile medium without inoculum served as the negative growth control. The plates were incubated at 37 °C for 24 and 48 h. The contents of the tubes showing no visible growth were plated on selective media and, after overnight incubation at 37 °C, the number of colonies was counted. At least three independent determinations were made for each strain and the modal value was taken. The following strains were selected for testing: Pseudomonas aeruginosa, Staphylococcus aureus, Staphylococcus epidermidis, Escherichia coli, Candida albicans, and Aspergillus brasiliensis.
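For reference, the log10 cfu/mL values reported in Figures 7-12 follow from plate counts in the standard way. A minimal sketch is given below; the colony counts, dilution factor, and plated volume are hypothetical.

```python
import math

# Minimal sketch: convert plate counts to the log10 cfu/mL values reported
# in Figures 7-12. Colony counts, dilution factor, and plated volume below
# are hypothetical, for illustration only.
def log10_cfu_per_mL(colonies: int, dilution_factor: float,
                     plated_volume_mL: float = 0.1) -> float:
    cfu_per_mL = colonies * dilution_factor / plated_volume_mL
    return math.log10(cfu_per_mL)

# e.g. 48 colonies on a 10^-4 dilution plate, 0.1 mL plated:
print(f"{log10_cfu_per_mL(48, 1e4):.2f} log10 cfu/mL")  # ~6.68
```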
Figure 2. 1H NMR spectra (in CDCl3) of a norspermidine derivative: (a) after the protection of all amino groups and (b) after selective deprotection of the primary amino groups (before cleaning of by-products).
Figure 4. FTIR spectra of the polyamidoamines obtained with spermine (I) and with spermidine (II): (a) before and (b) after deprotection of the (secondary) amino groups.
Figure 7. Summary of growth results for the Pseudomonas aeruginosa strain.
Figure 8. Summary of growth results for the Staphylococcus aureus strain.
Figure 9. Summary of growth results for the Staphylococcus epidermidis strain.
Figure 10. Summary of growth results for the E. coli strain.
Figure 11. Summary of growth results for the Candida albicans strain.
Figure 12. Summary of growth results for the Aspergillus brasiliensis strain.
Molecular Perspectives of Interfacial Properties in the Water+Hydrogen System in Contact with Silica or Kerogen

Interfacial behaviours in multiphase systems containing H2 are crucial to underground H2 storage but are not well understood. Molecular dynamics simulations were conducted to study the interfacial properties of the H2O+H2 and H2O+H2+silica/kerogen systems over a wide range of temperatures (298-523 K) and pressures (1-160 MPa). The combination of the H2 model with the INTERFACE force field and the TIP4P/2005 H2O model accurately reproduces the experimental interfacial tensions (IFTs). The IFTs from simulations are also in good agreement with those from the density gradient theory coupled to the PC-SAFT equation of state. Generally, the IFTs decrease with pressure and temperature. However, at relatively high temperatures and pressures, the IFTs increase with pressure. This opposite pressure effect on the IFTs can be explained by the inversion of the sign of the relative adsorption of H2. The enrichment of H2 in the interfacial regions was observed in the density profiles. Meanwhile, the behaviours of the contact angles (CAs) in the H2O+H2+silica system are noticeably different from those in the H2O+H2+kerogen system. The H2O CAs for the H2O+H2+silica and H2O+H2+kerogen systems increase with pressure and decrease with temperature; however, the effect of temperature and pressure on these CAs is less pronounced for the H2O+H2+silica system at low temperatures. The behaviours of the CAs were understood based on the variations of the IFTs in the H2O+H2 system (fluid-fluid interaction) and the adhesion tensions (fluid-solid interaction). Furthermore, analysis of the atomic density profiles shows that the presence of H2 between the H2O droplet and the silica/kerogen surface is almost negligible. Nevertheless, the adsorption of H2O on the silica surface outside the H2O droplet is strong, while less H2O adsorption is seen on the kerogen surface.

Introduction

With the rapidly growing demand for decarbonized energy supply, hydrogen as a clean fuel has drawn great attention, since it can potentially be used to progressively replace fossil fuels. 1,2 Although the energy per mass of H2 is large, large-scale H2 storage is a serious problem, mainly because of the low energy per volume of H2. Geological sites including salt caverns, deep saline aquifers, basaltic formations, coal seams, and depleted oil/gas reservoirs have been suggested for underground hydrogen storage. 3-5 Hitherto, only salt caverns have been operated to store H2 at industrial scale. 2,4 Considering that salt caverns are not common worldwide, investigations of other types of storage sites are of great significance. Structural, residual, adsorption, dissolution, and mineral trapping are the typical mechanisms for trapping gas in geological formations. 6-8 Among them, structural and residual trapping are considered the most important mechanisms, and capillary forces are critical there, as they determine the capacity and stability of the gas storage. 9,10 There are many experimental studies 8,11-20 focusing on the interfacial properties of multiphase systems with H2 that dictate the capillary forces in geological formations. For instance, the interfacial tensions (IFTs) of the H2O+H2 system decrease with pressure (1-40 MPa) and temperature (298-523 K), and the reduction of IFT with pressure is smaller at higher temperatures. 11-15
It has been shown that the contact angles (CAs) in the brine+H2+quartz system fall in the range from 0° to 50°. 8 Increasing pressure increases the CAs in brine+H2+silicate systems. 8,16 The CAs of brine/H2 on mica decrease with increasing temperature, while an opposite temperature effect on CAs on quartz was reported. 8,16 Meanwhile, the CAs in the brine+H2+bituminous coal system increase with pressure at 298 K, and the pressure effect is moderate at 323 and 343 K. 17

Molecular simulations have been applied to understand the interfacial behaviours in gas+water 21-25 and gas+water+solid systems. 25-30 The IFTs from molecular dynamics (MD) simulations are generally in good agreement with experimental data and density gradient theory (DGT) predictions. 21,22,24,25 The IFTs in the H2O+CH4 and H2O+CO2 systems decrease with temperature, 21-24 and first decrease and then increase as pressure increases. 21,22 This opposite pressure effect on IFT was attributed to the inversion of the sign of the relative adsorption of gas obtained from the component density distributions. The simulated CAs in the H2O+CO2+silica system increase as pressure increases and temperature decreases. 26,30 Similar temperature effects on CAs were reported for the H2O+CH4+kerogen system. 27 Interestingly, the CAs from MD simulations of the H2O+CO2+kerogen system increase from 60° (H2O-wet) to 180° (CO2-wet) when the pressure increases from 0 to 44 MPa. 28 Moreover, the thickness of the CO2 film between the H2O droplet and the kerogen surface increases with pressure. Nevertheless, a molecular-level understanding of multiphase systems with H2 is lacking.

In this article, MD simulations were conducted to study the interfacial properties of the H2O+H2 and H2O+H2+solid systems over a wide range of temperatures (298-523 K) and pressures (1-160 MPa). Silica and kerogen were selected to represent the solid phase as they are abundant in geological formations. 8,16,17 The simulation results for the fluid system were complemented by DGT calculations with the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) equation of state (EoS). Details regarding the calculation and analysis of the interfacial tension, relative adsorption, enrichment, solubility, contact angle, adhesion tension, density distribution, and capillary pressure can be found in the following sections.

Simulation details

MD simulations were conducted using the LAMMPS 31 package. Our simulation systems include H2O, H2, silica, or kerogen (see Fig. 1). The interactions between the i and j molecular sites of different molecules are treated according to a pairwise-additive Lennard-Jones (LJ) plus Coulomb potential. The LJ contribution reads

u_LJ(r_ij) = 4 ε_ij [ (σ_ij/r_ij)^12 − (σ_ij/r_ij)^6 ],

where r_ij is the distance between the centers of the i and j sites. The parameter ε_ij controls the strength of the short-range interactions, and the LJ diameter σ_ij is used to set the length scale. The LJ parameters σ_ij and ε_ij are deduced from the Lorentz-Berthelot combining rules: 32

σ_ij = (σ_ii + σ_jj)/2,  ε_ij = (ε_ii ε_jj)^(1/2).

The charged sites interact with each other via the Coulomb potential:

u_C(r_ij) = q_i q_j / (4π ε_0 r_ij),

where q_i and q_j are the partial charges of sites i and j, respectively, and ε_0 is the dielectric permittivity of vacuum. The LJ parameters and charges of all the fluid molecules used in this study are presented in Table 1. In the first H2 model, which is flexible, the H-H bond is described by a harmonic potential,

u_bond(r_HH) = K_bond (r_HH − r_0,HH)^2,

where K_bond is the force constant (350 kcal/mol/Å^2), r_HH is the bond length, and r_0,HH is the equilibrium bond length of the H2 molecule (0.7414 Å).
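As an illustration of the site-site potential defined above, the following sketch evaluates the LJ-plus-Coulomb pair energy with Lorentz-Berthelot mixing. The sigma, epsilon, and charge values are approximate placeholders and do not reproduce Table 1 of this work.

```python
import math

# Minimal sketch of the pairwise site-site energy described above:
# Lennard-Jones with Lorentz-Berthelot mixing plus a bare Coulomb term.
# The sigma/epsilon/charge values below are placeholders, not the actual
# parameters of Table 1 of this work.
COULOMB_K = 332.0637  # kcal*A/(mol*e^2), i.e. 1/(4*pi*eps0) in MD "real" units

def lb_mix(sig_i, eps_i, sig_j, eps_j):
    return 0.5 * (sig_i + sig_j), math.sqrt(eps_i * eps_j)

def pair_energy(r, sig_i, eps_i, q_i, sig_j, eps_j, q_j):
    sig, eps = lb_mix(sig_i, eps_i, sig_j, eps_j)
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) + COULOMB_K * q_i * q_j / r

# e.g. a water-O-like site vs an H2-like LJ sphere at 3.5 A (placeholders):
print(f"{pair_energy(3.5, 3.1589, 0.1852, 0.0, 2.958, 0.0726, 0.0):.4f} kcal/mol")
```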
The second H2 model comprises a single LJ sphere taken from Hirschfelder et al. 36 The third model includes an LJ sphere and a quadrupole moment described with three charged particles, which properly reproduces the experimental gas-phase quadrupole moment of H2 according to Alavi et al. 37

Silica and kerogen were used to represent the solid phase for the wettability study. Silica (SiO2) with Q3/Q4 surface environments was described using the IFF model. 38 The silanol (-OH) density of these silica surfaces was 2.4 per nm^2, and the surfaces are assumed to be nonionized. 38 It has been shown that the water CAs on silica surfaces vary with the density of silanol groups. 39 In real geological formations, the true surface of quartz should be a combination of the possible functional groups (Q2, Q3, Q4, SiO−, etc.). 39 To simplify the problem, we focus on one type of surface with a silanol density in between those of the Q3 and Q4 surfaces. The entire silica framework, except the H atoms, was kept rigid during the simulations.

The type II-D kerogen molecular model developed by Ungerer et al. 40 was chosen to model mature kerogens. The chemical formula of the exemplified kerogen is C175H102O9N4S2. The aromaticity of this kerogen is around 79%, and the H/C and O/C ratios are 0.58 and 0.051, respectively. 40 It is important to note that the type and the thermal maturity of the kerogen have significant effects on water CAs. 27 Here, the overmature stage of type II kerogen was selected as it is associated with unconventional gas reserves such as the Barnett shale. 40 The force-field parameters for the kerogen macromolecule were taken from the consistent valence force field (CVFF). 41 For the construction of the kerogen surface, we followed the protocol described in Jagadisan and Heidari. 27 Briefly, 36 kerogen molecules were first randomly placed in a simulation box. Then two LJ walls were inserted on both sides of the simulation box. One of the walls was rigid, while the other one was allowed to move. The kerogen molecules were then compressed together by exerting an external force on the moving wall. The final density of the kerogen plate is around 1.3 g/cm^3, which falls within the range of experimental data for type II kerogen (1.18-1.35 g/cm^3). 42 All kerogen atoms are flexible during the simulations.

Fig. 1(a) presents an equilibrium snapshot of the H2O+H2 two-phase system. We employed 2048 H2O and up to 1400 H2 molecules for this system. The box sizes in the x- and y-directions were fixed at 36 Å, which is large enough to remove finite-size effects. 23,25 Three-dimensional periodic boundary conditions were implemented. The equilibrium box length in the z-direction (Lz) is 3-7 times larger than the lateral cell length, depending on the temperature and pressure conditions. The velocity Verlet algorithm was employed to integrate the coupled Newton's equations. The NPzT (constant number of molecules, pressure in the z-direction, and temperature) equilibration and NVT (constant number of molecules, volume, and temperature) production runs were 5 and 5 ns, respectively. The Nosé-Hoover thermostat with a relaxation time of 100 fs and the Nosé-Hoover barostat with a relaxation time of 1000 fs were applied to control the temperature and pressure, respectively.
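As a quick consistency check, the elemental ratios quoted above follow directly from the kerogen unit formula, assuming the type II-D formula given above.

```python
# Quick check of the quoted kerogen composition: assuming the type II-D
# unit formula C175H102O9N4S2, verify H/C ~ 0.58 and O/C ~ 0.051 as
# quoted in the text.
C, H, O = 175, 102, 9
print(f"H/C = {H / C:.3f}, O/C = {O / C:.3f}")  # 0.583 and 0.051
```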
The IFT in the two-phase system is evaluated from the pressure tensor of the simulation box according to the Kirkwood and Buff approach:43 γ = (L_z/2)[⟨P_zz⟩ − (⟨P_xx⟩ + ⟨P_yy⟩)/2], where P_xx, P_yy, and P_zz denote the three diagonal components of the pressure tensor, L_z is the simulation box length in the z-direction, and the prefactor of 1/2 accounts for the two interfaces of the periodic slab. Three independent trajectories were generated with different initial conditions for evaluating the error bars. It has been shown that long-range interactions have a significant influence on the IFT;34,44 this effect is examined in Fig. S1. For the three-phase systems, a water droplet was placed on the solid plate (the thicknesses of the silica and kerogen plates were approximately 26 and 17 Å, respectively). A bounding piston (with short-ranged interactions) that can be moved in the z-direction is used to control the bulk pressure.46 The NPT equilibration and NPT production runs were 6 and 12 ns, respectively. The contact angles were determined using polynomial fits to the density profiles of the water droplets.39,45,47 The error bars of the contact angles were computed based on the standard deviation of the averages of 4 blocks with a block length of 3 ns. Theoretical details The PC-SAFT EoS was applied to estimate the bulk properties of the fluid mixture. This EoS can be expressed via the compressibility factor Z:48,49 Z = 1 + Z_hc + Z_disp + Z_assoc, where Z_hc is the hard-chain term, Z_disp is the dispersive part, and Z_assoc represents the contribution due to association. Z is a function of the segment number m_i, the segment diameter σ_i, and the segment energy parameter ε_i. The parameters for a pair of unlike segments were estimated using the Lorentz-Berthelot combining rules:48 σ_ij = (σ_i + σ_j)/2 and ε_ij = (1 − k_ij)(ε_i ε_j)^(1/2), where k_ij is the binary interaction parameter. The EoS parameters for the pure components were taken from previous studies,50,51 and in the absence of literature data, the k_ij for the H2O-H2 pair was fitted to experimental solubility data.52 All parameters are given in Tables S1 and S2. The PC-SAFT EoS was coupled with the DGT for the estimation of interfacial properties. In DGT, for a planar interface of area A, the Helmholtz free energy is given as:53,54 F = A ∫ [f_0(n) + (1/2) Σ_i Σ_j c_ij (dn_i/dz)(dn_j/dz)] dz, where f_0 denotes the Helmholtz free energy density of the homogeneous fluid at the local density n, and dn_i/dz represents the local density gradient of the ith component. The cross influence parameter c_ij is defined as:22,55 c_ij = (1 − β_ij)(c_ii c_jj)^(1/2), where c_ii and c_jj represent the pure-component influence parameters, and β_ij denotes the binary interaction coefficient. These parameters were taken either from a previous study56 or fitted to experimental data,57 and are provided in Tables S3 and S4. In equilibrium, the density profiles across the interface were evaluated through the minimization of the free energy by solving the corresponding Euler-Lagrange equations, Σ_j c_ij d²n_j/dz² = µ_i^0(n) − µ_i for i = 1, …, N_c, where µ_i^0 is the chemical potential of the ith component in the bulk phase (µ_i^0 ≡ (∂f_0/∂n_i)_{T,V,n_j}), µ_i represents the equilibrium chemical potential of the ith component, and N_c denotes the total number of components. The nonlinear equations were discretized by a finite-difference method and solved by Newton-Raphson iteration with an in-house code. A total of 200 equidistant grid points were used. Linear density profiles were taken as the initial guess. The interfacial thickness l is initially assumed to be 10 Å and then gradually increased until convergence is reached for the IFT value.53,54 The boundary conditions were obtained from flash calculations:58,59 n_i(z → −∞) = n_i^I and n_i(z → +∞) = n_i^II, where n_i^I and n_i^II represent the bulk densities of the coexisting phases.
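The Kirkwood-Buff expression above reduces to a few lines of post-processing once the diagonal pressure-tensor components have been sampled. The following Python fragment illustrates this with synthetic placeholder data standing in for real simulation output.

```python
import numpy as np

def kirkwood_buff_ift(pxx, pyy, pzz, lz):
    # gamma = (Lz/2) * ( <Pzz> - (<Pxx> + <Pyy>)/2 );
    # the prefactor 1/2 accounts for the two interfaces of the periodic slab.
    # With pressures in MPa and Lz in nm, gamma comes out in mN/m (1 MPa*nm = 1 mN/m).
    return 0.5 * lz * (np.mean(pzz) - 0.5 * (np.mean(pxx) + np.mean(pyy)))

# Synthetic stand-in for a sampled pressure-tensor time series:
rng = np.random.default_rng(1)
pxx = rng.normal(9.5, 0.4, 5000)
pyy = rng.normal(9.5, 0.4, 5000)
pzz = rng.normal(10.0, 0.4, 5000)
gamma = kirkwood_buff_ift(pxx, pyy, pzz, lz=12.0)  # ~3 mN/m for these fake data
```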
When the equilibrium density profiles were available, the interfacial tension (γ) was estimated as follows:53,54 γ = ∫ Σ_i Σ_j c_ij (dn_i/dz)(dn_j/dz) dz. 3 Results and Discussion According to the Gibbs adsorption equation, (∂γ/∂µ_i)_T = −Γ_i, where Γ_i is the relative adsorption of component i relative to the reference component j. The relative adsorption can be calculated from the component density distributions:74,75 Γ_i = ∫_{−∞}^{z_0} [n_i(z) − n_i^I] dz + ∫_{z_0}^{+∞} [n_i(z) − n_i^II] dz, where the position z_0 of the dividing surface is chosen such that the corresponding excess of the reference component j vanishes, I denotes the i-rich phase, and II represents the j-rich phase. The increase of CAs with pressure is consistent with our results from MD simulation (Fig. 3). Nevertheless, our values are higher than those from experiment, which is likely because of the low density of the silanol groups on the simulated silica surface.39 The adhesion tension (γ_SH − γ_SW) describes the contribution of the solid-fluid interactions to the contact angle according to Young's equation:76,77 γ_SH − γ_SW = γ_WH cos θ, where θ is the contact angle, γ_WH is the IFT between the H2O-rich and H2-rich phases, γ_SH is the IFT between the solid and the H2-rich phase, and γ_SW is the IFT between the solid and the H2O-rich phase. The adhesion tensions of the H2O+H2+silica system were calculated based on the IFTs and CAs, and are plotted in Fig. S5(a). They are similar to the values in the H2O+silica system given in Fig. S6. H2 also adsorbs on the silica surface inside the droplet (see Fig. 6(a)). However, the adsorption peak of H2 is roughly three orders of magnitude smaller than that of H2O, suggesting that the H2O-silica interactions are much stronger than the H2-silica interactions. At the interface between the H2-rich and silica phases, density peaks of both H2 and H2O can be observed, while the latter is much stronger than the former (see Fig. 6(b)). The adhesion tensions of the H2O+H2+kerogen system are displayed in Fig. S5(b). At low pressure, the adhesion tensions are in line with the values in the H2O+kerogen system given in Fig. S6. The adhesion tensions fall in the range of 25.1 to 52.7 mN/m. The adhesion tensions decrease with temperature, and the reduction with temperature is larger at relatively low pressures. Moreover, the adhesion tensions decrease with pressure, and the reduction with pressure is larger at relatively low temperatures. Here, the adsorption of H2O is much smaller in comparison to the corresponding data for the silica system. Furthermore, the effects of temperature and pressure on the density profiles in the H2O+H2+kerogen system are similar to those in the silica case mentioned above. An important difference is that the droplet tends to move away from the kerogen surface as temperature increases (see Fig. S8(c)). It is also important to note that high pressure significantly decreases the adsorption of H2O at the interface between the H2-rich and kerogen phases, while only a moderate pressure effect is observed for the silica system. This may explain the drop in adhesion tensions with pressure. The behaviour of CAs in the H2O+H2+kerogen system can be understood as follows. The decrease of CA with temperature can be explained mainly by the greater reduction of γ_WH in contrast to a relatively small reduction of the adhesion tension, especially at high pressures. At low temperatures, the change of adhesion tension due to a pressure increment is much larger than that of γ_WH. Therefore, the increase of CAs with pressure at low temperatures is mainly due to the fluid-kerogen interactions. However, at high temperatures, the change of adhesion tension due to a pressure increment is small, while γ_WH increases with pressure.
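A compact way to see how the relative adsorption and the adhesion tension are extracted from simulation output is sketched below. The density profiles are synthetic placeholders, and the dividing-surface construction follows the equimolar convention for the reference component described above.

```python
import numpy as np

def equimolar_surface(z, n_ref, n_ref_I, n_ref_II):
    # Position z0 at which the excess of the reference component vanishes,
    # i.e. integral(n_ref) matches a step profile n_ref_I (z<z0) / n_ref_II (z>z0).
    total = np.trapz(n_ref, z)
    return (total + n_ref_I * z[0] - n_ref_II * z[-1]) / (n_ref_I - n_ref_II)

def relative_adsorption(z, n_i, n_i_I, n_i_II, z0):
    # Gamma_i: excess of component i relative to the step profile at z0.
    step = np.where(z < z0, n_i_I, n_i_II)
    return np.trapz(n_i - step, z)

def adhesion_tension(gamma_wh, theta_deg):
    # Young's equation: gamma_SH - gamma_SW = gamma_WH * cos(theta).
    return gamma_wh * np.cos(np.radians(theta_deg))

# Synthetic tanh-like profiles for a water(reference)/H2 interface:
z = np.linspace(-3.0, 3.0, 601)
n_w = 27.5 * 0.5 * (1.0 - np.tanh(z))                        # water-rich for z < 0
n_h = 0.4 * 0.5 * (1.0 + np.tanh(z)) + 0.3 * np.exp(-z**2)   # H2 with an interfacial peak
z0 = equimolar_surface(z, n_w, n_w[0], n_w[-1])
gamma_h2 = relative_adsorption(z, n_h, n_h[0], n_h[-1], z0)  # > 0 indicates H2 enrichment
```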
Hence, the increase of CAs with pressure at high temperatures is mainly due to the fluid-fluid interactions. Furthermore, the capillary pressure can be calculated based on the standard Young-Laplace equation, P_c = 2γ_WH cos θ/r_c, where r_c is the radius of curvature at the interface.78 The capillary forces block the upward movement and escape of the gas stored in aquifers, since the solid surfaces are hydrophilic. For a given r_c, the behavior of P_c is similar to that of the adhesion tension described above (Fig. S10).
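For orientation, the magnitudes involved in this capillary sealing argument can be checked with a one-line Young-Laplace estimate; the numbers below are hypothetical and only illustrate the unit bookkeeping.

```python
import math

def capillary_pressure(gamma_wh, theta_deg, r_c):
    # Young-Laplace: Pc = 2 * gamma_WH * cos(theta) / r_c.
    # With gamma in mN/m and r_c in micrometres, Pc comes out in kPa.
    return 2.0 * gamma_wh * math.cos(math.radians(theta_deg)) / r_c

pc = capillary_pressure(70.0, 40.0, 1.0)  # ~107 kPa for gamma = 70 mN/m, theta = 40 deg, r_c = 1 um
```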
2022-12-29T06:42:26.737Z
2022-12-27T00:00:00.000
{ "year": 2022, "sha1": "2d7350229b066c5edacb4656435ee5490736a28e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2d7350229b066c5edacb4656435ee5490736a28e", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
261323396
pes2o/s2orc
v3-fos-license
Comparison of Bioactive Compounds and Antioxidant Activities in Differentially Pigmented Cerasus humilis Fruits Chinese dwarf cherry (Cerasus humilis) is a wild fruit tree and medicinal plant endemic to China. Its fruits are rich in various bioactive compounds, such as flavonoids and carotenoids, which contribute greatly to their high antioxidant capacity. In this study, the contents of bioactive substances (chlorophyll, carotenoids, ascorbic acid, anthocyanin, total flavonoids, and total phenols) and the antioxidant capacities (2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS+) radical scavenging abilities, and ferric-reducing antioxidant power (FRAP)) of differentially pigmented C. humilis fruits of four varieties were determined and compared. The results revealed that anthocyanin, total flavonoids and total phenols were the three main components responsible for the antioxidant activity of C. humilis fruits. 'Jinou No.1' fruits with dark red peel and red flesh had the highest contents of anthocyanin, total flavonoids, and total phenols, as well as the highest antioxidant capacities; 'Nongda No.5' fruits with yellow-green peel and yellow flesh had the highest contents of carotenoids and chlorophyll, while 'Nongda No.6' fruit had the highest ascorbic acid content. To further reveal the molecular mechanism underlying differences in the accumulation of carotenoids and flavonoids among differentially pigmented C. humilis fruits, the expression patterns of structural genes involved in the biosynthesis of the two compound classes were investigated. Correlation analysis results revealed that the content of carotenoids in C. humilis fruits was very significantly positively correlated with the expression of the ChCHYB, ChZEP, ChVDE, ChNSY, ChCCD1, ChCCD4, ChNCED1, and ChNCED5 genes (p < 0.01) and significantly negatively correlated with the expression of ChZDS (p < 0.05). The anthocyanin content was very significantly positively correlated with ChCHS, ChFLS, and ChUFGT expression (p < 0.01). The total flavonoid content was very significantly positively correlated with the expression of ChCHS, ChUFGT, and ChC4H (p < 0.01) and significantly positively correlated with ChFLS expression (p < 0.05). This study can provide a basis for understanding the differences in the accumulation of bioactive substances, and is helpful for clarifying the mechanisms underlying the accumulation of various carotenoids and flavonoids among differentially pigmented C. humilis fruits. Introduction Oxidative stress caused by free radical accumulation is very harmful to the human immune system [1]. Accumulated evidence has revealed that carotenoids (including α-carotene, β-carotene, α-cryptoxanthin, and β-cryptoxanthin), chlorophylls, ascorbic acid, total phenols, and total flavonoids (including flavones, isoflavones, flavanols and anthocyanin), as well as other bioactive substances, have strong antioxidant capacities [2-5]. Therefore, these bioactive substances are regarded as important sources of new green therapeutic natural compounds [6]. For example, carotenoids are natural pigments beneficial to the eyes and cardiovascular system [7], while phenols and flavonoids have been reported to possess a variety of health-promoting activities. Comparison of Antioxidant Capacities in Fruits of Four Different C. humilis Varieties
The FRAP, ABTS+, and DPPH free radical scavenging abilities of 'Jinou No.1' fruit were all significantly higher than those of the fruits of the other three varieties (p < 0.05). Correlation and Principal Component Analysis (PCA) of Bioactive Substance Contents and Antioxidant Capacities Correlation analysis of the bioactive substance contents and antioxidant capacities of fruits of the four different C. humilis varieties was performed (Table 2). The anthocyanin content was found to be very significantly positively correlated with the total flavonoid content, total phenol content, FRAP, and ABTS+ (p < 0.01). The total phenol content was very significantly positively correlated with DPPH (p < 0.01). There were significant or very significant correlations among other parameters as well. For example, very significant positive correlations were found among the contents of chlorophyll, chlorophyll a, chlorophyll b, and carotenoids (p < 0.01); the ascorbic acid content was very significantly negatively correlated with anthocyanin content, total flavonoid content, total phenol content, FRAP, ABTS+, and DPPH (p < 0.01); and ABTS+ was very significantly positively correlated with FRAP and DPPH (p < 0.01). Table 2. Correlation analysis results of the fruit parameters of the four different C. humilis varieties. Chl: chlorophyll; Chl a: chlorophyll a; Chl b: chlorophyll b; Car: carotenoids; ABA: abscisic acid; Ant: anthocyanin; TFC: total flavonoids; TPC: total phenols; AA: ascorbic acid; FRAP: ferric-reducing antioxidant power; ABTS+: 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) radical cation scavenging ability; DPPH: 2,2-diphenyl-1-picrylhydrazyl free radical scavenging ability. * and ** indicate significant correlation (p < 0.05) and very significant correlation (p < 0.01), respectively. Our PCA results revealed that the contribution rates of the first and second principal components (PC1 and PC2) were 65.6% and 29.0%, respectively (Figure 2), indicating that together they captured most of the information in the measured parameters. 'Jinou No.1', with the highest contents of anthocyanin, total flavonoids, and total phenols as well as the strongest antioxidant capacities, scored the highest in PC1, while 'Nongda No.5', with the highest carotenoid and chlorophyll contents, scored the highest in PC2.
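For readers wishing to reproduce this kind of analysis outside OriginPro, the correlation matrix and PCA scores can be obtained with a few lines of numpy; the matrix below holds purely hypothetical values standing in for the measured parameters.

```python
import numpy as np

# Rows: the four varieties; columns: hypothetical parameter values
# (Ant, TFC, TPC, Car, AA, FRAP, ABTS+, DPPH) for illustration only.
X = np.array([
    [9.1, 8.7, 8.9, 1.2, 2.1, 9.0, 8.8, 8.5],   # 'Jinou No.1'
    [1.0, 2.2, 2.5, 6.4, 6.0, 2.1, 2.4, 2.6],   # 'Nongda No.5'
    [2.3, 4.1, 4.0, 2.1, 7.2, 3.9, 4.2, 4.1],   # 'Nongda No.6'
    [3.1, 4.6, 4.4, 1.8, 6.5, 4.3, 4.5, 4.2],   # 'Nongda No.7'
])

r = np.corrcoef(X, rowvar=False)                   # Pearson correlations between parameters

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize before PCA
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                                     # PC scores of the four varieties
explained = s**2 / np.sum(s**2)                    # variance fraction per component
```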
Expression Analysis of Carotenoid Biosynthesis-Related Genes in Fruits of Four C. humilis Varieties To explore the mechanisms of the differentially accumulated carotenoids in fruits of different C. humilis varieties, quantitative real-time PCR (qRT-PCR) was used to compare the expression of ten genes related to carotenoid biosynthesis (ChPSY, ChPDS, ChZDS, ChCRTISO, ChLCYE, ChLCYB, ChCHYB, ChZEP, ChVDE, and ChNSY) and four genes related to carotenoid degradation (ChCCD1, ChCCD4, ChNCED1, and ChNCED5). Meanwhile, the ABA contents in C. humilis fruits were determined as well (Figure 3A-C). The results showed that the expression levels of ChPSY, ChPDS, ChCRTISO, ChLCYE, and ChLCYB in 'Jinou No.1' fruit were the highest. ChZDS was expressed the highest in 'Nongda No.7'. Except for ChPSY, ChLCYB, and ChNSY, the expression levels of the other genes were the lowest in 'Nongda No.6'. The expression levels of the ChCHYB, ChZEP, ChVDE, and ChNSY genes related to carotenoid biosynthesis were the highest in 'Nongda No.5', while the expression levels of the ChCCD1, ChCCD4, ChNCED1, and ChNCED5 genes related to carotenoid degradation were the highest in 'Nongda No.5' and the lowest in 'Nongda No.6' (Figure 3D). The correlations among the carotenoid and ABA contents and the expression levels of carotenoid metabolism-related genes in fruits of different C. humilis varieties were further analyzed (Figure 4A). The results show that the content of carotenoids was very significantly positively correlated with ChCHYB, ChZEP, ChVDE, ChNSY, ChCCD1, ChCCD4, ChNCED1, and ChNCED5 (p < 0.01), significantly negatively correlated with ChZDS (p < 0.05), and positively correlated with ABA content. ABA content was very significantly positively correlated with ChPDS and ChLCYE (p < 0.01) and significantly positively correlated with ChZDS, ChCRTISO, and ChNCED5 (p < 0.05). In addition, although the correlation was not significant, ABA content was positively correlated with the expression levels of ChCCD1, ChCCD4, and ChNCED1. PCA based on the contents of carotenoids and ABA and the expression of carotenoid metabolism-related genes shows that the four C. humilis varieties can be clearly separated; the biological replicates of each variety were closely clustered and located within the 95% confidence interval (Figure 4B). The contribution rates of PC1 and PC2 were 55.5% and 25.6%, respectively, and together the two principal components accounted for 81.1% of the variance. It is worth noting that 'Nongda No.5', with the highest carotenoid content, scored the highest in PC1, while 'Jinou No.1', with the highest ABA content and the highest expression of the ChPSY, ChPDS, ChCRTISO, ChLCYE, and ChLCYB genes, scored the highest in PC2. 'Nongda No.6', with the lowest ABA content, scored the lowest in both PC1 and PC2. Expression Analysis of Flavonoid Metabolism-Related Genes in Fruits of Four C. humilis Varieties In order to reveal the mechanism underlying the differences in flavonoid accumulation among fruits of the four C. humilis varieties, the expression of flavonoid metabolism-related genes was quantitatively verified (Figure 5).
The expression levels of anthocyanin and flavonoid biosynthesis-related genes, especially ChFLS and ChUFGT, were significantly higher in 'Jinou No.1' fruits than in the other three varieties. In fruits of 'Nongda No.5', all genes except ChPAL were found to have low levels of expression. The expression levels of ChF3H, ChDFR, and ChANS in 'Nongda No.6' and 'Nongda No.7' were much higher than those in the other two varieties. By analyzing the correlations among anthocyanin content, total flavonoid content, and the expression of synthesis-related genes (Figure 6A), it was found that the anthocyanin content in C. humilis fruits was very significantly positively correlated with the total flavonoid content and the expression levels of the ChCHS, ChFLS, and ChUFGT genes (p < 0.01). The total flavonoid content was very significantly positively correlated with the expression of ChCHS, ChUFGT, and ChC4H (p < 0.01) and significantly positively correlated with ChFLS expression (p < 0.05). PCA was performed based on the anthocyanin and total flavonoid contents and on the flavonoid metabolism-related structural gene expression levels (Figure 6B). The results show that 'Jinou No.1', with the highest contents of anthocyanin and total flavonoids and the highest expression levels of ChUFGT, ChFLS, and ChCHS, scored the highest in PC1, while 'Nongda No.7', with the highest expression levels of the ChC4H, ChCHI, ChF3H, ChDFR, and ChANS genes, scored the highest in PC2, and 'Nongda No.5', with the lowest anthocyanin and total flavonoid contents, scored the lowest in both PC1 and PC2. The Bioactive Substance Contents and Antioxidant Capacities of Differently Pigmented C.
humilis Fruits Vary Greatly In this study, significant differences in the bioactive substance contents and in the DPPH, ABTS+, and FRAP antioxidant capacities of four differentially pigmented C. humilis fruits were discovered. In onions, the contents of anthocyanin, total flavonoids and total phenols, and the antioxidant capacity of red onions have been found to be higher than those of yellow and white onions, indicating that darker colors are related to higher contents of anthocyanin, total flavonoids, and total phenols as well as to stronger antioxidant capacity [19]. Consistently, in this study we found that 'Jinou No.1' fruit had the highest anthocyanin, total flavonoid, and total phenol contents and the strongest antioxidant capacity. Interestingly, the ascorbic acid content in fruits of 'Nongda No.6', 'Nongda No.5', and 'Nongda No.7' was significantly higher than in 'Jinou No.1' fruits, indicating that these are more suitable for use as a natural source of ascorbic acid [20]. Additionally, our correlation analysis results reveal that the ascorbic acid content in C. humilis fruits is very significantly negatively correlated with anthocyanin content, total flavonoid content, total phenol content, FRAP, ABTS+, and DPPH (p < 0.01). Carotenoids and chlorophyll are important pigments, conferring yellow and green colors on fruits, respectively. They both have strong antioxidant and healthcare values [21,22]. In this study, it was found that the contents of carotenoids and chlorophyll in 'Nongda No.5' fruits were more than twice those of the other three varieties, indicating that this variety is rich in carotenoids and chlorophyll. Phenols are highly beneficial in terms of their health values [23-25]. C. humilis fruits are rich in total flavonoids and phenols [13]. Among the four varieties, the highest amounts of total flavonoids and phenols and the strongest antioxidant capacity were identified in 'Jinou No.1' fruits, suggesting that this variety might have great potential for use as a raw material in producing bioactive substances for clinical research.
Anthocyanin, Total Flavonoids, and Total Phenols Are the Three Main Components Affecting the Antioxidant Activity of C. humilis Fruits The antioxidant activity of fruits is mainly dependent on the accumulation of bioactive substances such as anthocyanin, total flavonoids, and total phenols [26,27]. In the fruits of most plants, the contents of total flavonoids and total phenols are reported to be positively correlated with the antioxidant capacity [28]. Anthocyanin has free radical scavenging activity and can reduce oxidative stress [29]. The total flavonoids, total phenols, and antioxidant capacities (DPPH, ABTS+, and FRAP) of thyme have been shown to be positively correlated [30]. The antioxidant capacity of pomegranate was shown to be significantly positively correlated with the contents of anthocyanin, total flavonoids and total phenols [31]. The total flavonoid and total phenol contents in grapes have been positively correlated with antioxidant capacity (ABTS+ and FRAP) [32]. In this study, the contents of anthocyanin, total flavonoids, and total phenols in the fruit of 'Jinou No.1' were found to be higher than in the other three varieties, and its antioxidant capacity was the highest. Moreover, we found that the contents of anthocyanin, total flavonoids, and total phenols in C. humilis fruits were positively correlated with the antioxidant capacities (DPPH, ABTS+, and FRAP), indicating that these three bioactive substances may synergistically regulate the antioxidant capacity of C. humilis fruits. The Accumulation of Carotenoids in Fruits of Different C. humilis Varieties Is Closely Related to the Expression of Carotenoid Metabolism-Related Genes Gene expression analysis revealed that the expression levels of the carotenoid biosynthesis-related genes (ChCHYB, ChZEP, ChVDE, and ChNSY) and the carotenoid degradation-related genes (ChCCD1, ChCCD4, ChNCED1, and ChNCED5) in 'Nongda No.5' fruits (with high carotenoid content) were significantly higher than in the other three varieties. The ABA content in fruits of 'Nongda No.6' was the lowest among the four C. humilis varieties. Consistently, except for ChPSY, ChLCYB, and ChNSY, the expression levels of the other carotenoid metabolism-related genes in 'Nongda No.6' were all the lowest. Correlation analysis showed that carotenoid content was very significantly positively correlated with the expression of ChCHYB, ChZEP, ChVDE, ChNSY, ChCCD1, ChCCD4, ChNCED1, and ChNCED5 (p < 0.01), significantly negatively correlated with ChZDS expression (p < 0.05), and positively correlated with ABA content. PSY is the first key rate-limiting enzyme in the carotenoid biosynthesis pathway [33]. Overexpression of PSY in maize callus has been found to significantly increase the accumulation of carotenoids (p < 0.05) [34].
In our study, we found that the carotenoid content in C. humilis fruits was positively correlated with the expression level of ChPSY. Moreover, the expression level of ChPSY in 'Nongda No.7' (with the lowest carotenoid content among the four C. humilis varieties) was found to be the lowest. ChCCD1 has been identified as playing a key role in the degradation of carotenoids [35]. In our study, we found that carotenoid content was significantly positively correlated with the expression levels of ChCCD1 and ChCCD4 (p < 0.05). Nine-cis-epoxycarotenoid dioxygenase (NCED) is a key enzyme connecting the carotenoid degradation and ABA biosynthesis pathways [36]. ABA content has consistently been found to be correlated with the expression level of NCED [37]. It has been reported that the expression level of IbNCED3 is positively correlated with the total content of carotenoids in the SS8 sweet potato variety [38]. In this study, the highest expression level of ChNCED and a high ABA content were consistently found in 'Nongda No.5' (with the highest carotenoid content), while the lowest expression level of ChNCED and the lowest ABA content were found in 'Nongda No.6' fruit, which has a low content of carotenoids. The Expression of Flavonoid Biosynthesis-Related Genes Such as ChCHS, ChUFGT, and ChFLS Is Very Significantly or Significantly Positively Correlated with Flavonoid Content in C. humilis Fruits Chalcone synthase (CHS) catalyzes the first step of flavonoid biosynthesis [39]. The expression of CHS has been shown to be very significantly positively correlated with anthocyanin and flavonoid contents [40]. In strawberry, an increase in CHS expression level and the accumulation of anthocyanin and flavonoids were found to occur simultaneously [41]. Flavonol synthase (FLS) is a key enzyme in the biosynthesis of flavonols in the flavonoid pathway [42,43], while UFGT is a key enzyme catalyzing the final step of anthocyanin biosynthesis. The anthocyanin content of myrtle berries was found to be strongly positively correlated with the expression level of UFGT, and the expression level of UFGT was the highest in dark blue fruits with high anthocyanin content [44]. In this study, the expression levels of ChCHS, ChUFGT, and ChFLS were found to be very significantly (p < 0.01) or significantly (p < 0.05) correlated with the contents of anthocyanin and total flavonoids in C. humilis fruits. Moreover, consistent with the finding of the highest anthocyanin and total flavonoid contents in 'Jinou No.1' fruits and the lowest contents in 'Nongda No.5' fruits, the expression levels of ChCHS and ChUFGT were the highest in 'Jinou No.1' fruits and the lowest in 'Nongda No.5' fruits. This indicates that these three genes are closely related to the biosynthesis of anthocyanin and flavonoids in C. humilis fruits. Plant Materials 'Jinou No.1', 'Nongda No.5', 'Nongda No.6', and 'Nongda No.7' mature fruits of relatively uniform size and color and with no mechanical or pest damage were harvested from the same C. humilis germplasm nursery located at Shanxi Agricultural University, then stored on ice and taken back to the laboratory. Color parameters (L*, a*, and b* values) of twenty fruits from each variety were measured using a CR8 colorimeter (3nh, Guangzhou, China) [45]. After removing the seeds, the fruits were cut into small pieces, frozen in liquid nitrogen, and stored in a refrigerator at −80 °C for further use.
Determination of Carotenoid, Chlorophyll, Anthocyanin, and Ascorbic Acid Contents Extraction of carotenoids and chlorophyll and determination of their contents were carried out according to the methods of Gao et al. [46] and Zhang et al. [38]. After grinding fruits into a fine powder in liquid nitrogen, 1 g of sample was added to 5 mL of acetone (containing 0.1% butylated hydroxytoluene) and ultrasonically extracted for 60 min. Then, the supernatant was collected by centrifugation at 10,000 rpm for 15 min. A spectrophotometer (UV-1800, Shanghai Meixi Instrument Co., Ltd., Shanghai, China) was used to measure the absorbance of the supernatant at 663 nm, 645 nm, and 450 nm. Carotenoid content was calculated using the following formula: content (mg/kg) = ABS (OD450) × extract volume (mL) × dilution factor/sample weight (kg)/2500 (the average absorbance of 1% carotenoids at the maximum absorption wavelength). Chlorophyll content was calculated using the following formula: content (mg/kg) = (20.21 × ABS (OD645) + 8.02 × ABS (OD663)) × extract volume (mL) × dilution factor/sample weight (kg)/1000. Anthocyanin in C. humilis fruits was extracted and determined according to the method of Zhuang et al. [47]. Briefly, 2.5 g of fruit peel was homogenized with acidified ethanol (containing 85 mL of 95% ethanol and 15 mL of 1.5 mol/L hydrochloric acid per 100 mL) and diluted to 25 mL with acidified ethanol. After being placed in the dark at room temperature for 24 h, the solution was centrifuged at 10,000 rpm for 15 min. The supernatant was collected and its absorbance measured at 535 nm using a spectrophotometer (UV-1800, Shanghai Meixi Instrument Co., Ltd.). Anthocyanin content was calculated using the following formula: content (OD mL/100 g FW) = ABS (OD535) × extract volume (mL) × dilution factor/sample weight (g)/extinction coefficient (98.2) × 100. The content of ascorbic acid was measured using the 2,6-dichloroindophenol titration method [48]. Determination of Total Flavonoids, Total Phenols, and Antioxidant Capacity After grinding fruits into a fine powder in liquid nitrogen, 2.5 g of fruit sample was added to 10 mL of 80% ethanol, mixed by vortexing, extracted by ultrasound at 40 kHz for 15 min, and centrifuged at 5000 rpm for 10 min at 4 °C. The supernatant was collected, and after two rounds of extraction the collected supernatants were pooled, diluted to 25 mL with 80% ethanol, and used to determine the contents of total flavonoids and total phenols along with the antioxidant capacity. The total flavonoid content and ABTS+ free radical scavenging ability of the fruits were determined by reference to the method of Fu et al. [49]. The total phenol content, DPPH free radical scavenging ability, and ferric-reducing antioxidant power (FRAP) of the fruits were determined according to the method described by Clarke et al. [50]. Determination of Abscisic Acid (ABA) Content The ABA contents of the four varieties of C. humilis fruits were determined using a plant abscisic acid enzyme-linked immunosorbent assay kit (Jiankang Biological, Shanghai, China). Gene Expression Analysis The flavonoid and carotenoid metabolism-related protein sequences of Arabidopsis thaliana and Citrus sinensis were downloaded from the A. thaliana genome website (https://www.arabidopsis.org/, accessed on 5 March 2023) and Phytozome13 (https://phytozome-next.jgi.doe.gov/, accessed on 5 March 2023), respectively. Using these as queries, BLASTP searches against the C. humilis protein data were performed using TBtools to identify candidate flavonoid biosynthesis-related (ChPAL, ChC4H, ChCHS, ChCHI, ChFLS, ChF3H, ChDFR, ChANS, and ChUFGT) and carotenoid metabolism-related (ChPSY, ChPDS, ChZDS, ChCRTISO, ChLCYE, ChLCYB, ChCHYB, ChZEP, ChVDE, and ChNSY) proteins of C. humilis. According to their coding sequences, primers were designed using Primer 3.0; the primers of the ChCCD1, ChCCD4, ChNCED1, and ChNCED5 genes were synthesized according to Cheng et al. [35] (Table S1). A Trizol RNA extraction kit (TaKaRa, Dalian, China) was used to isolate total RNA from the mature fruits of the four C. humilis cultivars. Then, high-quality RNA was used for cDNA synthesis with a PrimeScript TM RT reagent kit with gDNA Eraser (Perfect Real Time) (TaKaRa, Dalian, China). qRT-PCR reactions were performed on a QuantStudio 3 real-time quantitative fluorescent PCR instrument (Applied Biosystems, Shanghai, China) using a TB Green® Premix Ex Taq TM II kit (Tli RNaseH Plus; TaKaRa, Dalian, China). With ChActin as the internal reference gene, the relative expression levels of the selected C. humilis genes in the fruits of the four cultivars were calculated using the 2^−ΔΔCt method [35]. Three biological and three technical replicates were used during qRT-PCR analysis of the selected genes.
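The relative expression calculation itself is simple enough to restate in code. The sketch below implements the 2^−ΔΔCt formula with hypothetical Ct values, imagining a target gene such as ChUFGT against the ChActin reference and calibrated to a second variety.

```python
import numpy as np

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(calibrator);
    # relative expression = 2^(-ddCt).
    ddct = (np.asarray(ct_target, float) - np.asarray(ct_ref, float)) \
           - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for three biological replicates:
fold = rel_expression([22.1, 22.3, 21.9], [18.0, 18.1, 17.9], 25.0, 18.0)
print(fold.mean(), fold.std(ddof=1))  # ~7.5-fold higher than the calibrator
```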
Data Analysis All data are presented as the mean ± standard deviation of at least three biological replicates. OriginPro 9.0 was used for Pearson correlation analysis and principal component analysis (PCA). One-way analysis of variance (ANOVA) in SPSS 25.0 was used for statistical analysis of the data at the p < 0.05 and/or p < 0.01 levels. Figures were created using GraphPad Prism 8.0. Conclusions In this study, we determined and compared the bioactive substance contents and antioxidant capacities of differentially pigmented C. humilis fruits from four different varieties and explored the molecular mechanisms underlying the differences in the accumulation of carotenoids and flavonoids among them. Our results show that the bioactive substance contents and antioxidant capacities in fruits of the four C. humilis varieties varied widely. 'Jinou No.1' fruits had the highest antioxidant capacity, which might be due to their having the highest contents of anthocyanin, total flavonoids, and total phenols; 'Nongda No.5' fruits had the highest carotenoid and chlorophyll contents; and 'Nongda No.6' fruits had the highest content of ascorbic acid. Moreover, the carotenoid contents in C. humilis fruits were very significantly positively correlated with the expression levels of ChCHYB, ChZEP, ChVDE, ChNSY, and several other genes, and the total flavonoid and anthocyanin contents were very significantly or significantly positively correlated with the expression levels of ChCHS, ChUFGT, and ChFLS. This study can provide a basis for the healthcare-oriented application of differentially pigmented C. humilis fruits, and can be helpful for breeding C. humilis varieties with higher contents of flavonoids or carotenoids.
2023-08-30T15:02:25.777Z
2023-08-27T00:00:00.000
{ "year": 2023, "sha1": "139f78225f1c7bf81508e2a0b20fecff96e9515b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/17/6272/pdf?version=1693203879", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "42fabe6f2710cbe6cbf64fd444ab127b67207a74", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
12986910
pes2o/s2orc
v3-fos-license
Bias in research By writing scientific articles we communicate science among colleagues and peers. By doing this, it is our responsibility to adhere to some basic principles like transparency and accuracy. Authors, journal editors and reviewers need to be concerned about the quality of the work submitted for publication and ensure that only studies which have been designed, conducted and reported in a transparent way, honestly and without any deviation from the truth get to be published. Any such trend or deviation from the truth in data collection, analysis, interpretation and publication is called bias. Bias in research can occur either intentionally or unintentionally. Bias causes false conclusions and is potentially misleading. Therefore, it is immoral and unethical to conduct biased research. Every scientist should thus be aware of all potential sources of bias and undertake all possible actions to reduce or minimize the deviation from the truth. This article describes some basic issues related to bias in research. Introduction Scientific papers are tools for communicating science between colleagues and peers. Every research study needs to be designed, conducted and reported in a transparent way, honestly and without any deviation from the truth. Research which is not compliant with those basic principles is misleading. Such studies create distorted impressions and false conclusions and thus can cause wrong medical decisions, harm to the patient as well as substantial financial losses. This article provides insight into the ways of recognizing sources of bias and avoiding bias in research. Definition of bias Bias is any trend or deviation from the truth in data collection, data analysis, interpretation and publication which can cause false conclusions. Bias can occur either intentionally or unintentionally (1). The intention to introduce bias into someone's research is immoral. Nevertheless, considering the possible consequences of biased research, it is almost equally irresponsible to conduct and publish biased research unintentionally. It is worth pointing out that every study has its confounding variables and limitations. The confounding effect cannot be completely avoided. Every scientist should therefore be aware of all potential sources of bias and undertake all possible actions to reduce and minimize the deviation from the truth. If deviation is still present, authors should confess it in their articles by declaring the known limitations of their work. It is also the responsibility of editors and reviewers to detect any potential bias. If such bias exists, it is up to the editor to decide whether the bias has an important effect on the study conclusions. If that is the case, such articles need to be rejected for publication, because their conclusions are not valid. Bias in data collection A population consists of all individuals with a characteristic of interest. Since studying a population is quite often impossible due to limited time and money, we usually study a phenomenon of interest in a representative sample. By doing this, we hope that what we have learned from a sample can be generalized to the entire population (2). To be able to do so, a sample needs to be representative of the population. If this is not the case, the conclusions will not be generalizable, i.e. the study will not have external validity. Sampling is thus a crucial step in every research project.
While collecting data for research, there are numerous ways by which researchers can introduce bias into the study. If, for example, during patient recruitment, some patients are less or more likely to enter the study than others, such a sample would not be representative of the population in which this research is done. In that case, those subjects who are less likely to enter the study will be under-represented and those who are more likely to enter the study will be over-represented relative to others in the general population to which the conclusions of the study are to be applied. This is what we call selection bias. To ensure that a sample is representative of a population, sampling should be random, i.e. every subject needs to have an equal probability of being included in the study. It should be noted that sampling bias can also occur if the sample is too small to represent the target population (3). For example, if the aim of the study is to assess the average hsCRP (high-sensitivity C-reactive protein) concentration in the healthy population in Croatia, the way to go would be to recruit healthy individuals from the general population during their regular annual health check-up. On the other hand, a biased study would be one which recruits only volunteer blood donors, because healthy blood donors are usually individuals who feel healthy and who are not suffering from any condition or illness which might cause changes in hsCRP concentration. By recruiting only healthy blood donors we might conclude that hsCRP is much lower than it really is. This is a kind of sampling bias, which we call volunteer bias. Another example of volunteer bias occurs when inviting colleagues from a laboratory or clinical department to participate in a study on some new marker for anemia. It is very likely that such a study would preferentially include those participants who suspect they might be anemic and are curious to learn it from this new test. This way, anemic individuals might be over-represented. The research would then be biased and would not allow generalization of the conclusions to the rest of the population. Generally speaking, whenever cross-sectional or case-control studies are done exclusively in hospital settings, there is a good chance that such a study will be biased. This is called admission bias. Bias exists because the population studied does not reflect the general population. Another example of sampling bias is the so-called survivor bias, which usually occurs in cross-sectional studies. If a study is aimed at assessing the association of altered KLK6 (human kallikrein-6) expression with the 10-year incidence of Alzheimer's disease, subjects who died before the study end point might be missed from the study. Misclassification bias is a kind of sampling bias which occurs when a disease of interest is poorly defined, when there is no gold standard for diagnosis of the disease, or when a disease might not be easily detectable. This way some subjects are falsely classified as cases or controls whereas they should have been in another group. Let us say that a researcher wants to study the accuracy of a new test for the early detection of prostate cancer in asymptomatic men. Due to the absence of a reliable test for early prostate cancer detection, there is a chance that some early prostate cancer cases would go misclassified as disease-free, causing under- or over-estimation of the accuracy of this new marker.
As a general rule, a research question needs to be considered with much attention, and all efforts should be made to ensure that the sample is matched as closely to the population as possible. Bias in data analysis A researcher can introduce bias in data analysis by analyzing data in a way which gives preference to conclusions in favor of the research hypothesis, for example by:
• reporting non-existing data from experiments which were never done (data fabrication);
• eliminating data which do not support the hypothesis (outliers, or even whole subgroups);
• using inappropriate statistical tests;
• performing multiple testing ("fishing for P") by pair-wise comparisons (4), testing multiple endpoints, and performing secondary or subgroup analyses which were not part of the original plan, in order "to find" a statistically significant difference regardless of the hypothesis.
For example, if the study aim is to show that one biomarker is associated with another in a group of patients, and this association does not prove significant in the total cohort, researchers may start "torturing the data" by trying to divide their data into various subgroups until the association becomes statistically significant. If this sub-classification of the study population was not part of the original research hypothesis, such behavior is considered data manipulation and is neither acceptable nor ethical. Such studies quite often provide meaningless conclusions, such as:
• CRP was statistically significant in a subgroup of women under 37 years with cholesterol concentration > 6.2 mmol/L;
• lactate concentration was negatively associated with albumin concentration in a subgroup of male patients with a body mass index in the lowest quartile and a total leukocyte count below 4.00 × 10^9/L.
Besides being biased, invalid and illogical, those conclusions are also useless, since they cannot be generalized to the entire population. There is a very often quoted saying (attributed to Ronald Coase, but unpublished to the best of my knowledge) which says: "If you torture the data long enough, it will confess to anything". This actually means that there is a good chance that statistical significance will be reached simply by increasing the number of hypotheses tested in the work. The question is then: is this significant difference real, or did it occur by pure chance? Actually, it is well known that if 20 tests are performed on the same data set, at least one Type 1 error (α) is to be expected. Therefore, the number of hypotheses to be tested in a certain study needs to be determined in advance. If multiple hypotheses are tested, a correction for multiple testing should be applied, or the study should be declared exploratory.
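The arithmetic behind this warning is easy to verify. A minimal sketch (values chosen only for illustration) computes the probability of at least one false positive among m independent tests, and the effect of a Bonferroni correction.

```python
def family_wise_error(alpha, m):
    # P(at least one Type 1 error among m independent tests at level alpha)
    return 1.0 - (1.0 - alpha) ** m

print(family_wise_error(0.05, 20))        # ~0.64: a spurious "hit" is more likely than not
print(family_wise_error(0.05 / 20, 20))   # ~0.049 with a Bonferroni-corrected threshold
```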
Bias in data interpretation When interpreting results, one needs to make sure that proper statistical tests were used, that the results are presented correctly, and that data are interpreted only if there was statistical significance of the observed relationship (5). Otherwise, there may be some bias in the research. However, wishful thinking is not rare in scientific research. Some researchers tend to believe so much in their original hypotheses that they tend to neglect the original findings and interpret them in favor of their beliefs. Examples are:
• discussing observed differences and associations even if they are not statistically significant (the often used expression is "borderline significance");
• discussing differences which are statistically significant but are not clinically meaningful;
• drawing conclusions about causality, even if the study was not designed as an experiment;
• drawing conclusions about values outside the range of observed data (extrapolation);
• overgeneralization of the study conclusions to the entire general population, even if the study was confined to a population subset;
• Type I errors (the expected effect is found significant, when actually there is none) and Type II errors (the expected effect is not found significant, when it is actually present) (6).
Even if this is done as an honest error or due to negligence, it is still considered serious misconduct. Publication bias Unfortunately, scientific journals are much more likely to accept for publication a study which reports positive findings than a study with negative findings. Such behavior creates a false impression in the literature and may cause long-term consequences for the entire scientific community. Also, if negative results did not face so many difficulties getting published, other scientists would not unnecessarily waste their time and financial resources by re-running the same experiments. Journal editors are the most responsible for this phenomenon. Ideally, a study should have an equal opportunity to be published regardless of the nature of its findings, if designed in a proper way, with valid scientific assumptions, well-conducted experiments and adequate data analysis, presentation and conclusions. However, in reality, this is not the case. To enable publication of studies reporting negative findings, several journals have already been launched, such as the Journal of Pharmaceutical Negative Results, Journal of Negative Results in Biomedicine, Journal of Interesting Negative Results and some others. The aim of such journals is to counterbalance the ever-increasing pressure in the scientific literature to publish only positive results. It is our policy at Biochemia Medica to give equal consideration to submitted articles, regardless of the nature of their findings. One sort of publication bias is the so-called funding bias, which occurs due to the prevailing number of studies funded by the same company, related to the same scientific question and supporting the interests of the sponsoring company. It is absolutely acceptable to receive funding from a company to perform research, as long as the study is run independently and not influenced in any way by the sponsoring company, and as long as the funding source is declared as a potential conflict of interest to the journal editors, reviewers and readers. It is the policy of our Journal to demand such a declaration from the authors during submission and to publish this declaration in the published article (7). By this we believe that the scientific community is given an opportunity to judge the presence of any potential bias in the published work. Conclusion There are many potential sources of bias in research. Bias in research can cause distorted results and wrong conclusions. Such studies can lead to unnecessary costs, wrong clinical practice and they can eventually cause some kind of harm to the patient.
It is therefore the responsibility of all stakeholders involved in scientific publishing to ensure that only valid and unbiased research, conducted in a highly professional and competent manner, is published (8).
2017-06-20T15:44:44.206Z
2014-08-05T00:00:00.000
{ "year": 2013, "sha1": "9be491f597afa1712b33c4773ea825dae298d35c", "oa_license": "CCBYNC", "oa_url": "http://www.biochemia-medica.com/assets/images/upload/xml_tif/Simundic_AM_-Bias_in_research.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a1c6fdc51c4394145df9070ed87bf2b1fb960558", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234037791
pes2o/s2orc
v3-fos-license
The Application of GIS Technologies and Remote Sensing Data to Determine the Morphometric Features of the River Basin: The Case of the Upper Part of the Charysh River The paper focuses on the possibilities of using geographic information system technologies (GIS technologies) and Earth remote sensing data to determine the morphometric parameters of a river basin, using the example of the upper part of the Charysh River (Russia). The authors conclude that a river basin is a holistic natural complex in which all physical parameters are interdependent and have specific features for each basin. Geographic information methods are used to study hydrological objects and to model drainage basins as natural compound complexes. In combination with GIS technologies, remote sensing methods are convenient for deciphering and assessing the hydrological situation of territories. For the upper part of the Charysh River basin, a complex of morphometric calculations was performed. Based on a DEM and satellite images, thematic layers for GIS were built and designed as maps. Introduction Nowadays, GIS technologies are attractive due to the speed and accuracy of cartometric work and calculations. They can exclude random measurement errors and, where possible, present the results in a visual and easily perceived cartographic form [5]. Geoinformation methods and methods for remote sensing data analysis are widely used in landscape, resource, hydrological, geological, and geomorphological research. Their use is convenient for identifying the relationships between the components of a territory, building maps, and conducting visual and quantitative analysis of specific indicators (e.g., river basins). The river basin is a most important natural object, which largely determines human life. It can act as the central territorial unit in the zoning of territories, the assessment of erosion processes, and environmental studies. The use of geoinformation and remote sensing methods for studying the hydrological features of river systems makes it possible to fully reveal and characterize the basin as a natural geosystem with complex internal connections. Knowledge of the morphometric features of the river basin makes it possible to use the resources available within it competently. This knowledge should be used to predict and overcome situations unfavorable for people and the economy, and to ensure the rational use of natural benefits and their improvement. Study Area The Charysh, a left tributary of the Ob River, flows through the territory of the Altai Republic and the Altai Krai. The river rises on the slopes of the Korgon Range at an altitude of 1,800 meters and flows into the Ob near the village of Ust-Charyshskaya Pristan. The length of the river is 547 km, and the area of its drainage basin is 22,200 sq. km [3]. The upper part of the catchment (up to 60% of the area) is mountainous. From the north, the river basin is bounded by the Baschelak ridge, from the east by the Terekta ridge (highest point 2,927 m), and from the south by the Korgonsky, Tigiretsky (highest point 2,007 m), and Kolyvansky (highest point Mount Sinyukha, 1,206 m) ridges, with average heights of 1,800-2,500 m. Boundaries in flat terrain are drawn along the hills between the tributaries of the Charysh, the basin of the Alei River, and along the Kolyvan Uval. The morphometric analysis was carried out for the upper part of the Charysh.
Materials and Methods The use of digital elevation models (DEM) and satellite data for their construction brings the morphometric description of the surface to a qualitatively new level. DEM, as a source of morphometric descriptions, have been of interest since their inception. In recent decades, techniques for versatile morphometric analysis of territories based on DEM data have been developed, and the labor intensity of geomorphological mapping has therefore decreased. The experimental base, the methodological and theoretical potential of modeling, and the tools for modeling processes and phenomena dependent on the relief have been significantly enriched. The accuracy of DEM constructed from the remote sensing data used here is quite sufficient for performing common operations of morphometric analysis in the scale series typical for the study of common geomorphological objects, e.g., river basins of different orders [7]. The geoinformation method allows for calculating various features of river basins based on a DEM. DEM such as the Shuttle Radar Topography Mission (SRTM), with a spatial resolution of 90 m, and the Advanced Land Observing Satellite (ALOS), with a spatial resolution of 30 m, were used as input data. The calculations were performed in ArcGIS (ESRI Inc., USA) using the Spatial Analyst toolbox. The obtained results (morphometric maps) have a raster data format, whose construction method and interpretation have their own features. We obtained the following morphometric features of the basin:
- Slope - the ratio of the river's fall in any of its sections to this section's length. In ArcGIS, the slope is the rate of change in elevation for each cell of the digital elevation model; it is the first derivative of the DEM.
- Watershed - the area from which surface and groundwater runoff is directed towards a natural watercourse or reservoir [2]. The boundaries between watersheds are called flow demarcation lines. The estuary (outlet) is the lowest point along the watershed boundary.
- Stream ordering - a method of assigning a numerical order to links in a stream network, which classifies streams based on the number of their tributaries.
The use of the above tools allows one to obtain raster data for the entire basin. For the convenience of calculations (e.g., determining areas or lengths), it is necessary to digitize the received data, i.e., translate it into a vector data format. This was done using the "Raster to Polygon" and "Raster to Polyline" tools from the Conversion toolset.
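Outside ArcGIS, the same first-derivative logic can be reproduced on a DEM array with a few lines of numpy. This is a simplified central-difference sketch (Spatial Analyst uses the Horn 3x3 kernel, so cell-level values will differ slightly), and the row orientation of the raster is an assumption to be adjusted to the data.

```python
import numpy as np

def slope_aspect(dem, cell_size):
    # Slope (degrees) and downslope aspect (degrees clockwise from north)
    # from a DEM given as a 2D array; rows are assumed to run north -> south.
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # axis 0 ~ y, axis 1 ~ x
    dz_dy = -dz_dy                               # flip so +y points north
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Azimuth of the steepest-descent direction: atan2(east, north) of -grad(z).
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

# Tiny synthetic DEM (metres) with a 30 m cell, mimicking the ALOS resolution:
dem = np.array([[1200, 1210, 1220],
                [1190, 1200, 1210],
                [1180, 1190, 1200]], float)
slope, aspect = slope_aspect(dem, 30.0)  # this tilted plane drains to the southwest
```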
Results The theoretical prerequisite for modeling geospace within the boundaries of a river basin is its allocation as an independent unit of geographic space [7,8]. According to F. N. Milkov [4], the riverbed and the adjacent territory from which the river collects surface and underground runoff form a complex natural system in the landscape plan - a paragenetic basin system. The main feature of this system is the orderliness of its constituent elements. The water flow (moving from the upper reaches to the mouth of the river) and the solid flow (moving first from the watersheds to the river valley and then toward the mouth), together with the channel flow, form a single dynamic system of the river basin in its longitudinal and transverse planes. The basin model consists of drainage basins of different orders; the components of a drainage basin are its relief and the configuration of the network of runoff lines. The relief is the main factor of runoff, because its features determine the surface runoff from the territory [6]. When analyzing the relief of the upper part of the Charysh River basin, the basin's morphometric parameters were determined. The DEM data allowed us to construct the hypsographic curve of the upper part of the Charysh River basin (figure 1). The average height of the basin is 1,371 m; the standard deviation is 253 m. A morphometric relief model was constructed to describe the basin's relief, with isolines drawn at 250 m intervals to analyze the quantitative features of the relief. The hypsometric map reflects the elevation levels of the surface. The elevation of the territory gradually increases from 959 to 2,460 m above sea level, and the largest percentage of the area is occupied by territories in the range of heights from 1,100 to 1,500 m above sea level. The hypsometric map, compiled in GIS, is represented by raster and vector layers with attributive information for each object contained in it. The raster layer contains the DEM with layered elevation coloring and shaded relief; contours and elevation marks form the vector layers. The "contour" layer contains the following attributive information: object code (according to the nomenclature of symbols of the state geographic information center [SGIC]), object name (main, thickened, or intermediate contour), and contour value (absolute height above sea level). The vector layer "elevation mark" contains: object code (according to the SGIC nomenclature), object name (height mark, peak, etc.), the object's proper name (name of a peak, mountain, hill, or rise), and height value (absolute height above sea level). The map of slope angles of surfaces is compiled in the form of raster and vector layers with attributive information. The raster layer contains the surface inclination angles with pixel-by-pixel coloring of their values; the vector layer is a digitized version of this raster layer and stores the slope value (in degrees) and polygon area (in sq. km). Sub-horizontal slopes (surface slope 0°-1°) are valley bottoms and watershed surfaces; their area is 8.6%. The steepness of the slopes increases with elevation: within the Korgon Range (in the western part of the territory) there are very steep slopes of up to 49° (0.2% of the area). The exposure map is likewise compiled as raster and vector layers with attributive information and shows tilted surfaces facing a particular cardinal direction. Depending on the exposure, all DEM cells were classified into eight cardinal directions and one class, "plane," with a slope value of 0° (a sketch of this classification is given below). This makes it possible, for example, to identify northern slopes suitable for ski tracks and southern slopes, where active spring snowmelt and the accompanying erosion can be observed. Sub-horizontal surfaces ("plane") occupy 1.3 sq. km (0.1% of the area). The smallest slope area has northern exposure, 165.1 sq. km (9.5% of the area); the largest has northeastern exposure, 265.6 sq. km (15.3% of the area). The slopes of the other exposures are present in relatively equal proportions (figure 2).
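A minimal sketch of the eight-direction exposure classification just described, written in Python for illustration (the paper's maps were produced in ArcGIS); the assumption that grid rows run north to south, the flatness tolerance, and all names are ours.

import numpy as np

CARDINAL = np.array(["N", "NE", "E", "SE", "S", "SW", "W", "NW"])

def exposure_classes(dem: np.ndarray, cell_size: float, flat_tol: float = 1e-6):
    """Classify each DEM cell into eight cardinal exposures plus 'plane'.
    Assumes row index grows southward and column index grows eastward."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    flat = np.hypot(dz_dx, dz_dy) < flat_tol       # slope of ~0 degrees -> 'plane'
    # Azimuth of the downslope direction, measured clockwise from north.
    az = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0
    idx = ((az + 22.5) // 45).astype(int) % 8      # 45-degree sectors around N, NE, ...
    classes = CARDINAL[idx].astype(object)
    classes[flat] = "plane"
    return classes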
The flow direction map is a delineation of slopes whose surface flow runs in a single cardinal direction. We identified eight flow directions. The eastern slopes occupy 164 sq. km (9.6%), southeastern - 157 sq. km (9.2%), southern - 322 sq. km (18.8%), southwestern - 192 sq. km (11.2%), western - 202 sq. km (11.8%), northwestern - 157 sq. km (9.2%), northern - 322 sq. km (18.8%), and northeastern - 195 sq. km (11.4%). The predominance of the southern and northern directions of runoff and slopes (18.8% each) indicates the sublatitudinal orientation of the mountain ranges that form the territory's relief and determine its runoff. The areas of the slopes in the southwestern, western, and northeastern directions are slightly larger than those in the southeastern and northwestern directions. The flow direction map is presented in the form of raster and vector layers with attributive information. The raster layer contains the flow directions with pixel-by-pixel coloring of their values. The vector layer is a digitized version of the raster layer containing the following attributive information: azimuth (in degrees), cardinal direction (in eight cardinal directions: N, NE, E, SE, S, SW, W, NW, plus "plane"), and polygon area. The stream order map contains information on the development of the valley-line system. When the map is generated automatically in ArcGIS, watercourses without tributaries are assigned the first order; when two watercourses of the first order merge, a watercourse of the second order is formed, and so on. This "ascending" stream ordering scheme is calculated in ArcGIS using the Strahler method (a minimal implementation is sketched below). The stream order map (figure 3) is presented in the form of raster and vector layers with attributive information. The raster layer contains the stream orders with pixel-by-pixel coloring of their values; the vector layer is a digitized version of this raster layer containing the stream order and stream length (in km).
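For illustration, the Strahler ordering rule just described can be written in a few lines of Python; the toy network, the link names, and the dictionary representation are hypothetical stand-ins for the stream links that ArcGIS derives from the DEM.

def strahler(link: str, tributaries: dict) -> int:
    """Strahler order: a headwater link has order 1; when two links of
    equal order w merge, the downstream link gets w + 1, otherwise it
    keeps the maximum order of its inflows."""
    orders = [strahler(t, tributaries) for t in tributaries.get(link, [])]
    if not orders:
        return 1
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

# Toy network: the outlet is fed by links 'a' and 'b'; 'a' is itself a
# confluence of two headwater links.
net = {"outlet": ["a", "b"], "a": ["a1", "a2"]}
print({link: strahler(link, net) for link in ("a1", "a2", "b", "a", "outlet")})
# -> {'a1': 1, 'a2': 1, 'b': 1, 'a': 2, 'outlet': 2}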
Discussion Digital elevation models make it possible, with varying degrees of detail (depending on the selected spatial resolution of the DEM), to establish a number of parameters and features of a river basin, which in turn form the basis for spatial analysis and for identifying the hydrological and geomorphological processes occurring on its territory (including such dangerous processes as erosion, slope processes, and avalanche formation). These parameters also help to characterize the local climate. Thus, the speed of the river flow depends on the slope; the slope also determines the intensity of erosion, the features of surface runoff, and the amount of solar energy received by the territory (which shapes its microclimatic features). The exposure characterizes the orientation of the river basin with respect to incoming sunlight. It determines insolation, the amount of radiation received by the earth's surface. The exposure is an essential factor of the river basin's local climate (microclimate), as it determines the position of the slopes in relation to the prevailing winds (windward and leeward slopes) and in relation to sources of moisture (large hydrological objects). Areas with negative plan curvature correspond to concave areas (accumulation areas such as valley bottoms). Areas with positive plan curvature characterize convex areas (valley sides, ridges, and ledges marked by material drift). The greater the curvature (regardless of sign), the more concave or convex the surface, and vice versa. The practical convenience of determining the total curvature is that it equally characterizes both mechanisms of accumulation. The slope of the surface characterizes the relative intensity of material drift, and the exposure characterizes its direction. Thus, the vertical curvature determines the patterns of erosion and accumulation, while the horizontal curvature determines the spatial heterogeneity of the runoff. Considering both simultaneously helps to better understand the patterns of redistribution of material over the surface in liquid or solid form [1]. The "Total Runoff" tool allows one to identify all channels - permanent and temporary streams, avalanche trays, mudflow channels, etc. The determination of the order of watercourses not only establishes the hierarchical structure of runoff in the territory but also indirectly indicates the water content of rivers; some features of watercourses can be deduced only by considering their order. Conclusion A river basin is a holistic natural complex in which all physical parameters are interdependent and have specific features for each basin. Any basin has morphometric indicators (area, length, width, slope, exposure, relative height of the catchment area) and hydrological features (direction of flow, order of streams, and total flow). Geographic information methods are used to study hydrological objects and to model drainage basins as compound natural complexes. In combination with GIS technologies, remote sensing methods are convenient for interpreting and assessing the hydrological situation of territories. For the upper part of the Charysh River basin, a set of morphometric calculations was performed. Based on the DEM and satellite images, thematic layers for GIS were built (hypsometric, surface inclination angles, exposure, flow direction and order of watercourses, and a hydrological map of the territory), and the thematic layers were designed as maps. Morphometric information is an essential resource for: -Monitoring, modeling, and subsequently overcoming emergencies unfavorable for humans and the economy, in particular hydrological emergencies and landslides; -Rational land use and the development of agriculture; -The design of transport routes and engineering structures; -The development of a network of specially protected natural areas and recreational facilities.
2021-05-10T00:03:54.301Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "543be8d812146cdfc93582f6dd250ab2902729f2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/670/1/012061", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ba45ca438e4d686f5266c8fa6dc6f129a59bff10", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Geology" ] }
37031117
pes2o/s2orc
v3-fos-license
Oxidative enzymes activity in sugarcane juice as a function of the planting system Received 10/4/2012 Accepted 30/11/2012 (005615) 1 Centro de Estudos Avançados em Bioenergia e Tecnologia Sucroalcooleira, Universidade do Oeste Paulista – UNOESTE, Rod. Raposo Tavares, Km 572, CEP 19067-175, Presidente Prudente, SP, Brasil, e-mail: tmarques@uol.com.br 2 Departamento de Engenharia de Biossistemas, Instituto Nacional de Ciência e Tecnologia em Engenharia da Irrigação, Escola Superior de Agricultura “Luiz de Queiroz” – ESALQ, Universidade de São Paulo – USP, Av. Pádua Dias, 11, CP 9, CEP 13418-900, Piracicaba, SP, Brasil *Corresponding author Introduction In the first decade of the 21st century, sugarcane production and cultivated areas grew as a result of the demand for renewable fuels, such as ethanol from sugarcane, and the international market demand for sugar, facts that contributed to the price increase of this commodity (SANTOS; BORÉM; CALDAS, 2011). The increases in production led to the diversion of land used for cattle ranching (a low-productivity activity) to the production of sugarcane. Brazil is the world's largest producer of sugarcane, with 8.43 million hectares and an estimated production of 588.9 million tons in 2012, which was 5.64% lower than that of the previous harvest (2009/2010; 623.9 million tons). Brazil's Center-South region is responsible for 88.18% of the country's harvest, and production in the State of São Paulo was estimated at 320.6 million tons over an area of 4.4 million hectares (COMPANHIA..., 2011). Physical and chemical treatments are necessary in the manufacturing process of granulated sugar from sugarcane in order to clarify the juice and produce lighter sugars, which have greater market value. The dark pigments in plants can be of non-enzymatic or enzymatic origin; the latter results from the activities of oxidases present in the industrial process (REIN, 2007).
Abstract In Brazil, the largest producer of sugarcane in the world, the industrial process transforms this crop into ethanol and/or granulated sugar. Some cultivars exhibit enzymatic browning in the extracted sugarcane juice at levels harmful to the manufacturing process of white granulated sugar. The objective of this study was to assess the effect of sugarcane straw used as soil coverage, the use of different planting systems, and treatments with hydrogel polymer on enzymatic activity. The cultivar RB 867515 was sampled for 8 months; each sample was obtained by cutting the upper portion of the stalk at the internode, which was taken to the laboratory for determination of the enzymatic activity of polyphenoloxidase (PPO) and peroxidase (POD). The soil coverage with different forms of straw, as well as the planting systems, did not change the enzymatic activity of PPO and POD. The PPO activity increased with the use of the polymer in the groove system. The enzymes studied showed changes in activity during the experimental period. Producing sugar at the end of the season (August to November) avoids the periods of highest enzymatic activity. Keywords: Saccharum; straw coverage; enzymes. Soil moisture and temperature promote changes in the biometrics of sugarcane by increasing the number of tillers, thus increasing the average height of the stems, and they can promote changes in sugarcane quality and quantity (BONNETT, 1998; SINGELS et al., 2005; SINGH; SHUKLA; BHATNAGAR, 2007; ALMEIDA et al., 2008). The hypothesis of this research is that using different doses of hydrogel polymers, different amounts of dry matter as cover, and systems with different planting depths interferes with the physiology of plant growth, promoting changes in enzyme activities. The objective of this study was to assess the effect of sugarcane straw used as soil coverage, the use of different planting systems, and treatments with hydrogel polymer on enzymatic activity. Materials and methods The study was conducted in the experimental area of the University of Western São Paulo (Unoeste), campus II, in Presidente Prudente-SP, latitude 22° 07' 04" South, longitude 51° 22' 04" West, at 430 m above sea level. The sugarcane cultivar RB 867515 was planted in December 2007 and harvested in June 2009 (18 months); after this harvest, the first sugarcane ratoon was used in the study. The soil was identified as a Red-Yellow Argisol (EMBRAPA, 1999), a type C production environment (PRADO, 2005). According to Köppen, the climate of the region is classified as Aw. Weather data such as rainfall and maximum and minimum temperatures were collected during the experimental period. A composite soil sampling was performed 60 days preceding planting and soon after the sugarcane harvest. According to the recommendations of Espironelo (1992), limestone addition was not necessary to correct the soil acidity, but the crop was fertilized (2007 harvest season) with the equivalent of 0 kg ha-1 of N, 135 kg ha-1 of P2O5, and 135 kg ha-1 of K2O, using 675 kg ha-1 of 20-20-00 fertilizer. In February 2008, the plants received 30.8 kg ha-1 of N in the form of 70 kg ha-1 of urea (44% N) when the average plant height was 0.20 m. A randomized complete block design with split plots was used: block 1, the planting groove system, and block 2, the windrow system. Within each system, four doses of water-absorbing polymer were tested, and these plots were subdivided into four treatments using different amounts of sugarcane straw dry matter as coverage (2 × 4 × 4). The enzymatic activity of polyphenoloxidase (PPO) and peroxidase (POD) was measured for eight months; apical meristem samples were collected, placed in a cooler, forwarded to the laboratory, and pressed at 250 kgf cm-2. The juice obtained (150 g) was prepared according to Vanini, Kwiastokowski and Clemente (2010). The experimental unit (sub-plot) consisted of five rows 5 meters long with a spacing of 1.5 m (area of 37.5 m2). Culms were collected monthly from March to October 2010 for juice extraction and determination of enzymatic activity, according to Campos et al. (1996). Sugarcane samples were collected in three rows (3 repetitions), excluding the first and the last microplot of 1 m in each row. The first culms, below the insertion of the TVD (Top Visible Dewlap), were collected and pressed at the laboratory, and the extracted juice was analyzed promptly. The amounts of dry matter used were 0, 5, 10, and 15 t ha-1, assuming 40% moisture in green leaves, according to Orlando Filho (1983), Bovi and Serra (2001), and Ripoli and Ripoli (2004); the straw treatments started after the first harvest (June 2009). The data were submitted to analysis of variance by the F-test at the 5% probability level. The means were compared by the Scott-Knott test using the SISVAR statistical software (ANOVA, p < 0.05), according to Gomes (1990). For the regression analysis of the enzymatic activity data, the Microcal Origin 6.0 software was used.
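As an illustration of the factorial analysis just described, here is a minimal Python sketch; the authors used SISVAR with a split-plot model and the Scott-Knott test, so this plain two-way ANOVA on made-up data (all column names and values are hypothetical) is only a simplified stand-in.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [(system, dose, rep,
         250 + 3 * dose + (10 if system == "groove" else 0) + rng.normal(0, 5))
        for system in ("groove", "windrow")
        for dose in (0, 5, 10, 15)          # polymer doses (hypothetical units)
        for rep in range(3)]                # 3 repetitions, as in the field trial
df = pd.DataFrame(rows, columns=["system", "dose", "rep", "ppo"])

model = smf.ols("ppo ~ C(system) * C(dose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))      # F-tests at the 5% probability level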
Results and discussion The weather data (rainfall and maximum and minimum temperatures) are shown in Figure 1. Table 1 presents the results of the soil analysis carried out 60 days preceding planting and soon after the sugarcane harvest. During the experimental period (March to October), the synthetic polymer promoted changes in the PPO activity (Table 2). The straw coverage and the planting systems, in contrast, did not change the activity of PPO and POD. This fact indicates that the plants had a weak interaction with the use of coverage, which is usually applied to improve soil physical and chemical attributes such as water-holding capacity and the provision of water for cultivated plants. The polymer, which did promote changes in the enzymatic activity, therefore showed a stronger plant-soil relationship. It can be observed in Figure 2 that there was a quadratic regression between the months and the PPO enzyme activity, with a statistically significant adjustment at the 1% level. Deriving the equation and setting the derivative equal to zero gives the maximization point of the curve, which in this case is 4.48 (between April and May), with an activity of 290 au. This enzymatic activity plays an important role in the stress that occurs during these months: Figure 1 shows a reduction in rainfall, which, in sandy soils, lowers soil humidity and leads to greater water stress. This leads to the browning of sugarcane juice, according to Qudsieh et al. (2002) and Bucheli and Robinson. Figure 1. Weather information obtained at the meteorological station of Unoeste during the experimental period in 2010. Table 1. Analysis of the experimental soil before planting and after harvest of sugarcane. Table 2. Enzymatic activity (au - activity unit) of PPO and POD in the planting systems and polymer doses.
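The maximization applied to the fitted quadratic can be written out explicitly; since the regression coefficients are not reported in this excerpt, the derivation below uses a generic quadratic, with only the reported vertex values plugged in at the end.

y(x) = a + b x + c x^{2}, \quad c < 0, \qquad
\frac{dy}{dx} = b + 2 c x = 0
\;\Rightarrow\;
x^{*} = -\frac{b}{2c},

which, with the values reported above, gives x* ≈ 4.48 (between April and May) and y(x*) ≈ 290 au.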
2018-02-03T00:55:28.181Z
2013-02-25T00:00:00.000
{ "year": 2013, "sha1": "9a696af4b5c084c06968a80ba292a09e4710ad19", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/cta/a/Jq8qQcrBxRMn5WL6BdP5DZD/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9a696af4b5c084c06968a80ba292a09e4710ad19", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Chemistry" ] }
15088534
pes2o/s2orc
v3-fos-license
Cadmium induces changes in corticotropic and prolactin cells in the Podarcis sicula lizard pituitary gland We analyzed the effect of cadmium on corticotropic (ACTH) and prolactin (PRL) cells in the pituitary gland of the Podarcis sicula (P. sicula) lizard under chronic exposure to this metal. Adult lizards were given CdCl2 in drinking water at the dose of 10 µg/10 g body mass for 120 days. Light microscopy was performed after histological and immunohistochemical staining, and the effects were followed at regular time intervals up to 120 days post-treatment. We detected substantial variations in the general morphology of the pituitary: unlike the control lizards in which the gland appeared compact, the treated lizards showed a glandular tissue with dilated spaces that were more extensive at 90 and 120 days. PRL and ACTH cells showed an increase in occurrence and immunostaining intensity in treated lizards in comparison with the same cells of control animals. This cellular increase peaked for PRL at 30 days in the rostral, medial and also caudal pars distalis of the gland. ACTH cells appeared to increase markedly after 60 days of treatment in both the pars distalis and the pars intermedia. Again, at 60 days small, isolated ACTH cells were also found in the caudal pars distalis in which these cells were generally absent. However, at 120 days both these cellular types showed an occurrence, distribution and morphology similar to those observed in the control lizards. In lizards, protracted oral exposure to cadmium evidently involves an alteration of the normal morphology of the gland and an inhibitory effect of ACTH and PRL cells, since they increase in occurrence and immunostaining. Yet in time the inhibitory effect of cadmium on ACTH and PRL cells falls back and their occurrence appears similar to that of the control lizard. Introduction Cadmium (Cd) is known to be a potent toxic metal and a significant environmental pollutant. Studies concerning animals exposed to cadmium are of great interest since the presence of this metal in the environment has increased due to industrial activity and agricultural practices, thereby entering the food chain. 1 In human populations, cadmium exposure occurs primarily through dietary sources and drinking water as well as cigarette smoking. Cadmium exposure in mammals has been proved to cause bone problems 2 and renal dysfunctions, 3 alter reproductive functions 4 and exert neurotoxic effects. 5 It has been shown that Cd is also an endocrine disruptor that may play a role in the aetiology of the pathologies that involve the hypothalamic-pituitary-testicular axis of mammals. [6][7][8] Cadmium mimics the function of steroid hormones and its potential role in the development of hormone-dependent cancers is widely discussed. 9 Studies carried out on rats also prove that Cd 2+ is absorbed and retained in the pituitary gland, 10,11 leading to a decrease in content of luteinizing hormone and alteration of gland functionality. 7,8,12 In the mouse Mus platythrix, cadmium induces hypertrophy and hyperplasia of pituitary gonadotrophs. 13 In rat, Cd modifies the lactotroph cell activity of the pituitary gland through biochemical, genomic and morphological changes 14 and modifies plasma levels of luteinizing hormone (LH) and follicle-stimulating hormone (FSH). [15][16][17][18] Histopathological damage depends on exposure time, dose and administration route of Cd. 
8 Lower concentrations of CdCl 2 do not seem to influence the gonadotrope (GTH) cells in the cyprinid Puntius sarana. 19 However, in fish cadmium exposure affects the activity of the endocrine system. [19][20][21] Compared with other classes of vertebrates, reptiles are rarely used in studies on the possible toxic effects of heavy metals. 22 However, they are important bioindicators because they are susceptible to the accumulation of persistent pollution which is identified as a major threat to reptile populations worldwide. Although Cd accumulation in various reptile organs has been studied, 1,[22][23][24][25] there has been very little experimental laboratory research on the effects of cadmium in reptiles. 26 Particularly lacking are effect-based studies in reptiles exposed to known concentrations of contaminants. Reptiles could be a good model to study the biological effects of cadmium. Although the physiological function of this metal is unknown, there is evidence to suggest that cadmium is a metallohormone with estrogenic and androgenic effects. 9 As we have already reported elsewhere, in the lizard an acute treatment with a single high (20 μg/10 g body mass) intraperitoneal dose of CdCl 2 not only induces apoptosis, especially in the rostral pars distalis, which appears irreversible, 27 but also alters the normal endocrine function of the gland. 28 Once again in the lizard, we also reported that chronic exposure to CdCl 2 affects the hormonal secretion of GTH cells through an inhibitory effect. 29 The aim of this paper was thus to analyse the effects of cadmium on ACTH and PRL cells in the lizard Podarcis sicula (P. sicula) exposed to chronic oral treatment for 120 days at an average dose (10 μg/10 g body mass) of CdCL 2 . Materials and Methods This study was performed on 30 adult females of P. sicula, captured near Naples (Italy) and kept under controlled conditions of light and temperature. Twenty specimens were subjected to chronic treatment and the others were used as control. CdCl 2 was administered to the lizards in drinking water for four months at a daily dose of 10 μg/10 g body mass while control lizards received cadmium-free water. The above dosage was chosen in accordance with previous reports. 24,29 No mortality or altered animal behaviour was recorded during the experiments. Groups of four treated and two control animals were killed at 10, 30, 60, 90 and 120 days. Experiments were performed in accordance with the Guidelines for Animal Experimentation of the Italian Department of Health under the supervision of a veterinarian, and organised to minimise stress and the number of lizards used. All animals were killed under ice anaesthesia by a cervical cut. In lizards the pituitary gland is extremely small and almost completely enclosed in the sella turcica, which makes its removal difficult and may cause damage to the gland. For this reason, we analysed the hypophysis in toto with the brain. After removal of the skullcap, the brains were fixed in Bouin's solution for 48 h at room temperature and then decalcified in a solution of 5% EDTA in 10% for- malin for 25-30 days, dehydrated and enclosed in paraffin. This was the only procedure able to preserve not only the morphology but also the antigenicity of the cells. Serial 6 µm sections were processed for routine histological and immunohistochemical staining. 
Mallory's trichromic stain was used for the study of the general morphology while the immunohistochemical procedure was applied to identify and observe the adenohypophyseal cells. For immunohistochemical staining, 30 the sections were processed according to the ABC technique 31 using the following heterologous antisera at specific working dilutions: anti-human PRL (1/300, Signet Laboratories, Dedham, MA, USA) and anti-synthetic ACTH1-24 (1/600, Biogenesis, Poole, UK). Visualization was carried out using the Vectastain Elite ABC kit (Vector Labs, Inc., Burlingame, CA, USA) and revealed by 3 mg 3,3'-diaminobenzidine-tetrahydrochloride (Sigma, St. Louis, MO, USA) in 10 mL PBS and 150 µL 3% H 2 O 2 . The sections were then contrasted with haemalum for one minute. Antibody specificity was assessed by omitting the primary antisera and absorbing each antiserum with the specific hormone. The images were examined and acquired by a Kontron Electronic Imaging System KS300 (Zeiss, Oberkochen, Germany). Quantification of the two cellular types was carried out on at least 300 cells, with a visible nucleus, on serial sections per animal and relative to specific regions, namely the rostral, medial and caudal pars distalis and pars intermedia. Data were expressed as the number of immunostained cells x 100/number of total cells. The data obtained were pooled and analysed performing Student's t-test to determine the significance (P<0.05) between control and cadmium exposed groups. General morphology In all control specimens the pituitary gland appeared compact and extended in the cephalic-caudal direction in which the pars distalis (PD) was divided into a rostral part (RPD), a caudal part (CPD) and a medial part (MPD) (Figure 1 A). The whole PD consisted of homogeneous vascularised cellular cordons with an evident basal lamina surrounding them and with the cells clearly identifiable by Mallory stain (Figure 1 B). In the treated animals the pituitary gland tissue appeared atrophied in some areas, with wide irregular intercellular spaces, which appeared more extensive at 60 (Figure 1 C) and evident also at 90 and 120 days. The gland also showed greater vascularisation; a basal lamina surrounded the cellular cordons only partially and several cells appeared altered in shape (Figure 1 D). Immunohistochemistry In the control lizards, ACTH and PRL cells appeared clearly through immunohistochemical detection as distinct cellular populations with a specific distribution in the PD or PI (pars intermedia). In treated specimens we observed the increase in occurrence and immunostain both for ACTH and for PRL cells. ACTH and PRL cells already at 10 days revealed a small increase, albeit with a different time course: PRL cells appeared more abundant at 30 days from treatment, while ACTH were more copious at 60 days. In all these cells we also observed greater cytoplasmic immu no staining intensity. Prolactin cells In control lizards PRL cells were found essentially in the RPD (24.0±0.5%) but also in the MPD (20.1±0.24%) (Figure 2 A). They were generally isolated or clustered in small cellular cordons of three-five cells (Figure 2 B). PRL cells appeared pyriform or ovoidal in shape, with an eccentric and ovoidal nucleus and a moderately dense cytoplasm (Figure 2 C). A few PRL cells were also observed in the CPD (1.2±0.05%), but they were always absent in the PI. At 10 days of treatment PRL cells had slightly increased in the RPD and MPD, but they were more numerous in the CPD (Table 1). 
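The occurrence values quoted in these results follow the quantification rule given in the methods (immunostained cells x 100 / total cells), compared between groups by Student's t-test; a minimal Python sketch with hypothetical cell counts illustrates the computation.

from scipy.stats import ttest_ind

def percent_positive(immunostained: int, total: int) -> float:
    """Occurrence as defined in the text: immunostained cells x 100 / total cells."""
    return 100.0 * immunostained / total

# Hypothetical counts (at least 300 cells with a visible nucleus were scored per animal).
control = [percent_positive(n, 320) for n in (75, 78, 70)]
treated = [percent_positive(n, 315) for n in (110, 118, 105)]

t_stat, p_value = ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}  (difference significant if P < 0.05)")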
Their occurrence peaked at 30 days of treatment. In all these regions they also showed greater immunostaining intensity (Figure 2 E,F). At 60 and 90 days of treatment PRL cells still appeared numerous, albeit with diminishing values, and at 120 days their occurrence was close to that of the control animals, as reported in Table 1. No other differences were observed between PRL cells in control and treated specimens. Corticotropic cells ACTH cells were observed in both the RPD (29.0±0.7%) and the MPD (12.0±0.13%), as well as in the PI (59.0±0.12%), but were absent in the CPD of the control specimens (Figure 2 G,I). ACTH cells were elongated in shape, with a generally central nucleus and a higher cytoplasmic density in the PD (Figure 2 H) than in the PI. At 10 days of treatment their occurrence appeared similar to that of control lizards, except for the presence of a few cells also in the CPD (Table 2). Discussion Our findings indicate a toxic effect of cadmium upon the pituitary gland of the lizard P. sicula exposed to chronic oral treatment with an average CdCl2 dosage. Cytotoxic action may be inferred both from the morphological alteration of the gland and from the dysregulatory process in ACTH and PRL cells, effects which nonetheless follow different time courses. In terms of morphology, we observed a progressive disorganisation of the hypophyseal tissue due to the appearance of atrophied areas with wide intercellular spaces, which were more pronounced at 90 and 120 days of treatment. Similar effects have been reported in other glandular tissues, such as the thyroid of the catfish Clarias batrachus 20 and the testis of the cyprinid Puntius sarana 19 and of the monkey Presbytis entellus entellus. 32 However, in lizards exposed to acute treatment with a single high intraperitoneal dose of CdCl2 we previously noted that this metal induces apoptosis, as also reported in the anterior pituitary cells of the rat, 33 and that this effect is irreversible. 27 In the present chronic treatment, apart from the morphological damage, a parallel increase in the vessel network was also observed. The persistence of such tissue alterations and the enhanced vascularization of the pituitary gland agree with previous findings that chronic exposure to Cd can lead to elevations in blood pressure. Considerable evidence suggests that the hypertensive effects of Cd result from complex actions on both the vascular endothelium and vascular smooth muscle. 34 The same increase in blood pressure could partly explain the alterations in tissue architecture which we found in this gland. This heavy metal is also known to affect the endocrine system in mammals. 7,8,12 Likewise, we observed in the lizard an inhibitory action of cadmium on both ACTH and PRL cells, just as we previously reported for gonadotrope cells. 29 Indeed, both these cell types are more numerous during chronic treatment and show marked immunoreactivity. However, we observed that the cellular increase peaked for PRL at 30 days, while for ACTH it peaked after 60 days of treatment. This finding is also supported by a concomitant increase in the immunostaining of the cytoplasmic granules of all these cells. The increase in occurrence and in cytoplasmic density is indicative of a hormonal accumulation in these cells due to the inhibiting effect induced by cadmium. It has been proved elsewhere that divalent cations, such as Cd2+, inhibit the in vitro release of GH and PRL from bovine adenohypophyseal secretory granules. 35
Further, the inhibitory effect of cadmium on the hormonal secretion of many adenohypophyseal cells has been found in mammals through biochemical studies: the serum levels of LH and FSH in rats 7,12 and pigs 36 exposed to cadmium decrease. In mammals, Cd differentially affects the secretory mechanisms of the pituitary hormones: the effects of this metal are dose-dependent only for prolactin and ACTH. 8 In the fish Puntius sarana, 19 only high concentrations of CdCl2 influence the pituitary gonadotropins, with a gradual accumulation of secretory granules. In the lizard, too, Cd could well compete with calcium at the pituitary level through the membrane channels, or change intracellular calcium mobilization, as postulated for mammals, 37 and thereby inhibit hormone secretion. However, Cd could also cause alterations in the receptor binding and secretory mechanisms of pituitary hormones, as reported by Pillai et al. 38 in female rats: cadmium generates free radicals which change the biophysical properties of the pituitary membranes, with an inhibitory effect on hormone secretion. In the present study, by contrast, we observed that, unlike the tissue alteration, which persists in time, the inhibitory action on both ACTH and PRL cells diminishes: at 120 days the occurrence of these cells returned to values similar to those observed in the control lizards. This reaction could be viewed as a probable adaptation in time to the toxic action of cadmium when the Cd dosage is moderate. It may also be attributed to the activation of defence mechanisms such as the action of metallothioneins (MTs), given that in chronic intoxication Cd stimulates de novo synthesis of MTs; toxicity in the cells is assumed to start when the Cd ion load exceeds the buffering capacity of intracellular MTs. 39 That said, the lizard appears to be a good experimental model for studying the action of heavy metals on the endocrine system. Figure 2. ABC technique. Sagittal sections. (A-F) PRL cells (in brown). (A) Control lizard, showing the occurrence of PRL cells in the RPD and in the MPD. (B, C) Details of panel A showing these cells isolated (B) or organized into small cordons (C). (D) Treated lizard at 30 days, showing the increase in PRL cells extending also into the CPD. (E, F) Details of panel D showing the greater intensity of immunostaining of the cytoplasm in the MPD (E) and the CPD (F). (G-L) ACTH cells (in brown). (G) Control lizard, showing the occurrence of ACTH cells in the RPD.
2014-10-01T00:00:00.000Z
2010-12-21T00:00:00.000
{ "year": 2010, "sha1": "765bc916be0feb7745a0bc1511798b763440334f", "oa_license": "CCBYNC", "oa_url": "https://www.ejh.it/index.php/ejh/article/download/ejh.2010.e45/1959", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27cc1056d701e0ba3877e77aeefb7e5dfc85afab", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
118955001
pes2o/s2orc
v3-fos-license
Transversal magnetoresistance and Shubnikov-de Haas oscillations in Weyl semimetals We explore theoretically the magnetoresistance of Weyl semimetals in transversal magnetic fields away from charge neutrality. The analysis within the self-consistent Born approximation is done for the two different models of disorder: (i) short-range impurties and (ii) charged (Coulomb) impurities. For these models of disorder, we calculate the conductivity away from charge neutrality point as well as the Hall conductivity, and analyze the transversal magnetoresistance (TMR) and Shubnikov-de Haas oscillations for both types of disorder. We further consider a model with Weyl nodes shifted in energy with respect to each other (as found in various materials) with the chemical potential corresponding to the total charge neutrality. In the experimentally most relevant case of Coulomb impurities, we find in this model a large TMR in a broad range of quantizing magnetic fields. More specifically, in the ultra-quantum limit, where only the zeroth Landau level is effective, the TMR is linear in magnetic field. In the regime of moderate (but still quantizing) magnetic fields, where the higher Landau levels are relevant, the rapidly growing TMR is supplemented by strong Shubnikov-de Haas oscillations, consistent with experimental observations. I. INTRODUCTION One of the central research directions in condensed matter physics addresses topological materials and structures. Recently, a novel type of topological materials has received much attention: Weyl and Dirac semimetals. The quasiparticle spectrum near the nodal point of a Dirac semimetal is described by a three-dimensional (3D) 4 × 4 Dirac Hamiltonian where excitations close the crossing point of valence and conduction bands disperse linearly. The materials Cd 3 As 2 [1] and Na 3 Bi [2] represent experimental realizations of Dirac semimetals. For either broken spatial inversion or time-reversal symmetry, the four-component solution of the Dirac equation splits into two independent two-component Weyl fermions of opposite chirality with the Weyl points in the spectrum located at distinct momenta. Recent experiments classify TaAs [3,4], NbAs [5], TaP [6], and NbP [7] as Weyl semimetals. Further promising candidates for Weyl semimetals include pyrochlore iridates [8] and topological insulator heterostructures [9]. In the rest of the paper, we will use the term "Weyl semimetal" in a broader sense, including also the degenerate case of Dirac semimetals. Transport properties of Weyl semimetals are highly peculiar. For recent theoretical studies, see, e.g., Refs. [8,[10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25]30] and references therein. An important aspect of the transport properties is the appearance of a disordered critical point within the perturbative analysis. Below the disordered critical point (i.e., for sufficiently weak disorder), the density of states vanishes quadratically in energy around the Weyl point within the perturbation theory. Non-perturbative treatment yields an exponentially small density of states at the Weyl point. In the strong disorder regime, the density of states is finite at the Weyl point already without invoking exponentially small contributions. The transport in Weyl semimetals reveals a particularly interesting and rich physics when an external magnetic field is applied. One reason for this is the unconventional Landau quantization of Dirac fermions. 
Further, a single species of Weyl fermions displays the chiral anomaly that gives rise to a possibility of controlling the valley polarization. A strong anomalous Hall effect [12,13,26] and the longitudinal magnetoresistivity [7,16,20,21,27,28,[33][34][35][36] in Weyl semimetals have been predicted to originate from the chiral anomaly. Furthermore, thermoelectrical effects [37] and induced superconductivity [38] have been studied recently, both theoretically and experimentally. In this paper, we present a theory of the transversal magnetoresistivity in a Weyl semimetal away from charge neutrality point. (The term "transversal" here means that the magnetic field is perpendicular to the electric field: the relevant resistivity component is ρ xx , while the magnetic field is along the z axis.) The work is motivated by the spectacular experimental observation of a large, approximately linear transversal magnetoresistance (TMR) in Dirac and Weyl semimetals [7,[39][40][41][42]. Theoretically, a linear TMR of a system with Dirac dispersion in the ultra-quantum limit (where only the zeroth Landau level is effective) was obtained by Abrikosov in a seminal paper, Ref. [43]. The crucial ingredient of this result is the dependence of the screening of Coulomb impurities on magnetic field. In a previous work, Ref. [31], we have carried out a systematic analysis of the magnetoresistivity of a Weyl semimetal at the neutrality point and for different types of disorder. Our results for the case of Coulomb impurities and in strongest magnetic fields yield the linear TMR, in agreement with Ref. [43]. This is not sufficient, however, to explain experimental data since experiments are performed at non-zero electron density. A clear experimental evidence of finite density is provided by Shubnikov-de Haas oscillations (SdHO) super-imposed on the background of strong linear TMR in an intermediate range of magnetic fields. It is thus a challenge to understand whether the strong quantum linear TMR and the SdHO may emerge from the theory of disordered Weyl fermions. More generally, our goal is to develop the theory of quantum magnetotransport for systems with Dirac spectrum at non-zero density (chemical potential) of carriers. Below, we calculate the TMR and the Hall conductivity for arbitrary magnetic field H and arbitrary particle density. Depending on their values, the dominant contribution to the TMR comes from the zeroth Landau level (LL), separated LLs, or overlapping LLs. This includes also regimes where the SdHO can be observed. Our analysis has a certain overlap with a recent preprint, Ref. [32], where the Born approximation (without self-consistency) was used. We go beyond that work by employing the selfconsistent Born approximation (SCBA), analyzing the scaling of conductivities and of TMR in various regimes, and discussing two models of disorder-(i) short-range impurities and (ii) charged (Coulomb) impurities. Further, we study the TMR for two cases-fixed particle density and fixed chemical potential-and find that the results are essentially different. In the experimentally most relevant case of Coulomb impurities and a fixed particle density, we find a large, linear TMR in the ultra-quantum limit, where only the zeroth Landau level is effective. We show, that even though the analytical result for the resistivity is modified in comparison to that of Ref. [43] due to a non-zero value of the Hall conductivity, the linear-in-H scaling of TMR remains valid. 
In the regime of moderate (but still quantizing) magnetic fields, where the higher Landau levels are relevant, the TMR curves contain Shubnikov-de Haas peaks whose amplitude grows as a power law (H^{4/3}) of the magnetic field. At the same time, the "background" TMR (the envelope of the minima) in such magnetic fields is negligible within the SCBA. Thus, the model with a single type of Weyl nodes does not contain a regime where a strong TMR is supplemented by SdHO, in agreement with the numerical findings of Ref. [32]. We further consider a model with Weyl nodes shifted in energy with respect to each other, with the chemical potential corresponding to the total charge neutrality, as illustrated in Fig. 1. Such a type of spectrum has been found in various materials both experimentally and by first-principles calculations, see, e.g., Refs. [7,41]. In this situation, the total Hall conductivity is zero, whereas the shifted pairs of Weyl nodes are characterized by equal carrier (electron and hole, respectively) densities. For Coulomb impurities, we find in this model a large TMR in a broad range of quantizing magnetic fields. In the ultra-quantum limit, where only the zeroth Landau level is effective, the TMR is again linear in magnetic field. At lower magnetic fields, in the regime of separated LLs, strong SdHO are superimposed on top of a rapidly growing background TMR, in contrast to the case of non-shifted Weyl nodes. Specifically, the envelope of the minima of the TMR behaves as H^{2/3}, while the maxima evolve as H^2. FIG. 1. Schematic energy band structure of the material with two pairs of Weyl nodes shifted in energy with respect to each other. The carriers belonging to the two pairs of nodes have a chemical potential (counted from the corresponding node) of ∆ (electron-type carriers) and −∆ (hole-type), respectively. Therefore, the system is at the total charge compensation point. The overall behavior of the TMR resembles that found in experiments: with increasing magnetic field, the (almost linear) TMR shows SdHO and crosses over into a purely linear TMR with no SdHO. Such behavior emerges when the conductivity σ_xx in a strong magnetic field is larger (due to the compensation between the shifted nodes) than the total Hall conductivity σ_xy. This can be realized for shifted Weyl nodes away from the charge neutrality point (where the Hall resistivity is finite), provided that the concentrations of positively and negatively charged impurities are close to each other. The analysis in this paper is performed in the framework of the SCBA for non-interacting fermions. This discards other possible contributions to the TMR, including the classical memory effects (as discussed in the context of Weyl semimetals in a recent paper, Ref. [29]) and interaction-related mechanisms. We will return to a discussion of such magnetoresistance mechanisms at the end of the paper. The paper is organized as follows. Section II is devoted to an introduction to the model of impurity scattering. In Sec. III, we calculate the conductivity σ_xx away from charge neutrality in a finite transverse magnetic field for the model of white-noise disorder. Section IV presents the analysis of the Hall conductivity for the clean case and for the white-noise disorder. In Sec. V, we use the obtained results to calculate and analyze the TMR. In Sec. VI, we extend our analysis to the case of charged impurities.
Section VII discusses the TMR at the total charge compensation point for the pairs of Weyl nodes shifted in energy with respect to each other. We summarize our findings and discuss the relation to experiments in Sec. VIII. Throughout the paper we set ℏ = c = k_B = 1. II. MODEL In this section, we introduce the framework [31] for studying disordered Weyl fermions that will be used throughout the paper. We start from the Hamiltonian for a single Weyl fermion in the presence of a finite magnetic field directed along the z axis. The Hamiltonian in the Landau gauge for a clean system is given by H_0 = v σ · (p + eA(r)), (1) where p is the momentum operator, v is the velocity, σ denotes the Pauli matrices, and A(r) = (0, Hx, 0) is the vector potential. Now we include disorder. The impurity scattering generates a self-energy Σ(p, ε) in the (impurity-averaged) Green's function, which reads G(p, ε) = [ε − H_0 − Σ(p, ε)]^{-1}. (2) The Green's function is a matrix in the pseudospin space (in which the Pauli matrices σ operate). We will assume that the disorder potential is diagonal in both spin and pseudospin indices and neglect scattering between different Weyl nodes. Clearly, in the absence of internode scattering, the structure in the node space is trivial for all quantities; the density of states and the conductivities calculated below are those per Weyl node. Under these assumptions, the pointlike impurity potential has the form V_dis(r) = u_0 Σ_j δ(r − r_j) · 1, (3) where 1 is the unit matrix in the pseudospin space. In view of the matrix structure of the impurity potential V_dis(r), the impurity correlator W becomes a rank-four tensor. Within the self-consistent Born approximation (SCBA), the self-energy is given by Eq. (4). For a diagonal impurity potential, the impurity correlator is diagonal as well, ⟨V_dis(r) V_dis(r′)⟩ = γ δ(r − r′), (5) where γ = n_imp u_0^2. We will later generalize the results obtained for the white-noise disorder (5) to the case of Coulomb impurities. Similarly to the case of zero magnetic field, we introduce a dimensionless disorder parameter β, defined in Eq. (6) in terms of γ and the ultraviolet energy cutoff Λ (band width). In the following, we will mainly focus on the case of not too strong disorder, β < 1. The self-energy is diagonal in the energy-band space. However, in the presence of a magnetic field, the self-energy is no longer proportional to the unit matrix. This asymmetry originates from the asymmetry of states in the zeroth LL: in the clean case, the states of the zeroth LL are only present in one energy band. Note that strong impurity scattering eliminates this asymmetry. In what follows, it is convenient to switch to the LL representation, such that G = G(ε, p_z, n) and Σ = Σ(ε, p_z, n). The diagonal components of the matrix self-energy determined with the Green's function (2), Eqs. (7)-(9) (below, z = vp_z), introduce the energy scale A, which combines the disorder coupling γ and the strength of the magnetic field, the latter characterized by the distance Ω between the zeroth and first LLs. In general, the self-energy depends on energy and on the LL index, Σ = Σ(ε, p_z, n); however, for the white-noise disorder the dependences on n and p_z drop out. For energies close to the Weyl point, |ε| < Ω, and for weak disorder, β ≪ 1, the asymmetry with respect to the zeroth LL should be taken into account. When the lowest LL is well separated from the others, |Im Σ_{1,2}| < Ω, the contribution of the sum over n is dominated by the n = 0 term. In this case, we get Im Σ_1 = −A and Im Σ_2 ∼ −Aβ, (10) so Im Σ_2 is negligible in the limit of weak disorder.
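For reference, the Landau-level spectrum of the clean Hamiltonian (1) is quoted below in its textbook form, which we assume matches the conventions of this paper; it fixes the meaning of Ω (the distance between the zeroth and first LLs) and of the level positions ε_n = Ω√n used in what follows.

\varepsilon_n(p_z) = \pm\sqrt{\Omega^2 n + v^2 p_z^2}, \quad n = 1, 2, \dots, \qquad
\varepsilon_0(p_z) = -v p_z, \qquad
\Omega = v\sqrt{2eH}.

The zeroth level is chiral (its sign depends on the chirality of the node) and belongs to a single energy band, which is the origin of the self-energy asymmetry discussed above.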
For energies away from the Weyl point, ε > Ω, and weak disorder, β < 1, the asymmetry induced by the zeroth LL is negligible: ImΣ 1 = ImΣ 2 = −Γ, where the LL broadening is determined by the self-consistent equation with W 2 n = ε 2 n + Γ 2 (ε), ε n = Ω 2 n. The solution of Eq. (11) gives a nonsymmetric peak of Γ(ε) around the nth LL located at ε = W n with where ε * ∼ Ω(Ω/A) 1/5 marks the energy below which the LLs are fully separated. A detailed analysis of the broadening of LLs reveals that the LLs are separated up to ε * * ∼ Ω(Ω/A) 1/3 , but for energies in the range ε * < ε < ε * * the background density of states is larger than the density of states for the particular LL as shown in Fig. 2 (for further details, see Ref. [31]). III. CONDUCTIVITY AWAY FROM CHARGE NEUTRALITY Using the introduced model, we calculate now the conductivity σ xx of a disordered Weyl semimetal in the presence of magnetic field. We restrict ourselves to the case of weak disorder, β 1. With the use of Kubo formula, the real part of the conductivity reads whereĵ x = evσ x is the bare current operator and j tr x = V trĵ x is the current vertex dressed by disorder, see Ref. [31]. We first calculate the conductivity without vertex corrections and include them at the final steps of the calculation. After the evaluation of the trace and using the orthogonality of the wave functions of the different LLs, Eq. (13) transforms into × dp z 2π ImG R 11 (ε, n, p z ) ImG R 22 (ε, n, p z ). (14) The Green functions here are written in the LL representation. We distinguish in the following calculations between the zeroth LL and higher LLs because the selfenergies for the zeroth LL differ from those of the others. In the following, we will focus on low temperatures, T → 0. For small chemical potential, µ < Ω, excitations to higher LLs are exponentially suppressed and the conductivity is dominated by the contribution of the zeroth LL. Note the conductivity in both region match via a narrow window at Ω µ corresponding to the width of the first LL [cf. the last two lines in Eq. (34)]. In the opposite regime, the conductivity is determined by the position of the chemical potential with respect to separated and overlapping LLs. A. Small chemical potential, µ < Ω: Zeroth Landau level We consider first the situation when the zeroth LL gives the dominant contribution to the conductivity. This case is realized under the following two conditions: (i) the zeroth LL is separated from the first one, which is fulfilled under the condition A Ω; (ii) the chemical potential satisfies µ < Ω, while the temperature is close to zero, T → 0. Under these conditions, the current vertex corrections are small, V tr (ε Ω) ∼ A/Ω 1, for energies close to the Weyl node. Therefore, we can disregard the difference between quantum and scattering time in the regime of the dominant zeroth LL contribution. The Green function, using ImΣ 1 A and ImΣ 1 0 and disregarding the real parts of self-energies (ReΣ ∼ βε ε, see Ref. [31]), reads Substituting Eqs. (15) and (16) in Eq. (14) and separating the n = 0 term in the sum over all LLs, we get where n max is the number of LLs within the energy band Λ. After the integration over ε for T = 0, we find that the contribution of higher LLs is of the order e 2 A 2 /(Ωv) and therefore negligible compared to the n = 0 term that is of the order of e 2 A/v. For the dominant term coming from the zeroth LL we find The result is proportional to the magnetic field and disorder strength and is equal to the result of µ = 0, see Ref. 
[31]. A finite but small chemical potential, µ < Ω, does not essentially affect σ xx : the corrections to Eq. (18) are small in the parameter Aµ 2 /Ω 3 . B. Large chemical potential, µ > Ω For large chemical potentials, µ > Ω, the situation is more subtle. For a given magnetic field, the spectrum is subdivided in three domains: (i) the low-energy part of the spectrum consists of separated LLs, (ii) in the intermediate region LLs are separated, but the background density of states is larger than the height of an individual LL, and, finally, (iii) at higher energies the LLs overlap. At low temperatures, the conductivity will strongly depend on the position of the chemical potential, with the unusual broadening of LLs leading to an unconventional shape of the SdHO. In view of the structure of the spectrum discussed above, we need to distinguish for the calculation of the conductivity between the three different cases of the position of the chemical potential: (i) fully separated LLs, (ii) separated LLs, but large background, and (iii) fully overlapping LLs. In all three cases the difference between the self-energies can be neglected and the self-energy can be written in terms of LL broadening: ImΣ 1 = ImΣ 2 = −iΓ. The Green functions take then the form Substituting these Green functions in the formula for the conductivity (14), we perform the summation over n and integration over p z . (This calculation is analogous to that in the case T Ω in Ref. [31].) The result is given by In all three cases of the structure of the spectrum near the chemical potential, the conductivity can be expressed by the semiclassical Drude formula, yielding Here τ tr (ε) is the transport scattering time that takes into account the vertex corrections in j tr x and is related to the quantum time τ q = (2Γ) −1 via τ tr = (3/2)τ q . In the case of overlapping LLs or of large background density of states compared to the particular LL, ε ε * , the LL broadening is given by Using the SCBA relation between the density of states and the scattering time and the semiclassical expression for the cyclotron frequency in the linear spectrum we find the conductivity in this region: . (26) In the following, we use Eq. (26) to evaluate the conductivity in all three regimes. First, we consider the regime of fully separated LLs, when the relevant energies satisfy Ω ε Ω(Ω/A) 1/5 , assuming that the chemical potential is located within one of the LLs and the temperature is low (smaller than the LL width). The conductivity for a general LL broadening is given by The broadening of the LLs at the LL center is given by Γ = (A/2) 2/3 ε 1/3 , which yields the conductivity in the center of LLs (in the following denoted by σ peak xx ) The conductivity of the background density of states with a broadening of Γ ∼ γε 2 is denoted by σ bg xx and reads Next, we turn to the intermediate range of the location of the chemical potential, Ω(Ω/A) 1/5 Ω(Ω/A) 1/3 . In this case, the Ω 8 -term in the denominator of Eq. (26) dominates, yielding In the last line of Eq. (30) we have taken low-temperature limit (here the condition T µ is sufficient). Finally, for higher chemical potential, ε > Ω(Ω/A) 1/3 , which is the regime of overlapping LLs, we neglect Ω 8 in the denominator of Eq. (26), which leads to The result coincides with the conductivity σ xx,0 in the absence of magnetic field and does not depend on the chemical potential. 
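For orientation, the semiclassical Drude expression invoked throughout this section is reproduced below in its standard form; the 1/3 prefactor and the form of the cyclotron frequency are our assumption of the textbook conventions for an isotropic linear spectrum, while τ_tr = (3/2)τ_q and τ_q = 1/(2Γ) are taken from the text.

\sigma_{xx}(\mu) = \frac{e^2 v^2 \nu(\mu)\,\tau_{\mathrm{tr}}(\mu)}{3\left[1+\omega_c^2(\mu)\,\tau_{\mathrm{tr}}^2(\mu)\right]}, \qquad
\omega_c(\mu) = \frac{eHv^2}{\mu}, \qquad
\tau_{\mathrm{tr}} = \frac{3}{2}\,\tau_q = \frac{3}{4\,\Gamma(\mu)}.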
Magnetooscillations of the conductivity stem from the oscillations of the density of states ν(ε) and of the transport scattering time τ tr (ε), see Ref. [44]. For a Weyl semimetal, the density of states with magnetooscillations is given by where is the Dingle factor determined by the quantum scattering time τ q . Note that in the case of a conventional 3D material with parabolic dispersion (see Ref. [45]), the frequency of the oscillations is a factor of 2 larger then in the case of Weyl semimetals. A similar behavior is encountered in the 2D case of graphene [46] in comparison to conventional 2D materials. The non-equidistant behavior of the LLs for relativistic dispersion relations is expressed via the energy dependent cyclotron frequency ω c . For ω c (ε)τ q (ε) 1, which corresponds exactly to the condition of overlapping LLs, the first harmonics, k = 1, is the least damped term and hence dominates the oscillations. Using Eqs. (32) and (24), we find the oscillatory contribution to the conductivity (the SdHO) for the case of overlapping LLs: where σ xx,0 is the smooth part of the conductivity calculated above [Eq. (31)]. As usual, the SdHO are exponentially damped in the regime of overlapping LLs, in contrast to the case of separated LLs. We conclude this section with a summary of the results for the conductivity, in the different regimes with respect to magnetic field, chemical potential, and disorder strength. IV. HALL CONDUCTIVITY In this section, we calculate the Hall conductivity. According to the Kubo-Streda formula [47], the Hall conductivity is given by It is convenient to split up the Hall conductivity into a normal, σ I xy , and an anomalous, σ II xy , contributions. The normal contribution is determined by states near the Fermi level and can be simplified by using the orthogonality of the wave functions of different LLs. We find The anomalous contribution reflects the thermodynamic properties of the system in the presence of magnetic field and can be expressed as Here N is the electron density defined as follows: Below, we will first calculate the Hall conductivity in the clean case, and then will incorporate disorder which is encoded in the density of states ν(ε). A. Clean case We now briefly discuss the Hall conductivity in the clean case. The Green functions in Landau representa-tion for the clean case read We start with the calculation of the normal part of Hall conductivity and substitute the Green function from Eqs. (39) and (40) in Eq. (36). After the evaluation of the integral over energy ε and of sum over energy bands λ, the normal contribution to the Hall conductivity reads The evaluation of the integrals for T = 0 leads to The normal contribution of the Hall conductivity shows singularities when the chemical potential is at the center of the one particular LL, µ = Ω √ n, see Fig. 3 (a). The anomalous contribution to the Hall conductivity is obtained from Eq. (37) and the density of states ν(ε) of a Weyl semimetal in clean case, We evaluate the integral in Eq. (38) for T = 0 and take the derivative of N with respect to magnetic field H: The first term of Eq. (44) also shows singularities when the chemical potential is at the center of the one particular LL opposite of those of the normal contribution from Eq. (42), see Fig. 3 (b). Therefore, these singularities are exactly canceled in the total Hall conductivity. As demonstrated in Appendix A, this cancellation occurs in the clean case in the general case of arbitrary T . 
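Both the SdHO discussed above and the clean-limit Hall oscillations share the phase dictated by the Weyl Landau levels ε_n = Ω√n: a level crosses the chemical potential whenever µ²/Ω² passes an integer, so the oscillations are periodic in 1/H (since Ω² = 2ev²H). The Dingle factor is quoted below in its textbook form, which we assume is the one intended here.

\frac{\mu^2}{\Omega^2} = \frac{\mu^2}{2 e v^2 H} \in \mathbb{Z} \quad \text{(LL crossings)}, \qquad
\delta(\varepsilon) = \exp\!\left[-\frac{\pi}{\omega_c(\varepsilon)\,\tau_q(\varepsilon)}\right].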
The evaluation of the sum over LLs with the Euler-Maclaurin formula leads, to the leading order, to Eq. (45). For µ > Ω, Eq. (45) describes the smoothened part of the Hall conductivity. On top of this background contribution there is an oscillatory part induced by the Landau quantization. The Hall conductivity (the normal and anomalous parts and the total Hall conductivity) without disorder is visualized in Fig. 3, where the oscillations induced by the Landau quantization can be seen clearly in the case of a fixed chemical potential. Already based on this plot, one can expect that in the presence of disorder the total Hall conductivity is only weakly changed, since the disorder-induced broadening would only smoothen the oscillatory part of the curve. Further, we can express the Hall conductivity for a fixed particle density N instead of a fixed chemical potential, as relevant to experiments. The magnetooscillations in the chemical potential are then exactly canceled by the oscillations in the particle density: see the inset in Fig. 3(c). Here the zero level of the density N is chosen in such a way that N = 0 for the chemical potential located at the Dirac point, µ = 0.

B. Normal Hall conductivity in the presence of disorder

Now, we turn to the Hall conductivity in the presence of disorder and first proceed with the evaluation of the normal contribution. As explained in Sec. II, we distinguish again between the cases when the chemical potential is within the zeroth LL or within higher LLs. We focus on low temperatures, T → 0, throughout the whole section. We will start with the calculation of the Hall conductivity under the following conditions: (i) the zeroth LL is separated from higher LLs, A ≪ Ω; (ii) excitations to higher LLs are suppressed, µ < Ω. Using the Green functions for energies close to the zeroth LL, Eqs. (15) and (16), the formula for the normal contribution to the Hall conductivity, Eq. (36), transforms to Eq. (46), where z = vp_z. In the following, we will split the summation over the LL index into the term with n = 0 and the terms with n > 0. In contrast to the conductivity σ_xx, the contribution of the terms with n > 0 in σ_xy^I is of the same order as the n = 0 term. The evaluation of the terms under the conditions A ≪ Ω and µ < Ω gives, to the leading order, Eq. (48). Clearly, this result (linear in disorder) matches the result for a clean system, where the normal contribution is absent for the case of the chemical potential located in the zeroth Landau level. We will see below that the term (48) is negligible in comparison with the anomalous contribution to the Hall conductivity. Now, we turn to a higher chemical potential, µ > Ω, and analyze the contribution of higher LLs to σ_xy^I. For ε ≫ Ω, the difference between the self-energies for the two bands can be neglected and we can use the Green functions (19) and (20) in Eq. (36). The detailed calculation is presented in Appendix B. The normal contribution to the Hall conductivity takes the form of Eq. (49); the limit of vanishing disorder, Γ → 0, is reproduced in Eq. (B3). Similarly to σ_xx, the normal contribution to the Hall conductivity can be cast in the form of a semiclassical Drude formula, Eq. (50). In the regime where the contribution of the separated LLs to the density of states exceeds the contribution of the background, Ω ≪ µ ≪ Ω(Ω/A)^{1/5}, the second term in Eq. (49) dominates; in the limit T → 0, we get Eq. (51). Next, we evaluate the Hall conductivity for a larger chemical potential, when the LLs are separated but the contribution of the background dominates, or else the LLs fully overlap.
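The Euler-Maclaurin step used above (replacing a sum over LLs by a smooth integral plus endpoint corrections) is easy to verify numerically. The summand √(µ^2 − Ω^2 n) below is only a toy stand-in for a LL-like sum, chosen by us for illustration; it is not the paper's actual Eq. (45) integrand.

```python
import numpy as np
from scipy.integrate import quad

def f(n, mu, Omega):
    return np.sqrt(mu**2 - Omega**2 * n)

def ll_sum(mu, Omega):
    """Direct sum over occupied LLs, n = 0 .. n_max."""
    n_max = int(mu**2 / Omega**2)
    return sum(f(n, mu, Omega) for n in range(n_max + 1))

def euler_maclaurin(mu, Omega):
    """Leading Euler-Maclaurin approximation: integral plus boundary terms."""
    n_max = mu**2 / Omega**2
    integral, _ = quad(lambda n: f(n, mu, Omega), 0.0, n_max)
    boundary = 0.5 * (f(0.0, mu, Omega) + f(n_max, mu, Omega))
    return integral + boundary

mu, Omega = 10.0, 1.0
print(ll_sum(mu, Omega), euler_maclaurin(mu, Omega))
# The difference between the two numbers is the oscillatory
# (Landau-quantization) part that rides on top of the smooth background.
```

This is precisely the split into a smoothened part, Eq. (45), and magnetooscillations described in the text.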
In these cases, the expressions for the density of states, transport scattering time, and cyclotron frequency are given by Eqs. (23), (24), and (25), respectively. In the range Ω(Ω/A)^{1/5} ≪ µ ≪ Ω(Ω/A)^{1/3}, which corresponds to the case of separated LLs with the dominant background density of states, we find Eq. (52). For fully overlapping LLs, µ ≫ Ω(Ω/A)^{1/3}, the normal contribution to the Hall conductivity is given by Eq. (53).

C. Anomalous Hall conductivity in the presence of disorder

In this Section, we calculate the anomalous contribution to the Hall conductivity in the presence of disorder. Furthermore, we subtract the contribution of states below the charge neutrality point, since they do not contribute to the Hall conductivity. This is shown explicitly in Appendix A for the clean case and holds for finite disorder in the weak-disorder regime, γΛ < 1, considered here. The density of states of a disordered Weyl semimetal is given by Eq. (54). In the calculation of the self-energy, we distinguish between the zeroth LL and the others. For the energy at the zeroth LL the self-energy is given by Eq. (55), which will be used in the regime µ < Ω. The anomalous Hall conductivity in this regime does not depend on weak disorder, γΛ ≪ 1, and is given by Eq. (56). This result matches the ac anomalous Hall conductivity σ_xy(ω) obtained in Ref. [48] in the limit ω → 0. For µ > Ω the situation is more subtle. The self-energy depends on the strength of the broadening and, for separated LLs, µ < Ω(Ω/A)^{1/3}, on the actual position of the chemical potential with respect to the center of a given LL. The shape of the density of states consists of the peak at the center of the LL, the tail of the LL, and the background, see Ref. [31]. For separated LLs with a large background and for overlapping LLs, the density of states is dominated by the background contribution. The anomalous Hall conductivity for µ > Ω takes the form of Eq. (57), where Γ(ε) is given by Eq. (11). Under the same approximations as in the calculation of σ_xy^I, we obtain the anomalous Hall conductivity in the disordered case, Eq. (58), where Γ_n is defined in Eq. (11). For Γ → 0 in Eq. (11), the result (44) obtained in the limit of vanishing disorder is reproduced. Moreover, for non-overlapping LLs, the broadening of the LLs in Eq. (58) is only important in the term of the sum over LLs that corresponds to the LL where the chemical potential is located; for all other n one can replace Aµ/Γ_n(µ) with √(µ^2 − Ω^2 n), as in Eq. (44). The smoothened part of the Hall conductivity for separated LLs, µ^{3/2}γ^{1/2} < Ω < µ, is thus the same as in the limit without disorder. The effects of the oscillations are minor compared to the smoothened part of the Hall conductivity. Therefore, we will use Eq. (59) in the following sections to calculate the magnetoresistance. The oscillatory part of the Hall conductivity for fully separated LLs shown in Fig. 4 visualizes the effect of disorder on the Hall conductivity. For overlapping LLs, the main term in the broadening is given by Γ = 2Aε^2/Ω^2, which is independent of magnetic field, and therefore the anomalous Hall conductivity is zero to the leading order. The corrections due to magnetic field in the case of overlapping LLs are proportional to the Dingle factor, as described above. The particle density for zero temperature is then given by Eq. (60). Since the Dingle factor is exponentially small for overlapping LLs, the anomalous part of the Hall conductivity decays exponentially. The same applies to the TMR. The contributions of overlapping LLs to the TMR will therefore be dominated by effects of finite temperature and will not be discussed here.
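At a LL center the quoted broadening Γ = (A/2)^{2/3}ε^{1/3} is the solution of a cubic self-consistency condition, Γ^3 = A^2 ε/4. The snippet below (our construction: the cubic is reverse-engineered from the quoted solution, while the paper's actual self-consistent equation is its Eq. (11)) solves it by Newton iteration and checks the closed form.

```python
def scba_broadening(A, eps, tol=1e-12):
    """Solve Gamma**3 = A**2 * eps / 4 by Newton iteration.
    This cubic is an assumption chosen so that its root reproduces the
    quoted Gamma = (A/2)**(2/3) * eps**(1/3) at a LL center."""
    g = max(A, 1e-6)                       # positive starting guess
    while True:
        f = g**3 - A**2 * eps / 4
        g_new = g - f / (3 * g**2)         # Newton step (stays positive)
        if abs(g_new - g) < tol * g_new:
            return g_new
        g = g_new

A, eps = 1e-3, 5.0
print(scba_broadening(A, eps))             # iterative root
print((A / 2) ** (2 / 3) * eps ** (1 / 3)) # closed form quoted in the text
```

The unusual ε^{1/3} growth of the peak broadening is what ultimately produces the unconventional shape of the SdHO discussed above.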
V. MAGNETORESISTANCE FOR POINTLIKE IMPURITIES

We now turn to the evaluation of the TMR, which quantifies the difference between the resistivity ρ_xx(H) in a finite magnetic field and the resistivity at H = 0. Using the inversion of the conductivity tensor, ρ_xx = σ_xx/(σ_xx^2 + σ_xy^2), we express the TMR through the conductivities at zero and finite magnetic fields, σ_xx(0) and σ_xx(H), as well as the Hall conductivity σ_xy(H), and employ the results from the previous sections. The results for the TMR are either dominated by a large conductivity, σ_xx ≫ σ_xy, leading to Eq. (63), or dominated by a large Hall conductivity, σ_xy ≫ σ_xx, resulting in Eq. (64). In what follows, we will distinguish between fixed chemical potential and fixed particle density. Let us start with a fixed chemical potential µ. We fix the values of µ and γ and increase the magnetic field. A detailed evaluation of the TMR in the different regimes is presented in Appendix C and is summarized in Eq. (65). For µ > Ω ≫ µ^{5/4}γ^{1/4}v^{−3/4}, the function Γ(µ) is given by Eq. (11), leading to oscillations of the TMR between zero and a maximum value proportional to H^{4/3}. This behavior is visualized in Fig. 5. For lower magnetic fields, Ω ≪ µ^{5/4}γ^{1/4}v^{−3/4}, the TMR is given within the SCBA by an exponentially small correction, as discussed in Appendix C. It is important to emphasize that the TMR is only large for the zeroth LL (for magnetic fields Ω > µ). For lower magnetic fields, a small background magnetoresistance is only present due to the different shapes of the oscillations in the conductivity and the Hall conductivity and is zero for a smoothened curve. In the regime of the zeroth LL, the magnetoresistance first grows linearly with H as long as the Hall conductivity is larger than σ_xx, and then decays (being proportional to H^{−1}) in the limit of strongest H, where σ_xx ≫ σ_xy. A schematic plot of the TMR is presented in Fig. 6. The effect of finite temperature for separated LLs is discussed at the end of Appendix C. There, we assume that the temperature is still smaller than the chemical potential, T < µ, but larger than the distance between LLs, T > Ω/√n, such that the LLs are smeared by temperature. The magnetoresistance is then small and linear in magnetic field for Ω ≪ µ^{1/2}γ^{−1/2}, Ω < µ, reading as in Eq. (66). We continue now with the experimentally more relevant situation of a fixed particle density N. The details of the calculations are discussed in Appendix C; here we present the summary of the results, Eq. (67). We observe that the behavior of the TMR at a fixed particle density only changes in the zeroth LL. For higher LLs, the particle density does not depend on magnetic field. The schematic behavior of the magnetoresistance is visualized in Fig. 7. We conclude this section with a short discussion of the Hall resistivity ρ_xy for fixed particle density, Eq. (68). For overlapping LLs, the anomalous Hall conductivity is exponentially small, see Eq. (60). In this regime, the conductivity, Eq. (31), and the normal Hall conductivity, Eq. (53), combine to Eq. (69). For separated LLs the Hall conductivity, Eq. (46), is larger than σ_xx in magnetic fields up to Ω ∼ N^{1/4}γ^{−1/4}, again leading to Eq. (69). For higher fields, Ω > N^{1/4}γ^{−1/4}, the conductivity of the lowest LL, Eq. (18), has a large contribution, resulting in Eq. (70). Therefore, the Hall resistivity shows a linear behavior up to the highest fields, where it increases as the third power of the magnetic field.

VI. MAGNETORESISTANCE FOR COULOMB IMPURITIES

A. Screening

In the previous parts of the paper we considered short-range impurities.
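The two limiting formulas for the TMR follow directly from the tensor inversion; the short check below (ours) computes ∆ρ = ρ_xx(H)/ρ_xx(0) − 1 and compares it with the two quoted limits.

```python
def rho_xx(sigma_xx, sigma_xy):
    """Longitudinal resistivity from inverting the conductivity tensor."""
    return sigma_xx / (sigma_xx**2 + sigma_xy**2)

def tmr(sxx_H, sxy_H, sxx_0):
    """Transverse magnetoresistance: rho_xx(H)/rho_xx(0) - 1."""
    return rho_xx(sxx_H, sxy_H) / rho_xx(sxx_0, 0.0) - 1.0

# Limit sigma_xx >> sigma_xy: Delta_rho -> sxx_0/sxx_H - 1
print(tmr(0.5, 1e-4, 1.0), 1.0 / 0.5 - 1.0)
# Limit sigma_xy >> sigma_xx: Delta_rho -> sxx_H*sxx_0/sxy_H**2 - 1
print(tmr(0.5, 50.0, 1.0), 0.5 * 1.0 / 50.0**2 - 1.0)
```

The competition between σ_xx and σ_xy decides which of the two limits applies in each region of the (Ω, µ) plane, which is what generates the regime structure described in this section.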
In this Section, we are going to generalize the obtained results to the case of screened Coulomb impurities, which is expected to be particularly relevant experimentally. The potential of a single Coulomb impurity is given by Eq. (71), where ε_∞ is the background dielectric constant and κ is the inverse screening radius, which is determined by the thermodynamic density of states ∂n/∂µ and reads, in the absence of disorder, as in Eq. (72). In view of the singularity of Eq. (72) in the limit µ, T, H → 0, the effect of disorder becomes important, requiring a self-consistent treatment of disorder in the density of states. The method is discussed in Ref. [31], but, for the sake of clarity, we repeat the arguments below. The analysis below is based on the assumption that the "fine-structure" constant is not small, Eq. (73). In realistic situations with a fine-structure constant of the order of unity, the characteristic values of κ are of the order of k_typical ∼ max(Ω, T)/v, the typical values of the wave vector k. Under condition (73), the parametric dependence of the conductivity for the screened Coulomb disorder is governed by an effectively pointlike correlator, Eq. (74). From now on, we suppress the numerical prefactors. The correlator (74) describes an effective white-noise disorder with a strength γ(H, T, µ) that depends on magnetic field, temperature, and chemical potential, Eq. (75), where N_imp is the density of impurities. In the limit H, T, µ → 0, Eq. (75) leads to a divergent disorder strength. Therefore, a self-consistent treatment of the impurity screening becomes necessary. At max(Ω, T, µ) ≲ ε_imp, the impurity-induced density of states will determine the screening. For Coulomb impurities, the weak-disorder regime is valid under the condition max(Ω, T, µ) ≫ ε_imp. Under these conditions, the results of the previous sections are applicable to the Coulomb case with the replacement of γ by γ(H, T, µ). The dependence of the strength of the screened disorder on magnetic field plays a crucial role in the H dependence of the TMR for charged impurities.

B. Magnetoresistance

In order to find the TMR, we substitute γ(H, µ) ∼ ε_imp^3 v^3 [max(Ω, µ)]^{−4} for γ in the conductivity and the Hall conductivity. As we do not keep numerical prefactors, the vertex corrections can be disregarded (since they only modify these prefactors). The particular substitution in each regime is done in Appendix D; here we only state the results. For charged impurities, we need to distinguish between µ > ε_imp and µ < ε_imp. We start with the case µ < ε_imp, where only the overlapping LLs and the zeroth LL are important. We fix µ and ε_imp while increasing the magnetic field; the resulting TMR is given by Eq. (78). The TMR for lower magnetic fields is exponentially small for the same reason as for pointlike impurities. We find a linear TMR for the zeroth LL and Coulomb impurities in fields up to Ω ∼ ε_imp^{3/2}µ^{−1/2}. In the highest magnetic fields, the TMR at a fixed chemical potential vanishes as H^{−1}. We will see below that the behavior of the TMR in the ultraquantum limit is different for the case of a fixed density, where the TMR keeps growing linearly. In the opposite regime, µ > ε_imp, we have both the regime of separated LLs and that of separated LLs with the dominating background density of states. The TMR is then given by Eq. (79).
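The essential new ingredient compared to Sec. V is that the effective disorder strength itself depends on the magnetic field. A minimal sketch (ours; the crossover at ε_imp and all prefactors are schematic assumptions based on the scalings quoted above):

```python
def gamma_eff(Omega, T, mu, eps_imp, v=1.0):
    """Effective white-noise strength for screened Coulomb impurities:
    gamma ~ eps_imp**3 * v**3 / max(Omega, T, mu)**4, with eps_imp acting
    as an infrared floor set by the self-consistent impurity screening."""
    scale = max(Omega, T, mu, eps_imp)
    return eps_imp**3 * v**3 / scale**4

# In the quantum limit the screened disorder weakens with increasing field;
# this is what converts the 1/H decay of the pointlike-impurity TMR into
# the linear-in-H TMR of the Coulomb case.
for Om in (0.5, 1.0, 2.0, 4.0):
    print(Om, gamma_eff(Omega=Om, T=0.0, mu=0.2, eps_imp=0.3))
```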
FIG. 8. Schematic illustration of the TMR for Coulomb impurities at a fixed chemical potential, µ ≫ ε_imp. At the lowest fields, the TMR is determined by separated LLs with a large background; with increasing magnetic field, first the separated LLs give rise to peaks in the TMR (indicated by the shaded region), and finally the zeroth LL becomes relevant for transport. The scaling of the TMR in the various regimes is given by Eq. (79).

FIG. 9. Schematic illustration of the TMR for Coulomb impurities at a fixed particle density, N ≫ ε_imp^3. The corresponding scaling is given by Eq. (81).

In Eq. (79), Γ_C(µ, Ω) defines the oscillations of the conductivity and is given by Eq. (D3). As for pointlike impurities, the TMR is vanishingly small for lower magnetic fields. Furthermore, compared to pointlike impurities, the magnetic-field dependence of the TMR changes only for the lowest LL, Ω > µ. For the lowest LL, the screening is magnetic-field dependent, while for higher LLs the screening is dominated by the chemical potential. Therefore, Fig. 5 can be redrawn just by changing the dependence on the chemical potential and the disorder strength ε_imp. A schematic plot of the TMR is presented in Fig. 8. If we fix the particle density, as relevant for experiments, the magnetic-field dependence of the resistivity of the zeroth LL changes because of the magnetic-field dependence of the particle density. The TMR for N^{1/3} < ε_imp is given by Eq. (80). The TMR in the limit of the highest magnetic fields is linear in H, which agrees with the results of Refs. [31,43]. For N^{1/3} > ε_imp, the TMR is given by Eq. (81). The resulting linear TMR in the highest magnetic fields is in agreement with Eq. (80) and with Refs. [31,32,43]. The only difference is the replacement of the disorder scale ε_imp in the slope of the TMR with N^{1/3}. The TMR in the lower fields remains vanishing, with small oscillations, see Fig. 9. The resulting "phase diagrams" for the TMR in the cases of fixed chemical potential and fixed density are presented in Figs. 10a and 10b, respectively. For finite temperature under the conditions T < µ and T > Ω/√n, the magnetoresistance calculated in Appendix C applies here with the screened disorder strength; for a fixed particle density, the magnetoresistance given by Eq. (66) then takes the form of Eq. (82) in the corresponding range of fields set by N and ε_imp. Finally, we address the Hall resistivity, Eq. (68), for a fixed particle density. Similarly to the case of pointlike impurities, the conductivity and the Hall conductivity away from the quantum limit combine into a Hall resistivity of the form of Eq. (69). In the quantum limit, Ω > N^{1/3}, the conductivity, Eq. (D2), and the Hall conductivity, Eq. (46), scale identically with magnetic field. Therefore, the Hall resistivity retains the same form for ε_imp < N^{1/3}. In the physically most relevant situation, where a finite particle density is induced by donors (charged impurities), ε_imp ∼ N^{1/3}, the Hall resistivity is of the same order as the TMR. To conclude this section, we outline its main findings: (i) for Coulomb impurities, the TMR is linear in the ultra-quantum limit; (ii) in the experimentally relevant case, ε_imp ∼ N^{1/3}, the Hall resistivity is of the same order as the TMR; (iii) strong SdHO are observed in moderate magnetic fields, where the background TMR is negligible. All these findings are in agreement with the numerical results of Ref. [32]. The findings (i) and (ii) conform with the experimental observations [7] of a strong linear TMR comparable to the Hall resistivity. However, the above model treated within the SCBA does not explain the emergence of the SdHO on top of a rapidly growing background TMR as observed in experiments, contrary to (iii). In the next section, we propose a model that can explain such a behavior.

VII. MAGNETORESISTANCE FOR SHIFTED WEYL NODES

We now discuss a model with Weyl nodes shifted in energy, see Fig. 1.
In various experiments [7,41], the different pairs of Weyl nodes are shifted in energy with respect to each other, such that some pairs of nodes are characterized by a positive chemical potential, whereas other nodes have a negative chemical potential, counted from the corresponding nodal points. The conductivity σ_xx is an even function of magnetic field and does not depend on the sign of the chemical potential in a particle-hole symmetric spectrum, so that the contributions of the different nodes to σ_xx just add up. Even exactly at charge neutrality, the conductivity of each pair of nodes is determined by a finite density of quasiparticles (electrons or holes, N_+ and N_−, respectively), similarly to the consideration of a single node above. It is important to notice that away from charge neutrality the SdHO show a superposition of oscillations from the pairs of nodes characterized by the different chemical potentials. At the same time, the Hall conductivity is an odd function of the chemical potential and hence vanishes at charge neutrality. Therefore, the distance to the complete charge compensation point, which in realistic cases is typically smaller than the chemical potential of each pair of nodes (see the discussion in Ref. [7]), is of crucial importance for the Hall response. We will first discuss the case when the chemical potentials of the different nodes correspond to the charge compensation point, characterized by a vanishing Hall conductivity, σ_xy = 0. The magnetoresistance is then fully determined by the conductivity σ_xx. As we assume that the carriers in one pair of Weyl nodes have the chemical potential ∆ while in the other pair the chemical potential is −∆, as depicted in Fig. 1, the total conductivity is that of a single pair multiplied by the number of pairs of Weyl nodes.

A. Pointlike impurities

To obtain the TMR for the case of zero Hall conductivity, we use Eq. (34) for the conductivity in the different regimes. We now fix the values of ∆ and γ and analyze the evolution of the TMR with increasing magnetic field, Eq. (87). In the regime of separated LLs, γ^{1/4}∆^{5/4}v^{−3/4} ≪ Ω ≪ ∆, we find a sublinear (H^{2/3}) behavior of the minima of the SdHO in the TMR, while the maxima of the TMR show a quadratic growth with magnetic field. The SdHO and the TMR as obtained from the numerical solution of the SCBA equations are depicted in Fig. 11. In the limit of the highest magnetic fields, the TMR decays as 1/H, similarly to the case of non-shifted Weyl nodes.

FIG. 11. TMR for separated LLs as a function of Ω^2/∆^2 for pointlike impurities and for Weyl nodes shifted in energy by 2∆. The results are obtained by using Eq. (18) for ∆ < Ω and Eq. (27) for ∆ > Ω. Red, blue, and green lines correspond to A∆/Ω^2 = 5·10^{−3}, 6·10^{−3}, 7·10^{−3}, respectively. For all curves Λ/Ω^2 = 100.

B. Charged impurities

The condition of overall charge neutrality of the sample at zero net charge of carriers (N_+ = N_−) can be maintained for a finite concentration of Coulomb impurities when the concentrations of positively and negatively charged impurities are equal. The conductivity for Coulomb impurities is analyzed in Appendix D and is given by Eqs. (D9) and (D10). For fixed values of ∆ and ε_imp, we first calculate the TMR for ∆ < ε_imp, Eq. (88). For ∆ > ε_imp, the evolution of the TMR with increasing magnetic field is described by Eq. (89) for Ω < ∆. These results nicely match at the border of the regimes in the numerical evaluation. Red, blue, and green lines in Fig. 12 correspond to ε_imp^3/∆^3 = 5·10^{−3}, 6·10^{−3}, 7·10^{−3}, respectively,
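The compensation argument can be illustrated with a minimal two-node sketch (ours; both node conductivities below are toy functions with the correct parities, standing in for Eqs. (18)/(27) and the Hall results of the previous sections):

```python
def sigma_xx_node(mu, H, gamma=1e-3):
    """Toy conductivity of one pair of nodes: even in the chemical potential."""
    gamma_bg = gamma * (mu**2 + H)                 # schematic broadening
    return mu**2 / gamma_bg / (1 + (H / max(abs(mu), 1e-9)) ** 2)

def sigma_xy_node(mu, H):
    """Toy Hall conductivity of one pair of nodes: odd in the chemical potential."""
    return mu**3 / (H + 1e-9)

delta, H = 0.5, 0.2
sxx = sigma_xx_node(+delta, H) + sigma_xx_node(-delta, H)   # contributions add
sxy = sigma_xy_node(+delta, H) + sigma_xy_node(-delta, H)   # exact cancellation
print(sxx, sxy)   # finite sigma_xx, zero sigma_xy at the compensation point
```

With σ_xy = 0 but σ_xx governed by a finite quasiparticle density ±∆ in each pair of nodes, the TMR retains the separated-LL oscillations while the Hall response is suppressed, which is the key mechanism of this section.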
with Λ/Ω^2 = 100 for all curves. In both limits we find a large, linear TMR in the quantum limit, where only the lowest LL contributes to transport. We observe that the linear TMR in the highest magnetic fields is very robust and does not depend on whether the Weyl nodes are shifted in energy or not (cf. Sec. VI). For lower magnetic fields, the result is similar to the case of pointlike impurities: the minima of the TMR evolve as H^{2/3} and the maxima as H^2 with magnetic field. The result of the numerical evaluation of the TMR is depicted in Fig. 12, showing both the magnetooscillations and the TMR in the ultra-quantum limit. The overall picture of the TMR agrees with the behavior found in experiments. Specifically, with increasing magnetic field, the TMR shows strong SdHO on top of the rapidly growing background and crosses over into a purely linear TMR without magnetooscillations in the limit of the highest magnetic fields. Away from the exact compensation point, where the Hall resistivity is finite, the above picture of the TMR with SdHO on top of a strong background remains intact as long as σ_xx ≫ σ_xy. Denoting by δµ ∝ (N_+ − N_−)^{1/3} ≪ ∆ the distance from the neutrality point, we obtain the corresponding Hall conductivity, Eq. (90). The condition σ_xx ≫ σ_xy translates, with the background conductivity given by Eq. (D6) at ∆ > Ω, into Eq. (91). This can be fulfilled in a broad range of magnetic fields when the concentrations of positively and negatively charged impurities are close.

VIII. SUMMARY AND DISCUSSION

To summarize, we have generalized the theory of the transverse magnetoresistivity of Weyl semimetals developed in Ref. [31] to the case of a finite chemical potential (finite carrier density). We have considered two models of disorder: (i) short-range impurities and (ii) charged (Coulomb) impurities. Away from charge neutrality, the analysis includes the calculation of the Hall conductivity and of the Shubnikov-de Haas oscillations. We have further extended the consideration to a realistic model with Weyl nodes shifted in energy (as found in various Dirac and Weyl materials), with the chemical potential corresponding to total charge neutrality. We have identified a rich variety of regimes of the resistivity scaling in the plane spanned by the magnetic field and the chemical potential (or carrier density) that emerge because of the unusual broadening of Landau levels and are governed by a competition between the conductivity σ_xx and the Hall conductivity σ_xy. We have also found that the TMR in the strongest magnetic fields depends on whether the particle density or the chemical potential is fixed. For pointlike impurities, the TMR is negligible in moderate magnetic fields (even for separated Landau levels), showing peaks at the centers of the Landau levels. A pronounced magnetoresistance is only observed for the zeroth Landau level, where the TMR decays as 1/H in the ultra-quantum limit for both fixed chemical potential and fixed particle density, see Figs. 5 and 6. In the model of Coulomb impurities (which is expected to be more relevant experimentally), while the behavior of the TMR in moderate magnetic fields is similar, the crucial difference appears in the strongest magnetic fields, where a linear-in-H TMR emerges, see Figs. 8-10. For a fixed chemical potential, the TMR is linear only in a finite range of H in a close vicinity of the charge neutrality point and decreases with magnetic field as 1/H otherwise. For a fixed particle density (which should be the case in experiments), we obtain in the ultra-quantum limit a nonsaturating linear TMR of the type first discovered in Ref. [43].
While the prefactor of the linear TMR away from the neutrality point is different from that at charge neutrality, Ref. [31], the scaling with magnetic field is the same. Moreover, the conductivity and the Hall conductivity are of the same order in the experimentally relevant situation [7,42] of the particle density being roughly equal to the concentration of impurities, N ∼ ε_imp^3. Within this model, the range of magnetic fields where the Shubnikov-de Haas oscillations are developed corresponds to a weak background TMR, while the strong (linear) TMR emerges only in the ultra-quantum limit, where only the zeroth Landau level contributes to transport and hence no magnetooscillations can be observed. Further, we have analyzed a more sophisticated (but experimentally relevant) model which describes different pairs of Weyl nodes shifted in energy with respect to each other (Fig. 1). In such systems, the Hall conductivity can be partly or fully compensated, while σ_xx in each pair of nodes corresponds to a finite density of quasiparticles. Within this model, the range of moderate magnetic fields, where Shubnikov-de Haas oscillations become strong, overlaps with the range of fields where the background TMR grows rapidly. The minima of the oscillations evolve with magnetic field as H^{2/3}, while the maxima increase quadratically with magnetic field. This holds for both models of disorder (pointlike and Coulomb impurities). Thus, for shifted pairs of Weyl nodes, an intermediate regime of magnetic fields emerges where the Shubnikov-de Haas oscillations are superimposed on the strong background magnetoresistance originating from separated Landau levels. (This is impossible in the case of the ultra-quantum linear TMR of Refs. [31,43], which is entirely governed by the lowest LL.) The difference between the two models of disorder manifests itself in the ultra-quantum limit, where only the zeroth LL contributes to transport. There, we find a decay of the magnetoresistance proportional to 1/H for pointlike impurities and a large, linear magnetoresistance for charged impurities, consistent with those found in Refs. [31,43]. The results for the TMR in the two different models of disorder are visualized in Figs. 11 and 12. We emphasize that this work focused on two idealized cases: (i) no charge compensation between different pairs of Weyl nodes, and (ii) complete charge compensation between the different pairs of nodes. The intermediate case of a partial compensation, as present in experiments [7,41], would show a variety of effects governed by the competition between σ_xx and σ_xy as well as a superposition of Shubnikov-de Haas oscillations coming from different nodes. It should be noted that the calculations in this paper have been mainly performed at zero temperature. As usual, finite temperature smears the Shubnikov-de Haas oscillations. Our results are well applicable for temperatures smaller than the distance between neighboring Landau levels, in the regime where pronounced Shubnikov-de Haas oscillations are observed. There, the finite temperature only leads to a small correction while keeping the background TMR essentially unchanged. We also briefly discussed the effect of thermal smearing at higher temperatures, where the thermal averaging exponentially suppresses the magnetooscillations and leads to a finite background TMR even in the model of non-shifted Weyl nodes, similarly to the case of charge neutrality [31]. The TMR is small and linear in the regime of thermal averaging [cf. Eqs. (66) and (82)].
For Coulomb impurities, this thermally smeared quantum TMR, Eq. (82), scales in the same way as the ultra-quantum TMR, second line of Eq. (81). A natural extension of this work would be a detailed discussion of finite temperature away from charge neutrality in the whole parameter space. Our results are in qualitative agreement with the main experimental findings on the TMR, Refs. [7,39-42], where a strong, linear TMR was observed at finite carrier density in the ultra-quantum limit, which was comparable in magnitude to the Hall resistivity. The qualitative behavior of the TMR for shifted Weyl nodes (Fig. 12) is similar to that observed in experiments, where pronounced Shubnikov-de Haas oscillations were superimposed on top of a rapidly growing background TMR. The order of magnitude of the TMR in Fig. 12 is also comparable to the experimentally observed magnitude of the effect. Before concluding the paper, we briefly discuss alternative mechanisms of strong TMR that can emerge beyond the SCBA in the range of magnetic fields where magnetooscillations are strong. The first mechanism is based on classical memory effects (for reviews of memory effects in conventional 2D and 3D systems see Refs. [44] and [49], respectively). In conventional 3D systems, a pronounced memory effect in a smooth disorder potential is based on the trapping of cyclotron orbits in the z direction [50]. For the case of Weyl semimetals, such a mechanism was recently addressed in Ref. [29]. This mechanism requires a large correlation radius of disorder ξ, which may be the case in two dimensions (large spacer) but seems unlikely in three dimensions, unless the "fine structure constant", Eq. (73) (assumed to be of order 1 in the present work), is very small. Even within the assumption of ξ being much larger than the cyclotron radius, Ref. [29] obtained a TMR of up to 1-2 orders of magnitude, while it is of about 5 orders in the experiment of Ref. [7]. Furthermore, in the ultra-quantum limit, this type of memory effect is expected to be strongly suppressed in Weyl systems, compared to conventional ones [49]. This is due to the chirality of the 1D modes in the z direction: the backscattering in the z direction requires internodal scattering, which is ineffective in Weyl materials (and also in Dirac semimetals in the strongest magnetic fields, which shift the Dirac points in momentum space). An interesting prospect is to analyze quantitatively the role of this memory-effect mechanism of TMR in the case of screened Coulomb impurities in Weyl semimetals and to compare it to the quantum TMR discussed in this paper. Other mechanisms for a strong TMR can be provided by interaction effects (for this mechanism in 2D systems, see Ref. [51] and references therein), including possible Luttinger-liquid effects of interaction within the 1D channels in the z direction in the ultra-quantum limit, and by electron-hole recombination in compensated systems of a finite geometry (see Ref. [52]). These mechanisms may be important in those regimes where the present model yields zero background TMR at moderate magnetic fields (higher Landau levels) and remain to be explored in realistic systems, in particular, in Weyl semimetals with shifted nodes. We do not expect, however, that these additional mechanisms of TMR would change the overall picture of the TMR developed in the present work or that they could compete with the quantum TMR in the strongest magnetic fields.
Appendix A: Anomalous Hall conductivity

To evaluate the anomalous Hall conductivity, we calculate the particle density by integrating the density of states up to the ultraviolet cutoff, Eq. (A1), where f(ε ± µ) = [exp((ε ± µ)/2T) + 1]^{−1} denotes the Fermi function. Taking the derivative with respect to H leads to the anomalous Hall conductivity, Eq. (A2). The last term of Eq. (A2) differs from the normal Hall conductivity, Eq. (41), only by its sign. The anomalous Hall conductivity then takes the form of Eq. (A3). Applying the Euler-Maclaurin formula to Eq. (A3) leads to a cancellation of the terms that depend on the ultraviolet cutoff Λ. The remaining terms reproduce Eq. (44).

Appendix B: Calculation of the normal Hall conductivity for large chemical potential

This appendix is devoted to the calculation of the normal contribution to the Hall conductivity σ_xy^I for a large chemical potential, µ ≫ Ω. Starting from Eq. (36), we use the Green functions for LLs with n > 0; the resulting formula is Eq. (B1). The evaluation of the integral over z = vp_z leads to Eq. (B2). To simplify the equation, we shift the sum over n by −1 for the terms containing n + 1 and evaluate the real part of the equation. The Hall conductivity σ_xy^I can then be written as Eq. (B3), a sum over n (up to n_max) of terms involving the combinations Ω^4 n + 2Ω^2Γ^2, the resonance denominators (ε^2 − Ω^2 n − Γ^2)^2 + 4ε^2Γ^2, and 4εΓΩ√n. We can split the sum over n into three parts: n < n_0; the terms n_0 and n_0 + 1; and n > n_0 + 1, where n_0 is the index of the resonant LL. For the part with n < n_0 we can neglect Γ, and for n > n_0 + 1 we can expand in Γ. After some algebra, the Hall conductivity takes the form of Eq. (B4). Here Γ_(n_0) and Γ_(n_0+1) are defined via the self-consistent equation Γ = Σ_n Γ_n. By evaluating the second sum, we see that the term at the upper limit, n_max − 1, cancels with the last term of Eq. (B4). Furthermore, the contribution of the lower limit n_0 + 2 of this sum and the term with n = 0 are parametrically small and can be neglected. The normal contribution to the Hall conductivity then reduces to the expression quoted in the main text, which is further evaluated there for the different regimes of LL broadening.

Appendix C: Calculation of the magnetoresistance for pointlike impurities

In this appendix we evaluate the TMR for pointlike impurities. For the lowest magnetic fields, Ω^2 < µ^3γ, all LLs overlap. The conductivity and the normal Hall conductivity are given by the Drude formula in this regime, Eqs. (31) and (53), leading to a vanishing TMR. In this regime, the anomalous Hall conductivity is exponentially small, see Eq. (60). Therefore, effects of a finite temperature (not discussed here) will dominate the TMR. For magnetic fields in the range µ^3γ < Ω^2 < µ^{5/2}γ^{1/2}, the LLs are separated, but the background density of states is still larger than the peaks of the LLs. In this region, the conductivity, Eq. (30), is smaller than the Hall conductivity, Eq. (59). The magnetoresistance calculated with Eq. (64) remains zero. A further increase of the magnetic field, µ^{5/2}γ^{1/2} < Ω^2 < µ^2, leads to pronounced LLs. The TMR is still determined by Eq. (64) (the conductivity is small compared to the Hall conductivity), but it now strongly oscillates with magnetic field because of the oscillations of the scattering rate. With the conductivity, Eq. (27), and the Hall conductivity, Eq. (59), the TMR is evaluated as in Eq. (C1), leading to

∆ρ_max(H) ∼ Ω^{8/3}/(µ^{10/3}γ^{2/3})    (C2)

at the peak (using the conductivity at the peak, Eq. (28)) and a zero background TMR (as in the previous region).
For stronger magnetic fields, µ < Ω < γ^{−1}, the TMR is determined by carriers at the zeroth LL. With the conductivity, Eq. (18), and the Hall conductivity, Eq. (56), we find that σ_xy is larger than σ_xx up to magnetic fields with Ω^2 < µγ^{−1}, resulting in Eq. (C3), where we have used Eq. (64). For yet higher magnetic fields, µ^{1/2}γ^{−1/2} < Ω < γ^{−1}, we use Eq. (63) and obtain

∆ρ ∼ 1/(γ^2Ω^2) − 1.    (C4)

We continue this appendix with the analysis of the TMR for a fixed particle density. The particle density is evaluated with Eq. (38), giving Eq. (C5). The magnetic-field dependence of the resistivity only changes for the zeroth LL (in both the conductivity and the Hall conductivity). For completeness, we start the analysis from the lowest relevant magnetic fields, N^{5/6}γ^{1/2} > Ω^2 (below this scale no TMR emerges to the leading order within the SCBA). In magnetic fields N^{5/6}γ^{1/2} < Ω^2 < N^{2/3} the TMR is finite at the centers of the LLs, Eq. (C2). Using Eq. (C5), we get

∆ρ_max(H) ∼ Ω^{8/3}/(N^{10/9}γ^{2/3})    (C6)

for the TMR at the centers of the LLs for a fixed particle density. For larger magnetic fields, N^{1/3} < Ω < γ^{−1}, the conductivity, Eq. (18), and the Hall conductivity, Eq. (56), are modified by Eq. (C5). We find that the Hall conductivity is larger than the conductivity up to magnetic fields with Ω < N^{1/4}γ^{−1/4}, resulting in Eq. (C7). For yet stronger magnetic fields, N^{1/4}γ^{−1/4} < Ω < γ^{−1}, we use Eq. (C4), which remains unaffected for a fixed particle density. The calculation of the magnetoresistance was so far limited to zero temperature. In the following, we briefly discuss the effect of finite temperatures. Finite temperature smears the LLs for T > Ω/√n. Let us consider separated LLs in the regime of low chemical potential, Ω < µ < Ω(Ω/A)^{1/5}, and temperature T < µ. In this case, the contribution of the LLs in the vicinity of the chemical potential, µ − T < Ω√n < µ + T, should be analyzed. In order to estimate the corresponding contribution to the conductivity, we replace the integral over energy by a sum over regions of width Γ(W_n) around the Landau levels, and replace Γ^{(n)}(ε) there by its maximal value Γ^{(n)}(W_n) ≡ Γ_n ∼ A^{2/3}Ω^{1/3}n^{1/6}. As a result, we get Eq. (C8). This value of the conductivity is smaller than the background conductivity, Eq. (30), but it is important for the TMR, which otherwise vanishes. The Hall conductivity for T < µ remains essentially unaffected by finite temperature. The magnetoresistance is still determined by the Hall conductivity according to Eq. (64), yielding Eq. (66). This linear magnetoresistance is small and will show exponentially suppressed Shubnikov-de Haas oscillations.
Twenty-Four Hour Blood Pressure Response to Empagliflozin and Its Determinants in Normotensive Non-diabetic Subjects Background Sodium–glucose co-transport 2 inhibitors (SGLT2i) lower blood pressure (BP) in normotensive subjects and in hypertensive and normotensive diabetic and non-diabetic patients. However, the mechanisms of these BP changes are not fully understood. Therefore, we examined the clinical and biochemical determinants of the BP response to empagliflozin based on 24-h ambulatory BP monitoring. Methods In this post-hoc analysis of a double-blind, randomized, placebo-controlled study examining the renal effects of empagliflozin 10 mg vs. placebo in untreated normotensive non-diabetic subjects, the 1-month changes in 24 h ambulatory BP were analyzed in 39 subjects (13 placebo/26 empagliflozin) in regard to changes in biochemical and hormonal parameters. Results At 1 month, empagliflozin 10 mg decreased 24-h systolic (SBP) and diastolic (DBP) BP significantly by −5 ± 7 mmHg (p < 0.001) and −2 ± 6 mmHg (p = 0.03). The effect on SBP and DBP was more pronounced during nighttime (resp. −6 ± 11 mmHg, p = 0.004; −4 ± 7 mmHg, p = 0.007). The main determinants of the daytime and nighttime SBP and DBP responses were baseline BP levels (for daytime SBP: coefficient −0.5; adj. R2: 0.36; p = 0.0007; for night-time SBP: coefficient −0.6; adj. R2: 0.33; p = 0.001). Although empagliflozin induced significant biochemical changes, none correlated with blood pressure changes, including urinary sodium, lithium, glucose and urate excretion and free water clearance. Plasma renin activity and plasma aldosterone levels increased significantly at 1 month, suggesting plasma volume contraction, while plasma metanephrine and copeptin levels remained the same. Renal resistive indexes did not change with empagliflozin. Conclusion SGLT2 inhibition lowers daytime and nighttime ambulatory systolic and diastolic BP in normotensive non-diabetic subjects. Twenty-four hour changes are pronounced and comparable to those described in diabetic or hypertensive subjects. Baseline ambulatory BP was the only identified determinant of the systolic and diastolic BP response. This suggests that factors other than sustained glycosuria or proximal sodium excretion may contribute to the resetting to lower blood pressure levels with SGLT2 inhibition. Clinical Trial Registration: [https://www.clinicaltrials.gov], identifier [NCT03093103]. INTRODUCTION In the development program of all sodium-glucose co-transport 2 inhibitors (SGLT2i), significant reductions in blood pressure (BP) were observed in normotensive subjects as well as in hypertensive and normotensive diabetic patients (1,2). Thus, in a meta-analysis of 21 studies evaluating SGLT2i in diabetic patients, the mean change in systolic BP ranged across groups from −6 to −2 mm Hg, the mean change in systolic BP in the intervention groups being 4.5 mm Hg lower than in the control group (CI, 5.7 to 3.2 mm Hg lower) (1,2). Interestingly, SGLT2i lower BP on top of antihypertensive drugs such as blockers of the renin-angiotensin system, calcium antagonists and even diuretics (3). Moreover, this effect of SGLT2i occurs regardless of the CKD stage and baseline BP. The precise mechanisms of the BP-lowering effects of SGLT2 inhibitors are not perfectly understood. The impact of SGLT2 inhibition on BP is rather rapid and can be seen already within a week or two.
This would suggest that the BP decrease is an immediate consequence of the osmotic diuresis leading to sodium excretion and volume contraction. However, the observation that SGLT2 inhibitors lower BP even in more advanced CKD and on top of diuretics, when the impact of SGLT2 inhibition on urinary volume, glucose and salt excretion is modest, would rather indicate that natriuresis and osmotic diuresis induced by glycosuria are not the only mechanisms leading to the decrease in BP. We recently investigated the acute and sustained effects of empagliflozin on renal tissue oxygenation and BP in normotensive non-diabetic subjects (4). In this study, significant reductions of office and ambulatory BP were observed. In the present post-hoc analysis, we examine the clinical and biochemical determinants of the BP response to empagliflozin based on the 24 h, diurnal, and nocturnal ambulatory BP values. MATERIALS AND METHODS Details of the study protocol have been previously published (4,5). In brief, the study was a double-blind, randomized, placebo-controlled study that examined the renal effects of a 1-month treatment with empagliflozin 10 mg vs. placebo in non-medicated, normotensive, non-diabetic subjects. Study Participants An announcement for recruitment was posted at the University Hospital Center (CHUV) in Lausanne and was visible on the web page of the institution. After contacting the study nurse, volunteers received an information sheet which included detailed information on the study protocol, side effects, their symptoms, treatment, and preventive measures, and they were invited to an information visit. Then, a minimum of 48 h was requested before confirming their interest and planning the screening visit. Inclusion criteria included the absence of diabetes (HbA1C < 6.5%) and a normal oral glucose tolerance test (<7 mmol/l fasting, <11.1 mmol/l after 75 g of glucose), a CKD-EPI based eGFR > 60 ml/min/1.73 m2, a urine albumin/creatinine ratio < 3.3 mg/mmol, a normal urine dipstick, normal hematology and chemistry results and a normal renal ultrasound. Subjects were randomized to placebo (n = 15) or empagliflozin (n = 30). The randomization procedure was done by the hospital pharmacy. A 2:1 randomization was chosen to compensate for the possible higher drop-out rate in the empagliflozin group due to side effects. Empagliflozin 10 mg or placebo pills were identical in size and stored in similar boxes containing 30 pills. Investigators (research nurses, doctors and technicians) were blinded to treatment. Intervention At baseline, each subject underwent a 24 h ambulatory BP (ABPM) recording using a validated device (Diasys, Physicor, Geneva, Switzerland) and a 24 h urine collection without any treatment. Blood pressure was measured every 20 min during daytime and every 30 min during nighttime. To be validated, the 24 h ABPM had to include a minimum of 20 measurements during daytime and 7 measurements during nighttime (6). A renal ultrasound was performed using an Aplio XG device (Toshiba Medical Systems, Volketswil, Switzerland). The renal resistive indexes were measured on three segmental arteries (superior, middle, and inferior) in each kidney and averaged. Volunteers were instructed not to smoke or drink alcohol or any caffeine-containing beverage during study days. On the following morning, after a light breakfast at 7 am, the volunteer arrived at the study center at 9 am. Blood samples were collected before the administration of the first dose of placebo or empagliflozin 10 mg.
Volunteers left the center and continued taking the pill once a day in the morning for 4 weeks. During this period, they were examined once a week and had a telephone call on another day each week for safety reasons. On the day before the last pill, the 24 h ABPM and 24 h urine collection were repeated, with blood sampling on the next day. Biochemical Measurements Plasma and urine samples were analyzed for glucose, urea nitrogen, creatinine, bicarbonate, urate and sodium, using routine clinical chemistry methods on a Cobas 8000 R (Roche Diagnostics System, Basel, Switzerland). Plasma and urine osmolality were measured by flame photometry. Proximal renal sodium handling was assessed by the determination of the fractional excretion of endogenous lithium (FELi), a proxy of proximal sodium reabsorption, as described previously (7). The fractional excretions of lithium (FELi), sodium (FENa), and urate (FEurate) were assessed using the standard formula [FEx = (Ux × Pcreatinine)/(Px × Ucreatinine)], with U and P as the urine and plasma concentrations of the various electrolytes or urate. We also calculated the free water clearance (FWC) using the formula: FWC (C_H2O) = urinary volume − osmolar clearance. Hormones Plasma renin activity (PRA) was measured using a commercial radioimmunoassay kit for the quantitative determination of angiotensin I in human plasma, while aldosterone quantification in blood was performed with the Aldo-Riact RIA kit (both kits from CIS Bio International, Yvette, Saclay, France; Cedex, Paris, France). Plasma metanephrines and normetanephrines were measured by ultra-high pressure liquid chromatography-tandem mass spectrometry (8). Copeptin was assessed in batch using a commercially available automated fluorescent sandwich immunoassay (BRAHMS Copeptin proAVP KRYPTOR TM, Thermo Fisher Scientific, Bremen, Germany) with a limit of detection (LOD) of 0.9 pmol/l. The functional assay sensitivity, defined as the concentration with an interassay coefficient of variation of <20%, was 2 pmol/l. Outcome The primary outcome of this study was the acute and chronic effects of empagliflozin on renal tissue oxygenation as measured by blood-oxygen-level-dependent magnetic resonance imaging (BOLD-MRI), and the results have been published recently (4). In this post-hoc analysis, we investigated the determinants of a predefined secondary outcome, i.e., the empagliflozin-induced effects on 24 h ambulatory BP, with a separate analysis for diurnal and nocturnal BP. STATISTICAL ANALYSIS The sample size calculation was based on the assumption that empagliflozin would improve the oxygenation compared with placebo by 10% (corresponding to an approximate decrease in cortical R2* of 2 s−1) with a sigma (standard deviation) of 5% (1 s−1). This estimation and SD were partly based on our previous studies, as detailed previously (4). No specific calculations were done for secondary outcomes. Statistical analysis was performed using STATA 14.0 (StataCorp, College Station, TX, United States). Quantitative variables were expressed as mean ± standard deviation; qualitative variables were expressed as the number of volunteers and percentage. A paired Student t-test was performed to compare values at baseline and after 1 month of therapy. P-values < 0.05 were considered significant. RESULTS After examining 79 subjects, a total of 45 subjects, aged 18-50 years, were recruited, while 34 subjects were excluded (reasons detailed in Supplementary Figure 1).
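For readers who want to recompute these derived quantities, a small Python helper (ours; variable names and example values are illustrative only) implementing the fractional-excretion formula and the free water clearance as defined above:

```python
def fractional_excretion(u_x, p_x, u_creat, p_creat):
    """FEx = (Ux * Pcreatinine) / (Px * Ucreatinine), returned as a fraction
    (multiply by 100 for percent); units of x must match between urine and plasma."""
    return (u_x * p_creat) / (p_x * u_creat)

def free_water_clearance(urine_flow_ml_min, u_osm, p_osm):
    """FWC (C_H2O) = urinary flow - osmolar clearance, with C_osm = Uosm * V / Posm."""
    c_osm = u_osm * urine_flow_ml_min / p_osm
    return urine_flow_ml_min - c_osm

# Purely illustrative numbers (mmol/l for sodium/creatinine, ml/min for flow):
fe_na = fractional_excretion(u_x=80.0, p_x=140.0, u_creat=10.0, p_creat=0.08)
print(f"FENa = {100 * fe_na:.2f} %")
print(f"FWC  = {free_water_clearance(1.0, u_osm=600.0, p_osm=290.0):.2f} ml/min")
```

A negative FWC, as in this example, indicates net free water reabsorption; the study reports a decrease in nighttime FWC under empagliflozin.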
Subjects were randomized to placebo (n = 15) or empagliflozin 10 mg once daily (n = 30). For the analysis, we considered only subjects who completed the full protocol (acute + chronic phases). Hence, the BP data of 13 subjects of the placebo group and 26 subjects of the empagliflozin group were analyzed (Supplementary Figure 1). Baseline characteristics of the study groups are presented in Table 1. Average age, BMI, sex distribution, office and ambulatory BP and heart rate did not differ between groups. Blood glucose was similar at baseline and 2 h after 75 g of glucose in both groups. Renal function, 24 h urinary electrolytes, serum electrolytes and the hormonal parameters (plasma renin activity, aldosterone, copeptin, metanephrine, normetanephrine) were not different between groups at baseline (Tables 2, 3). Biochemical and Hormonal Changes After 1 Month of Placebo or Empagliflozin The changes in blood and urinary parameters are shown in Tables 2, 3. Fasting plasma glucose, insulin and the HOMA insulin resistance index did not change in either group. As expected, urinary glucose excretion increased with empagliflozin in all subjects. The major changes observed after 1 month of empagliflozin were an increase in hemoglobin and a significant decrease in plasma uric acid (p < 0.0001), with an increase in urinary urate excretion both during the day and the night [daytime FEurate +3.2 ± 2.1% (p < 0.0001) and nighttime FEurate +3.6 ± 1.9% (p < 0.0001)]. During the night, free water clearance decreased with empagliflozin in comparison to placebo. Plasma aldosterone levels increased significantly after 1 month of empagliflozin (+36.9 pmol/l, p = 0.002), as did plasma renin activity (+0.18 ng/ml/h, p = 0.02). Plasma normetanephrine and metanephrine levels were not affected by the administration of empagliflozin. Plasma copeptin levels increased mildly but not significantly with empagliflozin. Determinants of Blood Pressure Response Linear regression analyses in subjects receiving empagliflozin showed a correlation between baseline systolic BP and the BP response to empagliflozin during the day (diurnal SBP: coefficient −0.5; adjusted Rsq: 0.36; p = 0.0007) and the night (nocturnal SBP: coefficient −0.6; adj Rsq: 0.33; p = 0.001). The correlations were significant but weaker with baseline DBP (diurnal DBP: adjusted Rsq: 0.12; p = 0.03; nocturnal DBP: adj Rsq: 0.17; p = 0.02). The acute and 1-month changes in daytime glycosuria did not correlate with the acute or sustained changes in daytime ambulatory BP (data not shown for acute). Likewise, the 1-month changes in nighttime glycosuria did not correlate with the sustained changes in nighttime ambulatory BP. Neither urinary sodium or lithium excretion, nor changes in plasma uric acid, urinary uric acid excretion, or free water clearance correlated with the sustained changes in BP. The changes in nighttime SBP at week 4 correlated significantly with the changes in plasma aldosterone levels (adj Rsq 0.21; p = 0.01), even after correction for glycosuria (p = 0.02). Changes in nighttime DBP correlated in a similar way. Other Correlations Changes in nocturnal glycosuria measured at 4 weeks correlated with the changes in plasma aldosterone (adj Rsq 0.18; p = 0.02). Diurnal and nocturnal glycosuria correlated with diurnal and nocturnal uricosuria (adj Rsq: 0.12; p = 0.04 for both). DISCUSSION This study confirms that after 1 month of empagliflozin 10 mg, daytime and nighttime systolic BP decrease significantly in non-diabetic normotensive subjects.
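The statistical analyses reported above can be mirrored schematically in Python; the sketch below (ours, with synthetic data, not the study dataset) runs a paired t-test on within-group 1-month changes and a linear regression of the BP response on baseline BP, analogous to the STATA analyses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 24 h SBP (mmHg) for 26 treated subjects, baseline and 1 month.
sbp_baseline = rng.normal(120, 8, size=26)
sbp_month1 = sbp_baseline - 5 + rng.normal(0, 7, size=26)

# Paired Student t-test: baseline vs. 1 month within the treated group.
t_stat, p_value = stats.ttest_rel(sbp_baseline, sbp_month1)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Linear regression of the BP response on baseline BP; a negative slope
# means that a higher baseline predicts a larger fall, as reported.
delta = sbp_month1 - sbp_baseline
slope, intercept, r, p, se = stats.linregress(sbp_baseline, delta)
print(f"slope = {slope:.2f}, R^2 = {r**2:.2f}, p = {p:.4f}")
```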
The effect was more pronounced during the night and was independent of sex, age or BMI. The main determinant of the daytime and/or nighttime SBP and DBP responses to empagliflozin was baseline BP. In our study of normotensive subjects, the decrease in office systolic and diastolic BP was comparable to the SGLT2i-induced reductions in BP reported in cardiovascular outcome trials (CVOT) in patients with type 2 diabetes with or without hypertension (9)(10)(11). In these trials, decreases in office BP ranged between −3 to −5 mmHg for systolic and −1 and −2 mmHg for diastolic BP (9)(10)(11). In the CREDENCE study, canagliflozin lowered systolic BP in the same range (−4 mmHg) in type 2 diabetic patients with CKD stages 1-3A (12). In this latter study, 86% of participants were hypertensive and treated with antihypertensive drugs. A similar decrease in BP was observed in patients with type 2 diabetes and moderate renal impairment (CKD stage 3A) in the DERIVE study (13). More relevant is the impact of SGLT2 inhibition on 24 h ambulatory BP control, with a stronger effect during the night. In patients with hypertension and type 2 diabetes, empagliflozin lowered ambulatory systolic and diastolic BP by 4 and 3 mmHg, respectively, independently of baseline antihypertensive therapy (3). In that analysis, the 24 h BP profile did not demonstrate a greater effect on BP during the night. Similarly, another study showed that empagliflozin decreased daytime SBP by 10 mmHg and night-time SBP by 6 mmHg and restored the normal circadian rhythm of blood pressure control in elderly T2D patients with uncontrolled nocturnal hypertension (14). This observation was confirmed in a recent analysis of the effect of SGLT2 inhibitors on 24 h, daytime and nighttime BP, suggesting a greater impact on daytime BP than on nocturnal BP (15). Taken together, the systolic blood pressure response to SGLT2i in normotensive untreated individuals differs from that of treated hypertensive and diabetic subjects by a stronger effect during the night. We do not have a clear explanation for this observation. Of note, after 4 weeks of therapy, the peak concentration reached shortly after the oral administration of empagliflozin is twelve times higher than the trough concentration at the end of the night (data from Boehringer Ingelheim). Interestingly, these differences in concentration do not translate into weaker effects on blood pressure or urine chemistry during the night. As reported previously, the decrease in BP was not accompanied by any increase in heart rate. This observation has been attributed to a sympatho-inhibitory effect of SGLT2 inhibitors (16). Thus, no increase in muscle sympathetic nerve activity was found 4 days after starting empagliflozin, in spite of significant diuresis and a lower BP (17). In our subjects, plasma metanephrine and normetanephrine levels were comparable in the empagliflozin and placebo groups, with no significant change after 1 month in either group. Although this does not exclude a specific effect of SGLT2 inhibitors on renal or cardiac sympathetic activity, these hormonal measurements would not support a global inhibition of the sympathetic nervous system. Yet, metanephrine and normetanephrine are the inactive metabolites of epinephrine and norepinephrine and are imperfect surrogates of sympathetic nervous activity. SGLT2 inhibitors might also shift the baroreceptor balance toward the parasympathetic pathway.
The link between SGLT2 inhibition and the decrease in BP is generally attributed to the glycosuria leading to osmotic diuresis and hence an increase in urinary sodium and volume excretion. In our initial paper (4), we did report an acute increase in urinary sodium excretion induced by empagliflozin, which disappeared after 1 month as a new sodium balance was reached. In type 2 diabetes, dapagliflozin induced a transient increase in natriuresis of ∼40 meq/day after 24 h, which returned to baseline after 14 weeks (18). Similarly, canagliflozin increased urinary volume and urinary sodium excretion on day 1 in type 2 diabetic patients, with a return to baseline on day 2, although glycosuria remained increased (19). In our study, the main indicator of a sustained effect of empagliflozin on renal sodium handling at the proximal tubule was a significant increase in the fractional excretion of endogenous lithium (+16.3%). A similar effect was observed with dapagliflozin in patients with type 2 diabetes (20). The sustained uricosuric effect observed in our study could also be interpreted as an indirect indication of the persistent effect of empagliflozin in lowering proximal tubular transport. Our findings are in agreement with a study in patients with diabetes showing a marked reduction in free water clearance (20), which may be due to the compensatory mild (but not significant) increase in copeptin levels observed in our study. Lastly, the persistent stimulation of the renin-angiotensin system (increased plasma renin and aldosterone levels at 1 month) demonstrates that additional compensatory mechanisms are activated to prevent excessive losses of sodium and water. In the present analysis, the decrease in systolic and diastolic ambulatory BP at 1 month correlated with baseline BP. Neither diurnal nor nocturnal sodium excretion, uricosuria or glycosuria were associated with the reduction in BP. Although the fractional excretion of lithium increased, it did not correlate with blood pressure changes. One of the reasons why we did not find any association between urinary sodium excretion and BP changes might be the small number of subjects studied or the fact that a standardized diet was not prescribed. Alternatively, this may indicate that still other mechanisms are involved. Thus, it is interesting to note that SGLT2 inhibitors lower BP in patients with CKD despite a blunted effect on glycosuria and urinary sodium excretion, suggesting some dissociation of the natriuretic effect and the BP responses to SGLT2 inhibitors. Of note, diurnal and nocturnal glycosuria was associated with uricosuria, suggesting either the impact of a high proximal flow rate, the activation of GLUT9 or the inhibition of URAT1 due to glycosuria, as recently hypothesized (21). As for hormonal changes, increases in aldosterone and plasma renin activity correlated with the nocturnal blood pressure response, as previously demonstrated (19,20). Limitations of the study: the likelihood of identifying the biological or hormonal determinants of the blood pressure response was weakened by the small number of subjects and the absence of a standardized diet. Perspectives: SGLT2 inhibition decreases blood pressure in normotensive subjects, and particularly in subjects with high-normal blood pressure. It may be of interest to further explore whether SGLT2 inhibition has additional benefits over non-pharmacological therapy, such as diet and exercise, in individuals with high-normal blood pressure.
CONCLUSION

Taken together, our results confirm a significant reduction of daytime and nighttime ambulatory BP after 1 month of empagliflozin in normotensive non-diabetic subjects. Twenty-four-hour changes are pronounced and comparable to those described in diabetic or hypertensive subjects. Empagliflozin induced the expected changes in hematocrit and in urinary glucose, sodium, and uric acid excretion. However, baseline ambulatory SBP and DBP were the only identified determinants of the systolic and diastolic BP response. Although a transient increase in urinary sodium excretion could also contribute to lowering BP in our subjects as well as in patients, we were not able to demonstrate that the natriuretic response to empagliflozin was a key determinant of the SGLT2i-induced decrease in BP at 1 month. This may suggest that additional factors such as changes in baroreceptor activity or cardiac function contribute to the beneficial effects of SGLT2 inhibitors on blood pressure.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the Canton de Vaud, Switzerland. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

MB is the principal investigator. AZ wrote the study protocol, is a co-investigator, and is the corresponding author. M-EM and AG-W are responsible for the running of the clinical trial and are co-investigators. MP, GW, and OB are co-investigators. All authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.

FUNDING

AZ and MB have received funding from Boehringer Ingelheim for this study. This study is an investigator-initiated study supported partially by an unrestricted grant from Boehringer Ingelheim. The funder was not involved in the study design, data collection, analysis, interpretation of data, the writing of this article, or the decision to submit for publication.
Testing the validity of free cash flow hypothesis: Evidence from Nigeria

Purpose: This study empirically tests the validity of the free cash flow hypothesis among firms quoted on the Nigerian Stock Exchange (NSE) from 2007 to 2017.

Research methodology: The study employed a dynamic panel system Generalized Method of Moments (GMM) in analyzing the data generated.

Results: The results failed to provide empirical evidence in support of the Jensen free cash flow hypothesis in Nigeria. They equally showed that a high concentration of shareholding in the hands of a few individuals increases the amount of dividend paid out to shareholders. The result is robust across different methods.

Limitations: We focused only on testing the validity of the free cash flow hypothesis proposed by Jensen (1986).

Contribution: The study provided empirical evidence that invalidates the propositions of the free cash flow hypothesis among publicly quoted firms in Nigeria. The result is robust using different estimation techniques.

Introduction

The concept of free cash flow has gained increasing attention in corporate finance literature following the work of Jensen (1986), which brought the Free Cash Flow (FCF) hypothesis to the limelight. The Jensen free cash flow hypothesis presupposes that the management of firms with increasing FCFs tends to invest in projects with negative Net Present Value (NPV) to enrich themselves at the expense of the owners of the firm. According to Kadioglu and Yilmaz (2017), managers of firms employ free cash flow as a tool to promote their selfish interests to the detriment of the shareholders. And so, as free cash flow increases in the firm, the agency cost increases proportionately due to the conflict of interest between managers and shareholders (Zhang et al., 2016). Yeo (2018), however, noted that firm managers have an incentive to invest rather than distribute to the shareholders even when the investment produces a negative NPV.

It is based on the foregoing that the free cash flow hypothesis asserts that managers with increased FCF will prefer to invest in negative NPV projects rather than distributing the FCF as dividends to the shareholders. Managers with FCF will deliberately avoid debt financing and dividend payout, as these options would reduce the FCF under their control; such avoidance, in turn, increases the agency problem faced by the firm.

Several studies have been conducted in different economies of the world examining the effect of free cash flow on the dividend payout policy of publicly quoted firms, but different conclusions were reached. DeAngelo and DeAngelo (2000), Kadioglu and Yilmaz (2017), and La Porta, Lopez-de-Silanes, Shleifer and Vishny (2000) provided empirical evidence supporting the free cash flow hypothesis in their various studies. On the other hand, Byrd (2010), Khan, Kaleem, and Nazir (2012), and Zhang (2009) concluded in their studies that debt financing reduces the free cash flow in the hands of management, as they are under obligation to settle their outstanding indebtedness. Meanwhile, Titman, Wei, and Xie (2004) and Fairfield, Whisenant, and Yohn (2003) reported that firms with free cash flow make excessive investments, leading to poor firm performance.
Despite what may seem like an avalanche of research work in this subject area in other jurisdictions of the world, no study, to the best of our knowledge, has tested the validity of the free cash flow hypothesis in Nigeria or investigated the effect of free cash flow on the dividend payout of firms quoted on the Nigerian Stock Exchange. We could also not find any study that examined the interaction effect of free cash flow with ownership concentration and board ownership on the dividend payout of firms in Nigeria. This study, therefore, examines the validity of the free cash flow hypothesis among firms publicly quoted on the Nigerian Stock Exchange using a sample of 65 firms for the period 2007 to 2017. A dynamic panel Generalized Method of Moments (GMM) was used to establish the effect of free cash flow on the dividend payout of the sampled firms in Nigeria for the period under investigation.

The rest of the study is structured as follows: section two provides a review of relevant literature, section three presents the data and methodology for the study, while section four presents the results and the interpretation of the analysis. Finally, we present the conclusion and policy recommendations in section five.

2.1 Theoretical review

2.1.1 Free cash flow hypothesis

The free cash flow hypothesis, according to Jensen and Meckling (1976), posits that managers tend not to behave in a way consistent with the profit maximization objective of the firm. They noted that managers most often use increased free cash flow to pursue objectives that have little or no effect on profit growth. In line with the free cash flow postulations, the agency cost explanation introduced by Jensen and Smith (1995) suggests that monitoring difficulty creates the potential for management to spend internally generated cash flow on projects that are beneficial from a management perspective but costly from a shareholder perspective. It therefore suggests that investments in profitable projects decrease the amount of free cash flow available for managers to pursue opportunistic consumption and suboptimal investments.

Donaldson (1990) argues that managers of firms with free cash flows (cash flows above profitable investment opportunities) tend to waste cash by taking excessive perquisites or by making unprofitable investments. Similarly, managers who control free cash flows are more likely to invest in projects that merely increase the size of the firm (or pay themselves excessive perks), instead of paying dividends to the owners of the firm or repurchasing outstanding shares to increase the market value of the shares. A contestable inference drawn from the agency hypothesis is that firms that possess free cash flows are more likely to grow beyond the optimal point of shareholder wealth maximization. The firm's shareholders tend to benefit from management decisions that help prevent these wasteful expenditures. One of the ways to prevent such waste is through share repurchases, which use up the excess cash flows available within the firm (Jensen & Smith, 1995). Similarly, the reaction of the capital market to a drop in dividend payout is a sharp decline in the price of the firm's stock; this, however, is consistent with the agency costs of free cash flow. Meanwhile, when debt is created without retaining the proceeds arising from the issue, it enables managers of a firm to effectively keep their promise to pay out future cash flows.
And so, one key instrument that provides an effective substitute for dividends is debt, although this is not generally recognized in the corporate finance literature. The issuance of debt as a substitute for stock binds managers to their promise to pay out future cash flows in a manner that cannot be accomplished by simple dividend increases. This, in turn, empowers the debt holders with the right to initiate bankruptcy proceedings against the firm in a court of law if managers do not keep their promise to make the interest and principal payments when due (Jensen & Meckling, 1976).

Nguyen et al. (2014), in their study, tested the free cash flow theory and its effect on the dividend policy of firms quoted on Vietnam's stock exchange from 2008 to 2012, and their results showed that the firms' dividend policy is consistent with the theory of free cash flow. This implies that the companies that pay a dividend higher than the industry average of 47.36% have the largest free cash flow. They further suggest that the firms that pay a dividend, or a higher dividend, are mainly small firms, which, the authors noted, do so to prevent the stock price from declining. This result led the authors to question the rationale for dividend payment by smaller firms: whether they pay a dividend just to meet the needs of investors even if there are investment opportunities that can generate a positive Net Present Value for the firm.

Wang (2010) investigated the extent to which free cash flow is associated with agency cost and how free cash flow and agency cost influence firm performance in Taiwan. Employing data from Taiwanese publicly quoted companies, the results showed that free cash flow has a significant impact on agency cost, with two contrary effects. On the one hand, he opined that free cash flow could incur agency cost due to perquisite consumption and shirking behavior; on the other hand, the generation of free cash flow, resulting from internal operating efficiency, could lead to better firm performance. Conversely, the results provide evidence of a positive and significant relationship between free cash flow and firm performance measures, demonstrating a lack of evidence supporting the free cash flow hypothesis.

Empirical review

Cai (2013) theoretically and empirically examined the relationship between corporate governance and firm-level over-investment of free cash flow, employing a cross-sectional paired sample of 1411 firms with annual observations of companies listed on the Shanghai and Shenzhen stock exchanges in China, covering the period 2003 to 2010. The results showed that there is a significant positive association between over-investment and free cash flow. Hejazi and Moshtaghin (2014) examined the impact of the agency cost of free cash flow on the dividend and leverage policies of firms in Iran. The study employed data generated from 101 companies listed on the Tehran Stock Exchange from 2007 to 2012. They adopted a multivariate linear regression model using a panel fixed-effects approach in the analysis of the data generated. Their results indicate that the agency cost of free cash flow has a positive and significant effect on the dividend and leverage policies of the firms under study. Furthermore, firm size and profitability were shown to exert a positive and significant effect on the dividend policy of the firms quoted on the Tehran Stock Exchange. As noted above, Kadioglu and Yilmaz (2017) provided empirical evidence supporting the free cash flow hypothesis.
We could not find any study in Nigeria that directly tested the validity of the free cash flow hypothesis among firms trading on the Nigerian Stock Exchange. The following hypotheses were formulated and tested:

H1: An increase in free cash flow negatively affects the dividend pay-out of firms in Nigeria.
H2: Changes in ownership structure do not have any effect on the dividend pay-out of firms in Nigeria.
H3: The interaction of free cash flow and ownership structure has no effect on the dividend pay-out of firms in Nigeria.

Research methodology

The sample for this study comprises 30 dividend-paying firms quoted on the Nigerian Stock Exchange. The names of the sampled firms are presented in Appendix 1. The sampling technique employed in this study is purposive sampling. The choice of this technique provides opportunities for the researcher to isolate and control for some limitations by removing some items from the population. According to Awoyemi and Bagga (2016), out of 212 quoted companies on the NSE, only 124 pay dividends, representing 58% of all the firms listed on the NSE. Based on the consistency of dividend payment within the last five (5) years, they reported that companies that maintain 100% dividend payment consistency represent 18% of dividend-paying firms and are mostly large firms. Similarly, 6% of dividend-paying firms maintain a dividend payment consistency of about 71-80%, while 35% of the firms maintain a consistency of about 50-70%. The remaining 41% maintain a dividend payment consistency of less than 50%. Following the report of Awoyemi and Bagga (2016), we arrived at a sample size of 30 dividend-paying firms which maintained 80-100% dividend payment consistency within the period under investigation. These companies are presented in Appendix 1.

The variables of choice comprise dividend payout, which is the dependent variable, and free cash flow, total assets, board ownership, and ownership concentration, which are the independent variables. The study made use of the undistributed cash flow method to determine free cash flow. The approach is similar to the one used by Kadioglu and Yilmaz (2017), Hong, Shuting, and Meng (2012), and Al-Zararee and Al-Azzawi (2014). The dependent variable (dividend payout) and free cash flow were normalized using total assets, although some studies normalize free cash flow using sales value. The model includes the natural logarithm of total assets as a control variable, while the interaction of free cash flow and board ownership, together with the interaction of free cash flow and ownership concentration, were included in the model to determine whether the free cash flow hypothesis holds given the ownership structure of the firm.

3.1 Model specification

The signalling theory of dividends states that an increased dividend pay-out is a signal that the management of the firm is trading favourably with the investors' funds. In other words, it suggests that an increase in dividend pay-out by a company is an indication of positive prospects. More often than not, the management of the firm uses dividend signalling as an indication of good investment potential. Meanwhile, the value of a firm's dividend for the current year often reflects, or to a greater extent depends on, the previous year's value, implying that the lagged values of dividend pay-out affect the current year's value. To capture this effect, a dynamic panel data model in which the lagged value of the dividend is included as an independent variable is most appropriate.
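Concretely, the regression variables, including the lagged dividend just described, can be constructed along the following lines. This is a sketch with illustrative column names and made-up data, not the paper's actual dataset or variable definitions beyond the normalizations stated above:

```python
import numpy as np
import pandas as pd

# Illustrative firm-year records; column names are ours, not from Appendix 1
df = pd.DataFrame({
    "firm_id":      [1, 1, 1, 2, 2, 2],
    "year":         [2007, 2008, 2009, 2007, 2008, 2009],
    "dividend":     [10.0, 12.0, 11.0, 5.0, 6.0, 7.0],
    "fcf_equity":   [40.0, 55.0, 30.0, 20.0, 25.0, 22.0],
    "total_assets": [900.0, 950.0, 1000.0, 400.0, 420.0, 450.0],
    "owc":          [0.62, 0.62, 0.60, 0.35, 0.35, 0.36],  # bulk ownership share
})

df = df.sort_values(["firm_id", "year"])
df["div"] = df["dividend"] / df["total_assets"]         # payout, normalized by assets
df["fcf"] = df["fcf_equity"] / df["total_assets"]       # FCF to equity, normalized
df["at"] = np.log(df["total_assets"])                   # size control
df["fcf_x_owc"] = df["fcf"] * df["owc"]                 # interaction term
df["div_lag1"] = df.groupby("firm_id")["div"].shift(1)  # lagged dependent variable
print(df[["firm_id", "year", "div", "div_lag1", "fcf", "fcf_x_owc", "at"]])
```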
We, therefore, adopted and modified the GMM model developed by Arellano and Bover (1995) and Blundell and Bond (1998) to suit the current study. And so, to achieve the objectives of this study, we specify five (5) different models to address the hypotheses stated in this work. The level form of the model, as well as the first-differenced form proposed by Arellano and Bover (1995), are as follows:

DIVi,t = α + δDIVi,t−1 + β1FCFi,t + ∅OWCi,t + λ(FCF*OWC)i,t + θLTAi,t + μi + εi,t

ΔDIVi,t = δΔDIVi,t−1 + β1ΔFCFi,t + ∅ΔOWCi,t + λΔ(FCF*OWC)i,t + θΔLTAi,t + Δεi,t

where Δ is the difference operator, β1 is the coefficient of free cash flow, δ is the coefficient of lagged dividend payout, ∅ is the coefficient of ownership concentration, λ is the coefficient of the interaction term, θ is the coefficient of firm size, μi is the unobserved firm-specific effect, and εi,t is the error term. DIVi,t is the dividend payout as a ratio of total assets, DIVi,t−1 is the dividend payout as a ratio of total assets for the previous period, FCFi,t is the free cash flow to equity as a ratio of total assets, OWCi,t is the ownership concentration, (FCF*OWC)i,t is the interaction between free cash flow and ownership concentration, and LTAi,t is the natural logarithm of the firm's total assets.

Techniques of data analysis

This study examined the effect of free cash flow on the dividend policy of publicly quoted firms in Nigeria. We began the estimation procedure by estimating a linear dynamic panel-data (DPD) model to capture the effect of lagged dividend pay-out on the current dividend pay-out. DPD models contain unobserved panel-level effects that are correlated with the lagged dependent variable, and this renders standard estimators inconsistent. The Arellano and Bond (1991) difference GMM estimator provides consistent estimates for such models. This estimator first differences the data and then uses lagged values of the endogenous variables as instruments. However, as pointed out by Arellano and Bover (1995), lagged levels are often poor instruments for first differences. Blundell and Bond (1998) proposed a more efficient estimator, the system GMM, which mitigates the weak-instrument problem by using additional moment conditions. The system GMM uses more instruments than the difference GMM, and therefore one might expect the system estimator to be more biased than the difference estimator. However, Hayakawa (2007) shows that the bias is smaller for the system than for the difference GMM. Specifically, the bias of the system GMM estimator is smaller because it is a weighted sum of the biases of the difference and the level estimators, and these biases move in opposite directions. We therefore use the more efficient and less biased system GMM estimator for our regressions.

We now point out some potential caveats of the system GMM estimator and discuss how these problems are addressed. The first issue relates to the validity of the instruments. Second, the procedure assumes that there is no second-order autocorrelation in the idiosyncratic errors. Another pertinent issue is that the test for autocorrelation and the test for the validity of the instruments lose power when the number of instruments, I, is large relative to the cross-section sample size (in our case, the number of firms), n. Specifically, when the instrument ratio, r, defined as r = n/I, is less than 1, the assumptions underlying the two procedures are likely to be violated (Roodman, 2006). Furthermore, a low r raises the susceptibility of the estimates to a Type 1 error, i.e., producing significant results even though there is no underlying association between the variables involved.
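As a rough illustration of Roodman's rule of thumb, the snippet below counts the GMM-type instruments generated from all available lags of the dependent variable in difference GMM and computes r = n/I. The panel dimensions and the number of standard instruments are illustrative assumptions, not the exact instrument set used in the paper, whose lag choices may yield a different count; the remedy when r falls below 1 is exactly the lag restriction described next:

```python
def gmm_instrument_count(T, n_std_iv=0, collapsed=False):
    """Number of instruments in difference GMM when every available lag
    of the dependent variable is used: equations exist for t = 3..T, and
    the equation at t gets levels y_1..y_{t-2} as instruments, i.e.
    1 + 2 + ... + (T-2) = (T-1)(T-2)/2 GMM-type instruments in total.
    Collapsing keeps one instrument per lag distance (T-2 in total)."""
    gmm = (T - 2) if collapsed else (T - 1) * (T - 2) // 2
    return gmm + n_std_iv

# Illustrative panel: n firms over T years (e.g. 2007-2017 -> T = 11),
# with 4 exogenous regressors used as standard instruments
n, T = 30, 11
I = gmm_instrument_count(T, n_std_iv=4)  # 45 + 4 = 49 instruments
r = n / I                                # Roodman's instrument ratio
print(f"I = {I}, r = {r:.2f}")           # r < 1 flags instrument proliferation
```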
The easiest solution to this problem is to restrict the number of lags of the dependent variable used for instrumentation to the point where r ≥ 1 (Roodman, 2006). To address these potential problems, we test for autocorrelation and the validity of the instruments for each regression. Specifically, for each regression, we report the p-values for the test for second-order autocorrelation as well as the Hansen J test for over-identifying restrictions. We report the results for the regressions, and the p-values indicate whether the assumption of no second-order autocorrelation is satisfied in each of the regressions. Furthermore, the instruments are valid in all the estimated regressions. Thus, the two assumptions are satisfied in our specifications. Moreover, in all the regressions, r ≥ 1, and therefore we do not restrict the number of lags of the dependent variable used for instrumentation.

We end the section by providing some details about our estimation strategy. First, we use the two-step GMM estimator, which is asymptotically efficient and robust to all kinds of heteroskedasticity. Second, the independent variables are treated as strictly exogenous in all the regressions. Besides, our regressions utilize only internal instruments; we do not include additional (external) instruments. Note that the system estimator uses the first differences of all the exogenous variables as standard instruments, and the lags of the endogenous variables to generate the GMM-type instruments described in Arellano and Bond (1991). Furthermore, the system estimations include lagged differences of the endogenous variables as instruments for the level equation.

Model diagnostics

One of the problems associated with the Arellano and Bond (1991) GMM estimator is the over-identification of instruments. Since the estimator employs different lags as instruments in order to eliminate the firm-specific effect, it gives rise to a proliferation of instruments. The effect of instrument proliferation in the model is that too many instruments can over-fit the endogenous variables, thereby failing to remove the endogenous components of the variables, and can bias the coefficient estimates towards those from un-instrumented estimators. Meanwhile, to solve this problem, Roodman (2006) recommended restrictions in the model through the inclusion of the collapse option in the GMM estimator. And so, to test the validity of the restrictions in the model, the study employs the Hansen and Sargan tests of over-identification. The Sargan-Hansen tests are tests of over-identifying restrictions, based on the null hypothesis that the over-identifying restrictions are valid. To proceed with the GMM dynamic panel data model, we must fail to reject the null hypothesis in both tests. Similarly, the choice of GMM over the Ordinary Least Squares (OLS) estimator is due largely to the presence of autocorrelation in the model, which arises from the inclusion of the lagged value of the dependent variable as an explanatory variable. And so, to justify the use of GMM, there must be autocorrelation of order one (AR(1)) but not of higher order if the result of the dynamic GMM estimator is to be free from bias. The result of the AR(1) test must indicate the rejection of the null hypothesis, which states that there is no autocorrelation in the model, in favour of the alternative hypothesis of autocorrelation. It is also important to note that the null hypothesis for AR(2) must not be rejected.
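To make the mechanics of the estimator concrete, here is a minimal from-scratch sketch of the one-step Arellano-Bond difference GMM estimator, the variant the paper uses as a robustness check. It assumes a balanced panel, strictly exogenous regressors, and the full (uncollapsed) instrument set; it illustrates where the lagged-level instruments and the one-step weighting matrix come from, and is not the two-step system GMM implementation used for the main results:

```python
import numpy as np

def difference_gmm_onestep(y, X):
    """One-step Arellano-Bond difference GMM on a balanced panel.
    y : (N, T) dependent variable (e.g. dividend payout / total assets)
    X : (N, T, K) strictly exogenous regressors (e.g. fcf, owc, fcf*owc, at)
    Returns [gamma, beta_1..beta_K] for dy_it = gamma*dy_i,t-1 + dX_it'b + de_it.
    """
    N, T = y.shape
    K = X.shape[2]
    dy = np.diff(y, axis=1)            # column j holds the difference at t = j + 2
    dX = np.diff(X, axis=1)

    eqs = T - 2                        # usable equations: t = 3, ..., T
    n_gmm = (T - 1) * (T - 2) // 2     # levels y_i1..y_i,t-2 instrument equation t
    P = n_gmm + K                      # plus the exogenous differences

    # H encodes the MA(1) structure of differenced i.i.d. errors (one-step weighting)
    H = 2 * np.eye(eqs) - np.eye(eqs, k=1) - np.eye(eqs, k=-1)

    ZHZ = np.zeros((P, P))
    ZX = np.zeros((P, 1 + K))
    Zy = np.zeros((P, 1))
    for i in range(N):
        Zi = np.zeros((eqs, P))
        col = 0
        for j in range(eqs):                       # equation at t = j + 3
            Zi[j, col:col + j + 1] = y[i, :j + 1]  # instruments y_i1 .. y_i,t-2
            col += j + 1
        Zi[:, n_gmm:] = dX[i, 1:, :]               # exogenous dX instrument themselves
        Xi = np.column_stack([dy[i, :-1], dX[i, 1:, :]])  # [dy_i,t-1, dX_it]
        ZHZ += Zi.T @ H @ Zi
        ZX += Zi.T @ Xi
        Zy += Zi.T @ dy[i, 1:].reshape(-1, 1)

    W = np.linalg.pinv(ZHZ)                        # one-step GMM weighting matrix
    A = ZX.T @ W @ ZX
    return np.linalg.solve(A, ZX.T @ W @ Zy).ravel()

# Smoke test on simulated data with true gamma = 0.5 and beta = 0.3
rng = np.random.default_rng(0)
N, T, K = 200, 11, 1
X = rng.normal(size=(N, T, K))
mu = rng.normal(size=N)                            # firm fixed effects
y = np.zeros((N, T))
y[:, 0] = mu + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = 0.5 * y[:, t - 1] + 0.3 * X[:, t, 0] + mu + rng.normal(size=N)
print(difference_gmm_onestep(y, X))                # should be close to [0.5, 0.3]
```

In applied work one would rely on a maintained implementation (e.g. Stata's xtabond2 or an equivalent package) with the two-step variant and Windmeijer-corrected standard errors; the sketch only exposes the moment conditions that the diagnostics above (Hansen/Sargan, AR(1), AR(2)) are testing.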
Results and discussions

The result of the system GMM, as presented in Table 4.1 below, indicates that the coefficient of free cash flow is 0.0393, suggesting that free cash flow has a positive impact on the dividend pay-out of publicly quoted firms in Nigeria. The coefficient of free cash flow, as shown in the table, is statistically significant at the 5% level of significance. This result suggests that a one per cent increase in free cash flow will lead to about a 0.04 per cent increase in the dividend pay-out of quoted firms in Nigeria on average, ceteris paribus. This finding fails to provide evidence to support the Jensen (1986) free cash flow hypothesis, which argued that managers of firms tend to invest in projects with negative Net Present Value (NPV) as the free cash flow within their control increases, instead of distributing it as dividends to the shareholders. The assertion that increases in free cash flow lead to unnecessary administrative waste and inefficiency, resulting in a decrease in dividend pay-out, could not be substantiated in this study.

Notes to Table 4.1: Robust standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1.
div = dividend pay-out as a ratio of total assets
fcf = free cash flow to equity as a ratio of total assets
at = natural logarithm of total assets
owc = bulk ownership as a ratio of total outstanding shares
bown = board interest as a ratio of total outstanding shares

Our findings corroborate the result of Ojode (2014), who noted that firms with high free cash flow are likely to attract more investors who seek a return on their investment by way of dividends. This is so because most investors attach significant value to cash dividends. The result also evidences the use of free cash flow by management to mitigate agency conflict, as investors are paid cash dividends as free cash flow increases. Similarly, our findings indicate that possession of free cash flow does not increase agency costs, as argued by Jensen (1986), suggesting that, since the management of the firm tends to distribute the free cash flow available to them as dividends to the shareholders, investors will not need to increase the cost of monitoring management that controls free cash flow. This finding is also in line with the result of Vakilifard and Shahmoradi (2014), who reported a strong relationship between free cash flow and the return on equity of firms quoted on the Iranian stock exchange.

The result further shows that when free cash flow interacts with ownership concentration, the effect on dividend pay-out remains positive, indicating that the free cash flow hypothesis is not valid among firms quoted on the Nigerian Stock Exchange whether the share ownership is concentrated or not. Columns 3 and 4 present the result of the differenced GMM, which was performed as a robustness test. It also failed to provide empirical evidence to support the propositions of the free cash flow hypothesis, confirming the robustness of the result of the system GMM.

4.1 Result of the diagnostic test

To ascertain whether or not the data are consistent with the assumptions of the Blundell and Bond (1998) and Arellano and Bover (1995) estimator, we examined some of the commonly used diagnostic tests. In particular, the Sargan and Hansen test statistics, which examine the over-identification restrictions, were reported. The two tests essentially examine whether the instruments are uncorrelated with the error terms in the estimated equation. The test is based on the null hypothesis that the instruments as a group are exogenous.
There is a need for exogenous instruments in order to validate the estimates of the Arellano and Bond (1991) system-GMM estimator. The second test we report is the Arellano and Bond test for autocorrelation. The null hypothesis is "no autocorrelation" and relates to the differenced residuals. The result of AR(1), with a probability of less than 0.05 for all the estimated models, indicates the rejection of the null hypothesis, suggesting that there is autocorrelation at AR(1). We only report the test statistics and the associated p-values for AR(2). For all the estimated models, we are unable to reject the null hypothesis of "no autocorrelation" for AR(2). This implies that there is robust evidence that all models are free from second-order autocorrelation at the 5% level.

Conclusion

The novelty provided by this study is that it is the first study to directly test the validity of the free cash flow hypothesis among firms quoted on the Nigerian Stock Exchange. The study employed a sample of 65 dividend-paying firms quoted on the Nigerian Stock Exchange within the period 2007-2017. The study provided empirical evidence that invalidates the propositions of the free cash flow hypothesis among publicly quoted firms in Nigeria. The result is robust using different estimation techniques. The interaction of free cash flow and ownership concentration affects dividend pay-out positively but statistically insignificantly. This result implies that managers of firms with free cash flow in Nigeria prefer distributing the free cash flow to the shareholders rather than investing it in projects with negative NPV. They choose this option because payment of the dividend is a signal that the firm is doing well, which will result in an increase in the market price of the firm's stock and hence increase the potential bonus accruable to the management of the firm. We therefore recommend that shareholders should not increase the agency cost of monitoring firms with free cash flows in Nigeria, as managers prefer to distribute free cash flow as dividends instead of investing in projects with negative NPV.

Limitations and further studies

The study employed short panel data analysis techniques due to the unavailability of data for a longer period across the selected panel. We equally focused on Jensen's free cash flow hypothesis across a sample of 65 firms quoted on the Nigerian Stock Exchange. Further studies can explore the validity of the free cash flow hypothesis on other stock exchanges in Sub-Saharan Africa.
Practice in the System of Training the Future Educator: Opportunities and Advantages

A cross-cutting program is an educational document that regulates the purpose, content, and chain of practices. Teachers of the university departments develop the work programs on the basis of a cross-cutting program, since it contains recommendations on the types and forms of control over the level of knowledge, skills, and abilities that students should acquire. The authors studied the training of future teachers in the course of educational-productive practice in secondary schools and estimated its effectiveness. During the study, research and empirical methods were used, as well as methods of mathematical statistics. The link between the educational process of higher educational institutions and pedagogical practice in general secondary educational institutions was analyzed. The peculiarities of pedagogical practice and its influence on the formation of general and professional competences were revealed. It was concluded that practical training is an important component in teachers' preparation and their self-affirmation.

Introduction

The modern stage of modernization in higher pedagogical education is aimed at forming the general and professional competences of the future teacher, ensuring his continuous growth and providing a search for new forms and methods of his professional and pedagogical training. A compulsory component of the educational process in Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University is the passing of pedagogical practice, which aims to form the methodical competence of the future teacher, deepen theoretical knowledge, improve professionally significant personality traits, determine the degree of professional ability and the creatively responsible attitude of the future teacher, and improve the professional skills of pedagogical education applicants (Chapran, 2019).

The practical training of the students of Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University is carried out under legal and regulatory provisions, in particular the "Regulations on the practical training of students of Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University" approved by the Decision of the Academic Council of the University (Regulations No. 6, 2019); the Law of Ukraine "On Education"; the Law of Ukraine "On Higher Education"; the Regulation "On the Practice of Students of Higher Educational Institutions of Ukraine", approved by the Decree of the Ministry of Education of Ukraine; the State Higher Education Development Program; the MES Order "On implementation in higher educational institutions of Ukraine of the European credit transfer system"; the Letters of the Ministry of Education and Science "On Practical Training of Students" (2009) and "On methodological recommendations for the introduction of the European credit transfer system and its key documents in higher education institutions" (2010); the Concept of Pedagogical Education Development in Ukraine (2018); etc. (Niemiec et al., 2006; Valitova et al., 2015).

Teaching practice in pedagogical institutions of higher education is an organic part of the educational process. It provides a combination of psychological and theoretical readiness of future teachers for future practical activities.
It is during the practice that the future teacher can determine how correctly he has chosen his field of activity and find out the degree of concordance between his personal qualities and the profession of a teacher. Pedagogical practice contributes to the formation and development of pedagogical erudition, pedagogical goal setting, pedagogical thinking, intuition, the ability to improvise, pedagogical optimism, and pedagogical reflection. The main tasks facing the trainee students are the following: realization of the general and professional competences, theoretical knowledge, and application of practical skills acquired during the study of the theoretical courses "Pedagogy", "Fundamentals of pedagogical skill", "Methods of educational work", "Practical pedagogy", and "History of Pedagogy" directly in the educational and disciplinary process of secondary educational institutions (Delaney & Krepps, 2021; Friedman & Kass, 2002).

The content of the practices and the consistency of their conduction are determined through a cross-cutting program. It is the basic educational and methodic document that regulates the purpose, content, and chain of practices and the summarizing of their results. It also contains recommendations on the types and forms of control over the level of knowledge, skills, and abilities that students should acquire while passing each type of practice at each educational level. Based on the cross-cutting program for practice, teachers of the university departments develop work programs that correspond to the types of practices: educational (in particular didactic, continuous, acquaintance, etc.) and productive ones (educational-productive, undergraduate, etc.) (Suryasa et al., 2019).

The students of Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University work according to the programs of pedagogical practices; namely, they study the features of the methodical, organizational, and educational activity of experienced teachers, in particular, attend lessons and educational events, outline plans for lessons and educational activities, conduct trial and test lessons and developed educational activities, participate in the discussion of the results, carry out psychological and pedagogical studies of schoolchildren and the class collective and characterize them, prepare reports, and make the results of the teaching practice public.

The basic principles of pedagogical educational-productive practice in Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University are the principles of student centricity, humanism, systematicity and consistency, activity, mobility of students and teachers, and individualization and differentiation of the educational process aimed at the implementation and construction of an individual educational trajectory of development and improvement of students' professional knowledge and skills, the personal growth of future teachers, and their self-realization (Markova et al., 2021; Slipchuk et al., 2021).

Theoretical foundations of pedagogical practice organization are grounded in the educational and methodic manuals of domestic scientists (Alekseienko et al., 2008; Arefiev & Kurysh, 2007; Denyschych & Kosareva, 2017; Serhiychuk et al., 2019; Tovt et al., 2019). Among foreign scholars, Virtanen & Tynjälä (2019) have made a thorough study of the factors that influence the formation of professional skills; Karlberg & Bezin (2020) study the professional development of Swedish teachers; King et al.
(2019) have researched their own practical activities (SoTL); and Dunn & Rice (2019) described the future teacher's level of training through independent online work; and more.

The aim of the study is to investigate and test the effectiveness of the future teacher's competent professional practical training in the course of educational-productive practice in secondary schools, and to substantiate the possibilities of organizing pedagogical practice through the building of an individual trajectory for the development of a future teacher (Carchi et al., 2021; Velázquez et al., 2020).

Materials and Method

The research methods were theoretical (analysis, generalization, comparison); empirical (observation, studying of the products of students' activity, questionnaires, conversations, interviews, discussion, testing, pedagogical prognostication, and a pedagogical experiment for revealing the results of experimental work); and methods of mathematical statistics.

The tasks were to analyze the link between the educational process of higher educational institutions and pedagogical practice in general secondary educational institutions, and to reveal the peculiarities of pedagogical practice, its influence on the formation of general and professional competences, and the ways of improving its implementation by following the requirements of the regulatory documents of Ukraine and the world community, particularly the Concept of Pedagogical Education Development in Ukraine (2018) and the Dublin Descriptors (Office for Official Publications of the European Communities). These documents are aimed at applying knowledge and skills to solve new non-standard situations; possessing communication skills; having the skills to continue education; using theoretical and practical knowledge for the development of original ideas; investigating problems by integrating knowledge from new areas and finding solutions in the context of incomplete and limited information; demonstrating leadership and innovation at work; demonstrating the ability to interact quickly in difficult situations; and bearing the social, scientific, and ethical responsibility that arises at work (Metzler & Woessmann, 2012; Seng & Yatim, 2014).

To accomplish this task, the authors selected educational-productive practice, because this type of practice, at the bachelor's educational level, is provided in the final, fourth year, when all the students already have a certain base of theoretical and practical knowledge. This type of practice differs in duration (6-8 weeks) and in the content of educational work, and it is carried out on the bases of secondary educational institutions according to the signed bilateral agreements between the higher educational institutions and the institutions of general secondary education. It is provided for in the curricula for the specialties 014 Secondary Education. According to the "Regulations on practical training of students of Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University" (2019), educational-productive practice is conducted according to the schedule of the educational process in the eighth semester, namely in February-March.
The purpose of the educational-productive practice is to consolidate and deepen the theoretical knowledge gained by students in the process of studying the theoretical disciplines of the general and professional cycles and the cycle of disciplines of the variable component; to improve practical skills of pedagogical activity in the field of the specialty; and to collect factual material for the completion of coursework and diploma works (Resolution of the Cabinet of Ministers, 2004).

The peculiarity of educational-productive practice is the independent practical activity of the future teachers:

1) realization of tasks on the methods of teaching according to the direction of the specialty: conducting lessons, analyzing them, and processing mistakes;

2) consolidation and deepening of the students' knowledge of the theory of education, ensuring the connection between theoretical knowledge of the methods of educational work and the real educational process at school, and performance of the duties of the classroom leader in the assigned class: carrying out educational activities, classroom parental meetings, and discussions with schoolchildren and their parents, and analyzing them (taking into account the choice of educational methods, the age and individual characteristics of the learners, their level of development, etc.);

3) formation of the professional competences of the future classroom teacher and application in practice of theoretical knowledge of professional subjects, pedagogy, psychology, methods of educational work, and methods of teaching the specialized subjects; formation of general and professional competencies, including the ability to design, organize, and analyze one's personal teaching activity; to plan educational, methodical, and educational work under the curriculum and the plan of educational work of the educational institution; to design and carry out the different types of lessons that are most effective in learning the relevant topics and sections of the program, adapting them to the existing levels of the learners' preparation; the ability to analyze the educational and educational-methodic literature and the possibilities of using it for preparing the presentation of the program material, ensuring cross-curricular relations; the ability to organize the educational, research, and cultural-leisure activities of secondary school learners, to manage them, and to evaluate their results; the ability to apply modern techniques and technologies (including information, multimedia, and cloud technologies) to ensure the quality of the educational process; the ability to develop in schoolchildren an interest in learning, extracurricular activities, and the conscious choice of a future profession; the ability to build partnerships with pupils' self-government bodies; the ability to organize different types of collective and individual activity of secondary school learners; the ability to develop training materials for educational activities of different forms; the ability to use the methods of scientific and pedagogical research for studying a class collective and to keep a diary of pedagogical observations; the ability to carry out vocational guidance work with schoolchildren; and the ability to critically analyze and evaluate the results of one's personal professional activity and adjust one's own activities (Serhiychuk & Bahno, 2014).
In the design and implementation of the educational-productive practice in secondary educational institutions, the principle of student centricity is the decisive one, because the content of the student's practical pedagogical activity, the choice of methods and forms of work with learners, and the application of modern educational technologies all depend on the personality of the student, the future teacher: the level of his theoretical training, his developed skills for professional activity, his motivation for professional activity, and his creativity. The implementation of the above-mentioned initial provisions and the features of the educational environment of secondary educational institutions will contribute to the personal development of the future teacher and the formation of his/her readiness for professional activity based on an individual trajectory (Rahman et al., 2015; Roga et al., 2015).

Results and Discussion

To identify the state of organization of the future teachers' practical training, a survey was conducted among students on the eve of our study. The survey showed that only some students are confident in their effective educational activity, the available amount of theoretical knowledge, practical skills, and personal qualities (10% of respondents, 8 persons). Most of them treat the educational work based on practice with caution and have doubts about their personal capabilities and qualities, but they are confident in a sufficient amount of knowledge of pedagogy, psychology, teaching methods of professional disciplines, and educational work with schoolchildren, and they want to try themselves out in terms of practice and increase their pedagogical skills as a teacher and a classroom leader (85% of the respondents, 68 persons). 5% of the students covered by the questionnaire (4 persons) have some doubts about their professional competences, theoretical knowledge, the possibilities of their professional use, the personal qualities of a teacher, and, on the whole, the chosen pedagogical profession (Marshall et al., 2015; Balboni et al., 2015).

The experiment involved taking into account the position that the main components of the future teachers' preparation for professional activity are: motivational-target, value-oriented, practical, communicative, and creative. The motivational-target component involves the formation of professional motivation for pedagogical activity, where the main purpose is the formation of the professional competencies of the future subject teachers. The value-oriented component is aimed at forming the professional competences of future teachers in the process of studying pedagogically oriented disciplines. The practical component of the future teachers' professional training for pedagogical activities provides for the formation of the future teacher's skills in applying the acquired knowledge in practice and reflects the ability to combine theoretical and practical knowledge in a way that contributes to such a process. The communicative component is responsible for the development of the future teacher's communicative skills, and it provides for the most effective communication activity of the future bachelor. The creative component contributes to the development of the creative abilities of future bachelors through solving the assigned tasks and non-standard professional situations.
Besides, we found out the students' opinion about the role and place of pedagogical (educational-productive) practice in preparation for the future teacher's profession, the formation of skills in the application of theoretical knowledge in practice, the development of the students' creative personality and professional abilities, the formation of their research skills, the level of cooperation between the heads and methodists of the higher educational institution and the secondary educational institution, as well as the problems that occurred during the teaching practice.

The success of the trainee students' pedagogical practice in institutions of secondary education, as our research showed, largely depends on the understanding by the pedagogical staff of its role in the preparation of the future teacher, the business mood of the pedagogical team, a friendly atmosphere for students, and a clear organization of the joint activities of subject teachers and trainee students. Of course, if the pedagogical staff of the school is aware of the responsibility for the task connected with the future teachers' preparation, the pedagogical practice will be successful.

In particular, the question in the questionnaire "Why did you choose the profession of a teacher?" and the discussion that was carried out on this issue facilitated the detection of the future educators' professional orientation. The results of the work are shown in Table 1. A positive attitude to the chosen profession, respect for the profession of a teacher, and a desire to be a teacher were expressed by 35% of respondents (28 students); 12 (15%) of the students are confident that they will be able to realize themselves in this profession; another 30% have a desire to "work with children" (16 students) or are satisfied with the mode of a teacher's work (8 students); and, in general, 20% of students had only a desire to get a higher education, no matter what (Table 1).

The value-oriented component reflects the formation of the professional competencies of the trainee student and his/her personal qualities as a future teacher. We have explored this component in the process of testing, interviewing, surveys, and conversations with trainee students. In one of the questionnaires, we asked some common questions, such as: "Did the teacher provide you with practical help during the educational work with learners?", "Did the subject teacher provide you with methodical advice for conducting the lesson?", "What qualities do you consider to be key ones in the activity of a teacher?", "What are the negative features of the professional and personal orientation of a teacher which can be harmful to the educational process?", etc. Most of the students, almost 85% of the total number of respondents (68 students), indicated in their answers the worthy assistance of a subject teacher who is well aware of the individual characteristics of the learners in his class. In this case, the teacher's activity is an additional source of pedagogical knowledge for the students. The educational power of the personal example of a teacher for students is manifested primarily in the fact that they analyze the pedagogical activity of the teacher and deliberately try to imitate the positive experience.
Without enough practical experience in educational work with children, students carefully observe how the teacher behaves in the classroom, how he communicates with children, and how he uses different techniques and methods of educational and educative influence, taking into account the individual characteristics of pupils and their personal trajectory. The moral-psychological and personal qualities of the subject teachers positively influence the formation of the professional interest of the trainee students and cause them a sincere feeling of satisfaction with the pedagogical practice, or, vice versa, a lack of desire to pursue the teaching profession in the future (Table 2).

The students note different aspects of the teacher's activity which awakened and intensified their love for children and pedagogical work in general. Most trainee students noted the following qualities of a subject teacher: deep knowledge of the subject (85%), a positive attitude towards trainee students (85%) and pupils (95%), and enthusiasm for their work (75%). Of course, the personal qualities of teachers distinguished by students are based on the moral and psychological qualities of the teachers with whom the students worked directly during the pedagogical practice, and they reflect the subjective perception of the students. At the same time, the image of the best teachers (experienced, with extensive experience, highly ranked) has become a pedagogical ideal for many students, a worthy example to follow.

However, some trainee students have also had to deal with teachers who inhibit future teachers' interest in educational work while they pass the educational-productive practice. The students include among such qualities the following (Table 3): the indifference and negative personality traits of the subject teachers, which, as we have identified in the indicators, adversely affect the organization and conduction of the pedagogical practice. It is important to note that the higher the mastery of the teacher, the higher his/her satisfaction with the work and, accordingly, the desire to help the trainee student to overcome difficulties during their entry into the pedagogical profession and practical activity.

Thus, the study proves that among the qualities that characterize the professional mastery of the teacher, students have identified the following: knowledge of the subject and of the psychological and pedagogical foundations of teaching; provision of an individualized approach to the learners and of interdisciplinary connections; working with a class asset; knowledge of the psychological and pedagogical bases of education; and the ability to involve pupils, parents, and teachers in the educational process.

The practical component of the future teacher's professional training is fully reflected in the trainee students' synopses of lessons in the relevant subjects and of educational activities, in their ability to apply existing theoretical knowledge in the practical activity of the subject teacher and class supervisor, and in their self-analyses and reports; it was also studied during the observation of the trainee students' activities and in conversations with the pedagogical staff of general secondary educational institutions and with pupils.
The practical component enables methodists in pedagogy to track the students' existing knowledge of pedagogy, age psychology, and the anatomy and hygiene of schoolchildren of different classes; to identify gaps in the theoretical base of the practitioners' knowledge and correct them through practical training; and to teach students to build the educational process following the modern legal framework and to apply the latest approaches in education and the methods, forms, and technologies of educational process organization, etc.

It is important today for the trainee students to take into account child-centrism in the educational process; impartial and fair treatment of every learner, overcoming any discrimination; the ability to mark the efforts and successes of all schoolchildren; and the need to build educational activities based on a personality-oriented model of education and on humanistic and competence approaches.

When planning pedagogical practice, it is necessary to take into account the pedagogical experience and the personal, professional, and moral qualities of the pedagogical workers who act as heads of practice at the base of passing. Along with the qualities we have described above, the leaders of the practice bases, the subject teachers, should possess innovative pedagogical technologies and introduce them into the educational process in the classes where they teach; they should also be acquainted with the modern tendencies of education development in the state and the peculiarities of education and upbringing under the requirements of the New Ukrainian School.

The communicative component of the future teacher's professional training involves, first and foremost, the creation of a safe educational environment for all participants of the educational process based on the pedagogy of cooperation. Among the methods we used to study the communicative component were observation, conversation, discussion, and pedagogical design. To develop the communication skills of the trainee student, some educational activities were carried out with pupils and their parents. The student was encouraged to apply forms of work in small groups at the lessons, project activities, situation modeling, dramatic education, and more. In such circumstances, the trainee student established interaction within the schoolchildren's body, motivated the learners for activity and mutual assistance, and developed the ability to resolve conflicts and build productive relationships. The activities of each future teacher should take the communicative component into account (finding common ground with pupils and fostering relationships between classmates, between students and teachers, and with pupils' parents).

The creative component of the future teacher's professional training is ensured by his creative, innovative educational activities, the search for new ways of realizing the didactic and educational tasks during the practice, and the ability to think critically. In close collaboration with the methodist from the higher educational institution and the head of practice from the institution of general secondary education, the future educator has the opportunity to apply innovative, new pedagogical technologies and introduce them into the educational process, and to carry out experimental work in order to reflect it in his scientific-research diploma work. It should be noted that trainee students objectively evaluate their abilities during their pedagogical practice. This is evidenced by their reports after the practice.
The student questionnaire contains several questions, including: "What steps is it necessary to take in order to improve the organization and passing of the pedagogical practice?", "What kind of knowledge do you lack to pass the practice?", "Did you have difficulty in communication with schoolchildren?", "What difficulties did you have while working with pupils in the classroom?", "Did you experience difficulties during the lesson or educational hours?", etc.

Based on our research and the components of the future teachers' professional training during the educational-productive practice, we have identified the following criteria for analyzing, generalizing, and evaluating the educational activity of the future teacher in general secondary educational institutions: motivational-prognostic, cognitive-knowledge, result-evaluative, and emotional-reflexive.

The motivational-prognostic criterion involves the study of the presence of interest in pedagogical activity and awareness of the meaning of the teacher's profession and of the opportunities to realize oneself in the productive organization of the educational, research, independent, and extracurricular activities of schoolchildren for their harmonious and comprehensive development. In our research, this criterion is represented by the following indicators: definition of the goals of learning activities and responsibility for the learning outcomes; the student's awareness of the importance of his intellectual development; the presence of a steady interest in professional activity; awareness of the importance of the level of professional activity; and aspiration for professional development.

At the heart of the cognitive-knowledge criterion is the level of the future teacher's mastery of the acquired special and professional knowledge, skills, and personal qualities. The indicators of the cognitive-knowledge criterion include the student's knowledge of pedagogy, psychology, methods of educational work, and teaching methods for the specialty; disclosure of the level of awareness in the professional field and the level of cognitive activity; the formation of the trainee student's qualities of thinking (speed, independence, flexibility); possession of the operations of thinking; and the display of creativity in professional activity.

The emotional-reflexive criterion characterizes the emotional stability and the ability of a teacher to reflect as a professional: the ability to carry out self-analysis of one's own activity and to be aware of the effect of educational influence. The indicators of the emotional-reflexive component reveal the essence and social significance of the future pedagogical profession, the manifestation of a lasting interest in it, and the ability to show emotional stability when working with children, parents, and teaching staff.

The result-evaluative criterion indicates the ability to regulate, monitor, and evaluate the student's activity; the availability of self-appraisal and self-knowledge skills and abilities; and a propensity for self-actualization and self-improvement.
The indicators of the result-evaluative criterion provide for verification of future teachers' readiness to actively apply the acquired competences in the real conditions of professional activity: the ability to solve real professional tasks effectively; the ability to extract what is essential from acquired experience; self-development and self-control skills; independent detection of errors; the ability to evaluate the results of one's activity in context; and personal responsibility for the results of one's work. The substantiated criteria included the presence in trainee students of a set of motives that drive them to self-education and the need for continuous self-development and self-improvement; awareness of the personal and social importance of professional activity; the ability to plan and carry out professional activities; the ability to work in student and pedagogical teams; the ability to make and maintain intra-subject and cross-domain links; the ability to find independently the optimal ways of accomplishing the tasks set before the trainee student; and the ability to adjust the results of one's professional activity by exercising self-control, self-examination, and self-appraisal. In the course of the experiment, the results of the experimental work were analyzed, and it was found that the pedagogical conditions introduced into the practical training of the trainee students raised the level of their professional competences by all criteria. Based on these criteria, the quality levels of the students' pedagogical activity at the beginning and at the end of the educational-productive practice were determined. Adaptive (low): the student solves pedagogical problems and situations through trial and error. He cannot apply in practice the knowledge acquired at the higher educational institution and does not use methodological literature; it is difficult for him to work with the subject teacher. Ill-considered working methods in the classroom, lesson aims determined spontaneously without taking into account the level of the children's preparedness, and spontaneously selected methods and techniques often lead to methodological errors; he develops plans for lessons and educational hours with considerable difficulty and many errors. The activities of such students are unsuccessful: they are unable to assess their capabilities adequately, they are passive, and they do not believe in themselves. Sufficient (intermediate): the student's practical activity becomes exploratory; he makes wide use of ready-made lesson plans and materials for educational hours; he can achieve good results by working from a model, copying the methods and techniques of other educators; when solving non-standard situations he can find an effective solution, mainly by analogy; originality in classes and educational activities is usually absent; the work is mostly based on outdated stereotypes with minor changes. Creative (high): characterized by a close connection between theoretical knowledge and great creative potential; the trainee student can independently develop a new idea that proves methodologically effective in classes and other activities; easily and correctly finds the right solutions to non-standard pedagogical situations; possesses a distinct style of pedagogical activity; and constantly strives for creative search and professionalism.
Upon completion of the educational-productive practice, the future teachers undergo diagnosis at the higher educational institution, which determines the real level of development of their professional qualities and skills. Theoretical competence includes a system of knowledge in pedagogy and psychology about the development of a certain age category of schoolchildren and the age peculiarities of mental processes in adolescent children; a complex of theoretical knowledge about the regularities, principles, methods, techniques, means, and forms of modern education, innovative pedagogical technologies, and the possibilities of implementing them in an institution of general secondary education; complex theoretical knowledge about the purpose of education in modern conditions, about ways of overcoming different types of conflicts, and about methods of preventing their occurrence; and an understanding of the essence of the methods, techniques, directions, and forms of organizing educational work with the younger generation. Practical and methodical competence involves the future teacher developing design, gnostic, procedural, organizational, constructive, and communicative skills. Design skills: can carry out perspective and weekly planning of his or her own work and of schoolchildren's activity; shows autonomy and initiative in planning educational and extracurricular classes in the specialty. Constructive skills: can define and substantiate the purpose, content, methods, and techniques of training; can draw up a detailed lesson plan, showing independence and initiative; can determine the content of educational hours in the specialty in accordance with the level of psychophysiological development of the learners; and can select appropriate material and model the form of the educational activity. Gnostic skills: the trainee student analyzes from different angles the lessons conducted by the teacher; can analyze educational hours in the specialty carried out by other students; and can analyze his or her own activities (the effectiveness of the lesson and educational hours, knowing how to make the necessary adjustments to their conduct). Procedural skills: the trainee student has a deep knowledge of the lesson material and does not make mistakes; can use different methods of activating schoolchildren's cognitive activity in lessons in a professional subject and during educational hours; and can evaluate learners' knowledge, skills, and competences according to the current standards. Organizational skills: the trainee student conducted the required number of lessons and educational hours; submitted a properly prepared report on the educational-productive practice on time; and actively participated in the discussion of the results of the educational-productive practice at the final conference or seminar. Communication skills: non-conflict communication during the practice; impeccable literary speech (in lessons and after school hours); and the ability to react correctly to comments that arise in the course of practical activity. The results of the research showed that the quality of students' pedagogical activity improved significantly between the beginning and the end of the educational-productive practice: after the practice, 25% of students (20 of the 80 trainees) reached the creative (high) level, 70% (56 students) reached the sufficient (intermediate) level, and a low level of theoretical and practical knowledge was observed in 5% (4 students).
At the beginning of the experiment, these indicators were as follows: the creative (high) level was demonstrated by 15% of students (12 students), the sufficient (intermediate) level by 60% (48 students), and a low level of theoretical and practical knowledge by 25% (20 students). For this reason, students with a low level of quality in pedagogical activity, and in particular in the formation of general and professional competences, received additional teaching and methodological support from the teacher-methodists, which consisted of counseling, collaboration, and the coordination and correction of their plans for lessons and educational activities. Thus, the methodists implemented a compensatory function in the course of the practice: they deepened theoretical knowledge in professional subjects, improved skills in the choice and application of teaching methods, and established cooperation with the pedagogical staff of the educational institution and with the class teams, which contributed to the improvement of the students' personal qualities and the raising of their teaching skills. To summarize the results of the students' completion of the educational-productive practice, the Department of General Pedagogy and Pedagogy of the Higher School of Pereiaslav-Khmelnytskyi Hryhoriy Skovoroda State Pedagogical University organized a special seminar on "Modern Challenges in Future Teacher's Preparation" (10 March 2020). Its purpose was to discuss the quality of the student's pedagogical activity in the process of the pedagogical educational-productive practice and his psychological and pedagogical readiness for practical professional activity. Students of various specialties, methodists, group supervisors, and subject teachers from the practice bases (the class leaders to whom the trainee students were assigned) were invited to participate in the seminar. Each participant had the opportunity to offer suggestions on how to improve the process of preparing future teachers for the educational-productive practice as a significant step in their professional activity. Based on the results of the seminar, a decision was made to improve the formation of the future teacher's pedagogical skills; to enhance students' training on studying relationships in the learners' team and the individual characteristics of schoolchildren; to improve students' ability to build lessons in accordance with psychological, pedagogical, and valeological principles; and to motivate creative approaches to the presentation of educational material; it was also recommended to update the educational and professional programs in line with educational needs.

Conclusion

Thus, the conducted research makes it possible to consider at a qualitative level the problem of forming the professional competence of the future teacher of secondary educational institutions in the process of pedagogical practice. The substantiated components of the organization, conduct, and results of the future teachers' practice (motivational-target, value-oriented, practical, communicative, and creative) point to the thoroughness of the research and its effectiveness. We have substantiated the criteria of the future teacher's activity in institutions of general secondary education (motivational-prognostic, cognitive-knowledge, result-evaluative, and emotional-reflexive) and the quality levels of the student's pedagogical activity (adaptive, sufficient, and creative).
The methods selected for our research (questionnaires, surveys, observation, testing, experiment, and the analysis and generalization of work results, etc.) contributed to the realization of the purpose and objectives of the study. The results indicate positive dynamics in the acquisition and improvement of the general and professional competences of future teachers in the process of practical training, in particular the educational-productive practice. Besides this, the study showed that students have profound, high-quality knowledge of their specialized subjects (pedagogy, psychology, teaching methods, etc.). The research proves once again that practical training is an important component of teachers' preparation and their self-affirmation.
Carrier Multiplication Mechanisms and Competing Processes in Colloidal Semiconductor Nanostructures

Quantum confined semiconductor nanoparticles, such as colloidal quantum dots, nanorods and nanoplatelets, have broad extended absorption spectra at energies above their bandgaps. This means that they can absorb light at high photon energies, leading to the formation of hot excitons with finite excited state lifetimes. During their existence, the hot electron and hole that comprise the exciton may start to cool as they relax to the band edge by phonon mediated or Auger cooling processes, or a combination of these. Alongside these cooling processes, there is the possibility that the hot exciton may split into two or more lower energy excitons in what is termed carrier multiplication (CM). The fission of the hot exciton to form lower energy multiexcitons is in direct competition with the cooling processes, with the timescales for multiplication and cooling often overlapping strongly in many materials. Once CM has been achieved, the next challenge is to preserve the multiexcitons long enough to make use of the bonus carriers in the face of another competing process, non-radiative Auger recombination. However, it has been found that Auger recombination and the several possible cooling processes can be manipulated and usefully suppressed or retarded by engineering the nanoparticle shape, size or composition and by the use of heterostructures, along with different choices of surface treatments. This review surveys some of the work that has led to an understanding of the rich carrier dynamics in semiconductor nanoparticles, and that has started to guide materials researchers to nanostructures that can tilt the balance in favour of efficient CM with sustained multiexciton lifetimes.

Introduction

The process of carrier multiplication (CM) in semiconductors follows excitation with a high energy photon whose energy is at least twice the bandgap, E_g, and may in fact need to be considerably higher for the multiplication process to occur. CM is sometimes also termed multiple exciton generation (MEG), where the excitation process leads to the formation of multi-exciton states. The process is known to occur in both bulk [1,2] and quantum confined semiconductors, i.e., quantum dots (QDs), and the latter are understood to offer advantages over their bulk counterparts owing to potentially lower multiplication thresholds and higher slope efficiencies [3] (the rate at which the carrier or exciton yield increases with excitation photon energy once the threshold has been exceeded, see Figure 1). Excitation at high photon energies is said to initially yield hot carriers (excitons, electrons or holes), where the excess energy is partitioned between the carriers according to the ratio of the inverse of their effective masses as an optical selection rule [3]. The precise mechanism by which the excess energy is converted to multi-excitons has been the subject of frequent debate, and we will discuss the various scenarios in more detail later. In bulk semiconductors, the kinetic impact ionization process is most frequently encountered and is often used as the model for CM in QDs by extrapolation from the bulk case, with modification where appropriate for momentum conservation considerations.
Figure 1. A staircase-like increase is shown in the ideal case for carrier multiplication (CM) where only energy conservation applies (black curve) and where momentum conservation must also be satisfied (red curve). In the latter case, the threshold corresponds to that of a semiconductor similar to bulk silicon (=4 E_g). In practice, quantum dots (QDs) do not show abrupt increases in exciton yields vs. excitation energy, as shown by the green and blue curves; their threshold may sometimes be termed 'soft'. The green line would correspond to a QD with equal hole and electron effective masses (m_e and m_h respectively), whilst the blue line shows a case where m_e/m_h = 0.2.

Impact ionization is well known in many bulk semiconductors and is the basis for established photodetector technology in avalanche photodiodes (APDs): Si, Ge, InAs, InP, InGaAs, Ga1−xAlxSb and Hg1−xCdxTe have all been exploited for APD devices [4][5][6]. In the Ga1−xAlxSb and Hg1−xCdxTe alloys, advantage is taken of the ability to manipulate the spin-orbit splitting energy in order to place the split-off band in resonance with the energy of the bandgap, enhancing the efficiency of the multiplication process for certain alloy compositions [4,6]. CM has also been envisaged as offering scope for substantial improvements in solar cell performance. Unfortunately, in the case of many bulk materials, the threshold for multiplication is rather high. For an optimum bandgap energy for solar conversion (around 1.2-1.4 eV) [7], the CM threshold for bulk silicon of around 4 E_g [8][9][10] puts most of the solar spectrum beyond the range of the CM effect. Beard et al. [11] have compared the threshold and slope efficiencies for bulk and QD semiconductors. They commented that for bulk PbSe and PbS the CM threshold should be 4 E_g (experimentally 4.5 E_g), whereas more recent experimental values for QD versions of these materials range between 2.7 E_g and 3 E_g. However, they also commented that for a significant benefit in solar cell performance the threshold energy should be as low as 2 E_g-2.5 E_g, i.e., at or approaching the energy conservation limit. For PbSe QDs as an example, Beard et al. predicted that the Shockley-Queisser solar cell efficiency limit [12] could be raised to around 42% [11], though more recent evaluations have suggested that rather lower efficiencies might be available (e.g., Binks, using experimental CM data, showed very marginal solar cell efficiency improvements in the 1.2 eV-1.4 eV bandgap range [13], and Nair et al. predicted realistic enhancements in power conversion efficiencies of under 5% [14], also based on the CM performance reported at that time). The actual improvement in the performance of solar cell test devices has been somewhat limited: Semonin et al. [15] attributed only 4% of the photocurrent in their PbSe photovoltaic devices to CM. The rather muted outcomes from trying to use CM to enhance solar cell efficiencies have led to a continued interest in the alternate approach of trying to extract hot carriers before either cooling or fission (i.e., CM) of the hot exciton can occur [16,17]. Nonetheless, if CM with a threshold near the energy conservation limit and a high slope efficiency can be obtained, then it remains a potentially interesting mechanism for improved solar cells, along with other applications including QD based lasers, high brightness light emitting devices, etc. However, to be of practical benefit, most of the carriers would have to be extracted within the respective multi-exciton lifetimes. This is due to the competing process of non-radiative Auger recombination, which rapidly and sequentially annihilates excess excitons until the QDs are only singly occupied, or worse still leaves the QDs in a charged state, so that even subsequent single exciton excitation will be at the mercy of fast trion Auger non-radiative recombination [18,19].
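To make the threshold and slope-efficiency picture of Figure 1 concrete, the sketch below implements the simple linear-onset parametrization that is often fitted to QD data: one exciton per absorbed photon below a threshold energy, plus one additional exciton for every electron-hole pair creation energy of excess energy, capped by energy conservation. The function name and the numerical values are illustrative placeholders, not fitted parameters from any of the cited studies.

```python
import numpy as np

def cm_quantum_yield(hv, e_g, hv_th, eps_eh):
    """Exciton yield per absorbed photon under a simple linear-onset model.

    hv     : photon energy (eV)
    e_g    : bandgap (eV)
    hv_th  : CM threshold energy (eV), e.g. ~3 * e_g for PbSe QDs
    eps_eh : electron-hole pair creation energy (eV) -- the extra energy
             needed per additional exciton; sets the slope efficiency e_g/eps_eh
    """
    qy = 1.0 + np.clip(hv - hv_th, 0.0, None) / eps_eh
    # A photon of energy hv can never yield more than floor(hv / e_g) excitons.
    return np.minimum(qy, np.floor(hv / e_g))

# Illustrative only: a 1.0 eV-gap QD with a threshold of 3 E_g and a
# pair-creation energy of 2 E_g.
hv = np.linspace(1.0, 6.0, 6)
print(cm_quantum_yield(hv, e_g=1.0, hv_th=3.0, eps_eh=2.0))
```

Note that real QDs show the 'soft' onsets of Figure 1 rather than this sharp kink, so fitted thresholds depend somewhat on the chosen functional form.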
For spherical QDs, the biexciton lifetimes are typically a few tens to ~100 ps; for core/shell heterostructures, several hundreds or even thousands of ps [20]; for nanorods, similarly, 1000 ps is achievable [21]; whilst in the more recently explored 2D nanoparticles (nanoplatelets), biexcitons can survive for as long as 14,000 ps [22]. For higher multi-exciton occupancies, the triexciton and n-excitons decay even more rapidly than the biexciton [23], and so fast carrier extraction is a must for solar cell applications in order to reap the benefits of CM.

CM Quantum Yield Measurements

Multi-excitons can exist in QDs either because a hot exciton has undergone fission to form two or more cooler excitons, or simply because more than one separate photon absorption event has each yielded an exciton before the other(s) have decayed by radiative or non-radiative channels. The greater the excitation fluence, the higher the probability that this may occur. Irrespective of how the multiexcitons were created in the nanoparticle, as soon as two or more excitons are in residence, Auger recombination may occur, progressively removing one exciton at a time until only a single exciton remains. In many cases, the latter has a much longer lifetime than any of the multi-excitons, and this forms the basis for several measurement techniques that can follow an ensemble's exciton occupancy. Femtosecond/picosecond transient absorption [24] (TA), heterodyne transient grating [25] (TG), picosecond transient photoluminescence [26] (TPL), and time resolved microwave conductivity [27] (TRMC) measurement techniques can all follow the evolution of the (average) exciton occupancy per QD. In the TA case, for example, the band edge absorption is rapidly bleached as excitons populate the QD and cool towards the band edge. The intensity of the bleach is a measure (a linear function) of the number of excitons per QD. As Auger recombination progresses, the bleach intensity falls to an almost asymptotic level, usually on the nanosecond timescale, with the level corresponding to single exciton occupancy (e.g., Figure 2) [28]. On longer timescales, the bleach will recover completely as the much slower single exciton recombination channels return the QD to the neutral ground state. Similar dynamics are seen in TG signals. The peak number of excitons can therefore simply be determined from the ratio of the peak to the single exciton asymptotic signal levels. To determine how many of the multiexcitons were formed by hot exciton fission (e.g., by impact ionization) rather than by multiple absorption events, the experiment can be repeated several times over a range of different fluences, including very low levels, and the zero-fluence peak occupancy can be extracted from the fluence dependence. Unfortunately, in practice things are not quite so simple. Whilst the above experimental method/analysis is correct in principle, it has been suggested that there is a possible additional contribution to the picosecond decay that may masquerade as Auger multiexciton decay but which in fact originates from the decay signal of charged electron-hole complexes such as trions [19] (one electron and two holes, or vice versa) or charged multi-excitons. Such additional contributions may inflate the apparent CM quantum yield (QY) or indeed appear to give evidence of CM where there is none (or where the true CM threshold is a lot greater than it appears).
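A minimal sketch of the fluence analysis described above, assuming Poisson statistics for photon absorption: the early-time TA bleach scales with the mean number of excitons per QD, the late-time plateau with the fraction of QDs still occupied after Auger recombination, and their ratio extrapolates to the CM quantum yield at zero fluence. The example numbers are illustrative, not taken from the cited measurements, and band-edge state-filling saturation at high occupancies is ignored.

```python
import numpy as np

def peak_to_tail_ratio(mean_absorbed, qy):
    """Ratio of early-time (peak) to late-time (asymptotic) TA bleach.

    Assumes Poisson absorption statistics: with <N> absorbed photons per QD
    and a CM yield of `qy` excitons per absorbed photon, the peak bleach
    scales with the mean exciton number qy * <N>, while the late-time bleach
    scales with the fraction of QDs left with one surviving exciton after
    Auger recombination, 1 - exp(-<N>).
    """
    return qy * mean_absorbed / (1.0 - np.exp(-mean_absorbed))

# In the limit <N> -> 0 the ratio tends to the CM quantum yield itself,
# which is why the measurement is repeated at several low fluences and
# extrapolated to zero fluence.
for n_abs in (0.5, 0.1, 0.02):
    print(n_abs, peak_to_tail_ratio(n_abs, qy=1.3))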
Califano [29] modelled the decay kinetics of biexcitons and of both positive and negative trions in the InAs and InAs/CdSe core/shell systems and concluded, for that material, that trion decays, being several times slower than biexciton decays, should be distinguishable when fitting the bleach recovery signal. However, he also suggested that the large variability in reported CM QYs may be at least partially explained by surface effects arising from different synthetic methods and from surface treatments during sample purification and preparation.

An early striking example of the realization that possible photocharging effects may complicate the extraction of CM parameters from experimental data comes from work by Pijpers et al. [30] on InAs QDs, including InAs/CdSe/ZnSe core/shell/shell structures. In that work, a CM yield of 1.6 at an excitation energy of 2.7 E_g was reported. However, a subset of the same group of authors [31] revisited the same system with TA measurements the following year and, with an improved approach to the extrapolation to zero excitation fluence, failed to observe any CM up to excitation energies of 3.7 E_g. Similarly, when Nair et al. attempted to observe CM in PbS, PbSe [26] and CdSe, CdTe [32] QDs using ultrafast TPL measurements, no or very limited evidence was found, even above previously reported CM thresholds that were well above the energy conservation limit. Time resolved photoluminescence experiments using time correlated photon counting as the detection method require only very weak excitation, thus ensuring that photocharging (e.g., originating from Auger recombination and ionization processes following multiple absorption events) is likely to be minimal or absent. McGuire et al. [33] considered the effect of a known degree of photocharging on the TA bleach signal and showed how the true CM QY could then be recovered from a measured signal (for PbSe QDs, see Figure 3). They also showed that the true CM signal could be obtained experimentally if the sample was stirred during measurements, since any charged fraction of the measured solution would be swept out of the pump/probe beams and be diluted or have sufficient time to become neutralized by non-geminate recombination before re-entering the beam once more. Although only applicable to (low viscosity) solutions, this measurement approach has now become commonplace. Other measures to combat photocharging include using a flow cell arrangement and rastering the measurement cell through the beam, though the latter is perhaps more effective in avoiding the runaway accumulation of damaged or precipitated material on the interior surface of the measurement cell with more fragile materials where colloid stability is marginal. For many materials in solution, stirring is sufficient both for the removal of photocharging artefacts and for upholding the material stability. In the latter respect, monitoring the low excitation fluence steady state photoluminescence (PL) (single exciton) QY before and after TA and similar measurements will give a clear indication of any incipient material damage during pump-probe measurements.

Figure 3. Effect of sample stirring on the PL dynamics of PbSe QDs, showing a significant static-stirred difference in PL dynamics (at exciton occupancy <N_abs> = 1.4). This difference persists in the limit of low pump intensities. Inset: the ratio of the early to late-time PL signals (A/B) as a function of <N_abs> for 3.08 eV (black crosses) and 1.54 eV (red circles) excitation. All data were acquired using femtosecond PL up-conversion with a temporal resolution of ≤4 ps. Adapted with permission from ref. [33]. Copyright (2010) American Chemical Society.

In further work investigating the photocharging phenomenon in QDs, McGuire et al. [19] also showed how even low intensity steady state UV irradiation could lead to an accumulation of photocharged QDs. In their PbSe QD samples they estimated that there was a photochargeable fraction of 5-15% in steady state experiments. Whilst the probability of a single photon absorption event leading to photocharging was only 10^−3-10^−4, the long lifetime of the charged species (tens of seconds) could lead to a significant build-up (in the probe beam volume) unless this was offset by stirring.
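The distortion that an uncorrected charged fraction produces can be illustrated with a toy calculation. The sketch below simplifies the published analyses by assuming that charged QDs contribute to the early-time signal but, because their last surviving carriers form a fast Auger-decaying trion, not to the late-time single-exciton plateau; even a modest charged fraction then mimics a CM signal.

```python
import numpy as np

def apparent_ratio(mean_absorbed, true_qy, charged_fraction):
    """Toy model of how photocharging inflates the measured peak/tail ratio.

    Assumption (a simplification, not the published correction): in a
    charged QD the surviving exciton is a trion that decays by fast Auger
    recombination, so charged QDs contribute to the early-time signal but
    not to the late-time single-exciton plateau.
    """
    excited = 1.0 - np.exp(-mean_absorbed)        # QDs with >= 1 exciton
    peak = true_qy * mean_absorbed                # all QDs contribute early
    tail = (1.0 - charged_fraction) * excited     # only neutral QDs late
    return peak / tail

# With a 10% charged fraction, a sample with no CM at all (true_qy = 1)
# already shows an apparent low-fluence ratio of ~1.1 -- a spurious "CM" signal.
print(apparent_ratio(0.02, true_qy=1.0, charged_fraction=0.10))
```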
There has been some debate over whether photocharging affects the CM efficiency itself (as distinct from simply obscuring its measurement). McGuire et al. [19,33] suggest that in practice the presence of charges does not affect the actual CM efficiency, whilst theoretical arguments (based on atomistic modelling of small PbSe clusters) have been advanced that suggest the CM efficiency might be suppressed under these circumstances by shifting the CM threshold to higher energies [34]. Not surprisingly, these findings cast a lot of doubt on some of the earlier reports of significant CM in QDs, and led to a spate of reassessment of several materials, the frequently studied PbSe QDs in particular. Trinh et al. [35] measured CM in PbSe taking steps to ensure proper discrimination between multi-excitons generated by CM and by multiple photon absorption, by careful determination of the signal in the very low fluence range (where the average number of absorbed photons per QD could be as low as 0.05), and by accounting for any pump-induced spectral shifts in their TA spectra by spectrally integrating over the whole bleach absorption peak rather than simply using the time dependent bleach intensity at a single wavelength at or in the vicinity of the peak. With this rigorous analysis, the authors found the CM efficiency to be almost a factor of two lower than in previous reports [36] for equivalent excitation energies of ~4.8 E_g, but did confirm that CM does occur at such energies. Later TA measurements by Gdor et al. [37], again using spectral integration to ensure pump dependent peak shifts did not cloud the data analysis, showed no evidence of CM at excitation energies up to 3.7 E_g, suggesting either a threshold in the range 3.7 E_g-4.8 E_g (assuming that these studies used equivalently sized PbSe QDs) or that CM efficiencies and thresholds were batch dependent, e.g., due to differences in surfaces or exposure to air, etc. Time resolved two photon photoemission spectroscopy measurements on PbSe thin films by Miaja-Avila et al. [38] allowed the hot electron relaxation and CM generation processes to be mapped. The samples were prepared as thin films and treated with ethanedithiol (EDT) after deposition, whilst measurements were carried out under high vacuum conditions. No evidence of CM was seen from intraband hot electrons with energies up to 3 E_g, but for interband hot electrons CM was observed for energies of ~4 E_g. This might be consistent with the differing findings in the Gdor et al. and Trinh et al. reports, but here again there is scope for some debate, as the EDT treatment was found by Beard et al. [39] to severely quench the CM response in their study of the effect of a range of surface treatments on PbSe films. Ultrafast time resolved photoluminescence (TRPL) is often suggested as a potentially superior measurement technique for CM studies. Sub-nanosecond TRPL measurements using photon counting detectors and photon correlation generally require only very low fluence excitation (consistent with average numbers of absorbed photons per QD being <<0.1) and can offer good signal to noise ratios. However, when the bandgap and therefore the detection wavelength move into the infrared (IR), beyond the silicon APD and photon counting photomultiplier ranges, detection becomes a problem. Nonetheless, recent advances in detectors based on superconducting wires have shown the potential to greatly improve TRPL experiments in this energy range.
The alternative, in which photoluminescence photons are upconverted before detection with conventional lower wavelength range detectors (uPL), requires extremely long run durations (e.g., 10 h) to build up decay transients, whereas Sandberg et al. [40] reported being able to acquire signals between 10 and 100 times faster than with uPL. Using a combination of both techniques, they compared the electron-hole creation energies for PbSe QDs and nanorods and concluded that the lower value for nanorods (2.6 E_g) compared with QDs (3.2 E_g) arises from the elongated structure of the former. In the PbSe QDs and nanorods, CM was observed at excitation energies in the 3 E_g-5 E_g range.

Carrier Cooling and CM Efficiency

Following the excitation of a hot carrier, several possible fates may await. The carrier may relax to the band edge via either interband or intraband cooling processes (depending on how much excess energy above the bandgap the hot carrier has), which will involve the emission of phonons and the eventual dissipation of excess energy to the lattice. Alternately, the hot carrier may undergo CM, resulting in the formation of two or more excitons (Figure 4). In order to select one or other material as a good CM candidate, it is therefore useful to be able to compare the likely degree of competition between these two principal pathways by which the hot excited state relaxes. Stewart et al. [41,42] made ranking comparisons of the relative ratios of the electron-hole creation energies of PbTe, PbSe, and PbS (1:1.8:4.5 respectively) predicted from their model for a competing two-channel relaxation scheme and compared the trend with that seen for the measured 1P-1S cooling rate constants, k_1P1S, PbTe:PbSe:PbS ≈ 1:2.0:4.2. Their analysis leads to the correlation that the electron-hole creation energy, ε_eh, and the cooling rate are connected as

$\varepsilon_{eh} \propto k_{cool}\,\tau_{CM}$, (1)

where the cooling rate k_cool is assumed to scale for all three materials in proportion to the measured k_1P1S values. Thus, the CM rate, 1/τ_CM, is assumed to be broadly similar for all three materials, and so the competing cooling channel dictates the ε_eh trend. The cooling rate trend is similar, at least semi-quantitatively, to the trends in the bulk phonon cooling rates predicted from the product of the polar coupling constants, α_F, and longitudinal optical (LO) phonon energies, ħω_LO, for the same materials. The Fröhlich coupling constant for bulk materials is given as

$\alpha_F = \dfrac{e^2}{\hbar}\sqrt{\dfrac{m}{2\hbar\omega_{LO}}}\left(\dfrac{1}{\kappa_\infty} - \dfrac{1}{\kappa_0}\right)$, (2)

where e is the electron charge, m is the effective mass, and κ_∞ and κ_0 are the high frequency and static dielectric constants respectively. The broad similarity in the CM rate is also reflected in the similarity of the biexciton Auger rate constants for the three materials. Whilst Auger recombination and CM can be considered as mutually inverse processes, their rates are not equal, as the densities of the final states in either direction are not identical. However, Stewart et al. [42] also pointed out that the density of states, g_xx, of the CM process that terminates in a biexciton has a simple relationship to that of the Auger relaxation terminating in a hot exciton with density of states (DOS) g_x, namely g_xx ∝ g_x^n, n > 2. The authors [41] suggested that the trends across the three chalcogenides may point to a simple near-linear scaling of the rates for the forward and reverse processes, consistent with the DOS argument.
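For a rough feel for the magnitudes involved, Equation (2) can be evaluated numerically; the sketch below uses the SI form of the Fröhlich constant, and the CdSe-like bulk parameters are approximate literature values taken as illustrative assumptions rather than from this review.

```python
from scipy.constants import e, hbar, epsilon_0, m_e, pi

def frohlich_alpha(m_eff, kappa_inf, kappa_0, hw_lo_ev):
    """Frohlich coupling constant (Equation (2)), written in SI form.

    m_eff     : carrier effective mass in units of the free-electron mass
    kappa_inf : high-frequency dielectric constant
    kappa_0   : static dielectric constant
    hw_lo_ev  : LO phonon energy (eV)
    """
    hw_lo = hw_lo_ev * e                                  # phonon energy in joules
    coulomb = e**2 / (4.0 * pi * epsilon_0 * hbar)        # e^2/(4*pi*eps0*hbar)
    return coulomb * (m_eff * m_e / (2.0 * hw_lo))**0.5 * (1.0/kappa_inf - 1.0/kappa_0)

# Illustrative CdSe-like parameters (assumed, approximate):
# m* ~ 0.13 m_e, kappa_inf ~ 6.2, kappa_0 ~ 9.3, LO phonon ~ 26 meV.
print(frohlich_alpha(0.13, 6.2, 9.3, 0.026))
```

With these inputs the result is of order α_F ≈ 0.4, in line with the weak-to-intermediate polar coupling usually quoted for CdSe.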
Thus, measurement of Auger decay constants and calculation of cooling rates for the bulk materials would allow an estimate of the likely trends in CM efficiencies (at least for related materials where the approximations behind Equation (2) are broadly similar).

Figure 4. Possible relaxation channels following the absorption of a photon to create a hot exciton. These include electron-hole scattering where, if their effective masses are dissimilar, the (lighter) electron may lose energy to the (heavier) hole. The latter can then relax more readily by phonon emission through the denser hole states to the band edge. In the absence of the Auger cooling channel, excess energy may be lost by sequential phonon emission via a Fröhlich-type relaxation; this should be very slow overall. Some groups have suggested that multiphonon emission steps may be possible in a non-adiabatic process, as a consequence of strong confinement or mediated by surface interactions (e.g., with ligand molecular vibrations). CM competes with all cooling channels, converting some or all of the excess energy into an additional (or, at high excitation energies, several additional) exciton(s).

The hot carrier cooling process in QDs has been widely explored both experimentally and theoretically using a wide variety of modelling techniques. Kilina et al. [43] used an ab initio finite time domain model for 32-atom PbSe particles and observed evidence of multiphonon relaxation, strong coupling of both holes and electrons to acoustic phonon modes, and the relaxation of symmetry-forbidden transitions, consistent with experimental findings. Hole relaxation only slightly outpaced electron relaxation, with both occurring on few-picosecond timescales, whilst the model also failed to show any evidence of a phonon bottleneck that would otherwise dramatically slow carrier cooling. Whilst the hole and electron effective masses in the lead chalcogenides are very similar, the situation is less symmetric in materials such as CdSe and InAs, where the electron effective mass is far smaller than that of the holes. Califano [44,45] considered the effect of an Auger cooling mechanism (see Figure 4) in a semi-empirical pseudopotential model whereby a hot electron interacts with a hole, losing energy and in the process exciting the hole. The latter then efficiently cools through the more densely spaced valence bands, losing the energy just acquired from the interaction with the electron and so increasing the overall cooling rate. Such Auger cooling channels could also effectively compete with the CM process (especially where the two carrier effective masses are very dissimilar) and account for the more rapid cooling observed experimentally than anticipated for a multi-phonon cooling process. Using similar modelling techniques, Califano [29] also calculated the InAs CM lifetime, τ_CM, as a few tens of femtoseconds, thus suggesting that CM should be able to compete effectively with phonon or Auger cooling channels. A number of time domain ab initio modelling methods have been used by Neukirch and Prezhdo [46] to study not only exciton and biexciton formation but also to follow relaxation processes such as carrier-phonon interactions and Auger cooling. These modelling methods are generally limited to relatively small structures of several tens of atoms at present, due to limitations in processing speed and capacity, but nonetheless the results shed some light on many aspects of experimental findings with much larger particles. In regard to cooling mechanisms, the modelling approaches considered various aspects of electron-phonon coupling, both direct and mediated by surface ligands. In both CdSe and PbSe alike, direct electron-phonon and hole-phonon coupling was seen to involve predominantly lower frequency acoustic modes rather than high frequency optical modes as in bulk materials. These processes also compete with Auger-type cooling channels. The latter are assumed to be less effective in Pb chalcogenides than in CdSe, for example, on account of the very symmetric conduction and valence bands in the former and their very small effective masses for both holes and electrons. This means that electron-hole interactions result in very little energy transfer and little advantage in dissipating the excess energy.
In CdSe, by comparison, the hotter electrons can short-circuit the need to cool by direct phonon emission through more sparsely separated bands by transferring energy to the heavier holes, which can then relax more efficiently by phonon emission through more densely spaced bands. However, electron cooling could be enhanced (in either type of material) by interaction with high energy vibrational modes of the surface ligands. The authors point out that separation of the electrons and holes in materials such as CdSe (e.g., in heterostructures) could suppress the extent of electron-hole energy transfer and allow CM the chance to compete more effectively with cooling. The intraband cooling rates in PbSe and CdSe QDs, and their dependence on temperature and environmental modifications (ligand and solvent changes), have been compared by Schaller et al. [47]. They concluded that in both materials the influence of the surface on the cooling rates was negligible [47,48]. In CdSe QDs the cooling rate was thermally insensitive, whereas PbSe QDs showed a size dependent activation of a cooling channel (e.g., above 130 K and 170 K for 3.5 nm and 1.9 nm radii QDs respectively). The conclusion was that in CdSe QDs the Auger cooling channel was the dominant process, whilst in PbSe QDs the latter was ineffective, with a phonon relaxation channel being more important. Since a single phonon process would not be sufficiently fast to account for the cooling rates involved, a non-adiabatically coupled multiphonon emission process was suggested. Ten Cate et al. [49] studied the CM efficiency and cooling rate dependence on temperature in EDT-linked PbSe solid films infilled with Al2O3 or Al2O3/ZnO using the TRMC technique. Over the range 90 K-295 K there was no temperature dependence for either cooling rates or CM, suggesting that phonons are not required or do not participate in the matching of transitions in the CM process. The efficiency was then limited by the cooling/CM competition, determined by spontaneous LO phonon emission. Interestingly, the oxide infilling was found to be necessary to allow the CM process to be observed [27]. Non-infilled PbSe/EDT films showed no significant CM response, in agreement with other similar studies [38,39]. The size dependence of the carrier cooling rates has been determined by several groups. Stolle et al. [50] saw a decline in the overall carrier cooling rate with increasing diameter in CuInSe2 QDs, consistent with a linear dependence on volume. Harbold et al. [51] saw a similar rising trend in the intraband relaxation times with PbSe QD diameter, which may be consistent with a similar volume scaling law for the cooling rate. In the latter case, symmetrical valence and conduction bands mean that differences in hole and electron cooling rates cannot readily be distinguished. The decrease in cooling rates with increasing size, in the face of a decreasing spacing of intraband states, may suggest that cooling by LO phonon emission (as in bulk materials) is not the dominant cooling mechanism [52] in such QDs. Cooling dynamics have also been studied in InAs QDs, which were either treated with pyridine as a surface species (with measurements made in the presence of a sodium biphenyl reducing agent) or left as regular trioctylphosphine oxide capped QDs under double pump pulse (IR + visible) excitation.
In this case, the electron effective mass is far lower than that of the hole, but by removing the latter via the chemical treatments or by separating the charges, the electron cooling time in the absence of the hole was determined to be around an order of magnitude slower than when both types of carriers were present. This also reinforces the fact that in QDs, especially those with disparate carrier effective masses, carrier-carrier scattering effects play a very strong role in the cooling process. The individual cooling rates of electrons and holes in PbSe QDs have been determined by Spoor et al. [53,54] (see Figure 5) where the separation of hole and electron transitions was facilitated by using a dye (methylene blue) to extract photogenerated electrons and so identify which transitions in TA spectra arise from either type of carrier relaxation. The work was further extended to higher excited state relaxations away from the band edge [54]. Near the band gap, the electron and hole cooling rates were 0.54 eV/ps and 2.75 eV/ps respectively whilst for the highest excited states the rates increased to 1.52 eV/ps and 6.8 eV/ps. Given the symmetries in the electron and hole bands near the band gap and the similarities in the two effective masses the band edge cooling rates might be expected to be closer, though the differences in the higher excited state cooling rates could be attributed to less symmetric band structures well away from the band edge. The near band edge dissimilarity in cooling rates does lend support to Zunger et al.'s [55] modelling of PbSe QDs which shows differing valence and conduction band DOS (with the valence band DOS being greater than for the conduction band) in contrast to the case for the bulk material. Spoor et al. [54] attributed only the cooling at very high excess energies for either carrier to be due to bulk-like LO phonon emission but at lower energies resolved several separate cooling steps (in cooling rates) which were each explained as involving phonon or surface ligand vibrational modes in view of the larger separations in energy levels nearer the band edge ( Figure 5). CM Mechanisms So far little has been said about the exact nature of the CM process or mechanism, with the foregoing only considering the competition between CM (by whatever means) and other cooling processes. CM is well known in bulk semiconductors and is described by a number of impact ionization variants based on the different types of band structure encountered near the band gap. Expressions are given for the threshold energies for a number of these cases by Landsberg [1], for example. The threshold energies for those semiconductors (e.g., Si and Ge), for a long time considered the most relevant for solar energy generation, are unfortunately too high (≥4 Eg)to be of practical use in generating additional carriers with the terrestrial solar spectrum [9,10]. One of the primary uses of impact ionization in bulk semiconductors has been in the design and manufacture of solid state APDs [2] with photocurrent gain typically in the ×10-×100 range. Here kinetic energy for ionization is supplied by an accelerating bias voltage across the junction rather than by photogenerated hot carriers but otherwise the multiplication process is identical. The range of materials used in APDs includes Si, Ge, GaAs, GaP, InP, InAs, InSb, and InGaAs, GaAlSb and HgCdTe alloys. 
In materials such as GaAlSb and HgCdTe which have zincblende structures, spin-orbit coupling is exhibited with the split-off valence band lying below the band gap by an energy interval Δ. When the latter is on resonance with the bandgap energy Eg, the impact ionization process can be enhanced and since the inverse Auger transitions are all vertical (no changes in momentum), the energy threshold is in principle simply 2 Eg However, this circumstance has to be contrived by adjusting the alloy compositions which then leads to Eg values that are too low for practical solar energy applications, but still useful for IR photodetectors. The initial expectation was that CM in QDs should be both more efficient and have a lower threshold than in the corresponding bulk materials owing to the strong Coulomb interactions in strongly confined QDs [7]. Califano et al. [58] formulated a model based on impact ionization as the basis for direct carrier multiplication (DCM), and indeed saw higher multiplication rates in CdSe QDs than in bulk. Their pseudopotential model found the AR and DCM processes to be highly sensitive to the QD surface whilst the presence of gaps in the hole manifold of states encountered for excess energies well above threshold, allowed Auger cooling the chance to compete strongly with DCM across ranges corresponding to the gaps. Separation of the electrons and holes (e.g., in heterostructures) to prevent Auger cooling was a remedy suggested to suppress this competition. The same modelling approach was extended to PbSe QDs [59] and again only impact ionization was found to be necessary Direct observation of electron-phonon interactions and the discrimination between coherent optical and acoustic phonon mode couplings in CdSe QDs have been reported by Kambhampati et al. [56,57] using state resolved transient spectroscopy. They reported that the acoustic phonon coupling is slightly stronger than that for the exciton-optical phonon interaction. CM Mechanisms So far little has been said about the exact nature of the CM process or mechanism, with the foregoing only considering the competition between CM (by whatever means) and other cooling processes. CM is well known in bulk semiconductors and is described by a number of impact ionization variants based on the different types of band structure encountered near the band gap. Expressions are given for the threshold energies for a number of these cases by Landsberg [1], for example. The threshold energies for those semiconductors (e.g., Si and Ge), for a long time considered the most relevant for solar energy generation, are unfortunately too high (≥4 E g )to be of practical use in generating additional carriers with the terrestrial solar spectrum [9,10]. One of the primary uses of impact ionization in bulk semiconductors has been in the design and manufacture of solid state APDs [2] with photocurrent gain typically in the ×10-×100 range. Here kinetic energy for ionization is supplied by an accelerating bias voltage across the junction rather than by photogenerated hot carriers but otherwise the multiplication process is identical. The range of materials used in APDs includes Si, Ge, GaAs, GaP, InP, InAs, InSb, and InGaAs, GaAlSb and HgCdTe alloys. In materials such as GaAlSb and HgCdTe which have zincblende structures, spin-orbit coupling is exhibited with the split-off valence band lying below the band gap by an energy interval ∆. 
The initial expectation was that CM in QDs should be both more efficient and have a lower threshold than in the corresponding bulk materials, owing to the strong Coulomb interactions in strongly confined QDs [7]. Califano et al. [58] formulated a model based on impact ionization as the basis for direct carrier multiplication (DCM), and indeed saw higher multiplication rates in CdSe QDs than in bulk. Their pseudopotential model found the AR and DCM processes to be highly sensitive to the QD surface, whilst the presence of gaps in the hole manifold of states encountered for excess energies well above threshold allowed Auger cooling the chance to compete strongly with DCM across ranges corresponding to the gaps. Separation of the electrons and holes (e.g., in heterostructures) to prevent Auger cooling was a remedy suggested to suppress this competition. The same modelling approach was extended to PbSe QDs [59], and again only impact ionization was found to be necessary as the basic mechanism for CM and to produce a model which replicated the experimental findings of the time in terms of low threshold energies (2.2 E_g) and sub-ps CM rates. Many groups revised their threshold energy values upwards later on, in the light of the above-mentioned controversies over the appearance of CM-like artefacts in early measurements. More recently, Califano [44] examined the DCM and Auger cooling competition in CdSe QDs using both an impact ionization and a CM model-independent approach, and found close agreement between both theoretical approaches.
Low threshold energies (of around 2 E_g) found experimentally in PbSe and PbS prompted Ellingson et al. [9] to suggest a more elaborate CM mechanism, as depicted in Figure 6, by which multi-excitons are formed via a coherent superposition of single and multiexciton states, with the final outcome (multiexcitons) dictated by faster phonon relaxation (dephasing) from the multiexcitons than from the hot single exciton. However, the lack of any observation of oscillations between single exciton and multiexciton states before dephasing could complete prompted Klimov [28,60] to propose an alternative CM mechanism where multiexcitons are generated directly via Coulomb coupling to a hot virtual (rather than real) single exciton state. The coherent superposition model has also been explored extensively by Shabaev et al. [18,61].

Figure 6. Photoexcitation at 3 E_g creates a 2P_e-2P_h exciton state. This state is coupled to multiparticle states with matrix element V and forms a coherent superposition of single and multiparticle exciton states within ~250 fs. The coherent superposition dephases due to interactions with phonons; asymmetric states (such as a 2P_e-1S_h) couple strongly to longitudinal optical (LO) phonons and dephase at a rate of τ^{-1}. Adapted with permission from ref. [9]. Copyright (2005) American Chemical Society.

Again, oscillation (sometimes termed quantum beats) between the (hot) single exciton and multiexciton states is predicted for strongly coupled systems where the multiexciton decay rate is much slower than the Coulomb driven coupling rate between the two states. However, even theoretically the authors show that such oscillations can be obscured due to the large multiplicity of multiexciton states [18]. Shabaev et al. [18] also pointed out that, in contrast to the case in spherical QDs, in nanorods, nanowires and 2D nanoparticles there can be significant and unscreened penetration of the hole and electron electric fields into the usually markedly lower permittivity dielectric surrounding the nanoparticle. This tends to strongly increase the rate of the Coulomb coupling via an enhanced exciton binding energy. Provided that the multiexciton relaxation (dephasing) is fast enough, this should result in enhanced CM efficiencies and lower thresholds in such materials.
Schaller et al. [62] gave a simple expression for the threshold energy, E_th (irrespective of the details of the CM mechanism), on the basis of simple carrier effective mass considerations as

E_th = (2 + m_e/m_h) E_g

(written here for the case where the electron is the lighter carrier). On this basis semiconductors such as the lead chalcogenides, which have almost equal hole and electron effective masses, are expected to have thresholds near 3 E_g. CdSe with m_e/m_h at 0.17 should have a threshold at 2.17 E_g, whilst materials such as Hg_{1-x}Cd_xTe and the III-V materials InSb and InAs, where the electron effective mass can be far lighter than that of the hole, would have thresholds approaching the energy conservation limit of 2 E_g (Figure 7). Schaller et al. [63] also suggested that if the biexciton Coulomb interaction energy (Δ_XX) is sufficiently strong, the threshold energy could be lowered even below the 2 E_g energy conservation limit,

E_th = 2 E_g - Δ_XX.

This situation could arise in the low bandgap, lower permittivity III-V materials such as InAs and InSb, whereas the lead chalcogenides have much higher permittivities, correspondingly stronger Coulomb interactions and so lower biexciton energies. Even in CdSe, neutral biexciton binding energies may reach several tens of meV [64]. It was also suggested that even larger attractive biexciton binding energies could be specifically engineered in heterostructures [64,65].
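A minimal numerical sketch of this effective-mass estimate follows directly from the expression above; the PbSe and CdSe mass ratios are those quoted in the text, while the InAs/InSb-like value is merely illustrative of a very light electron.

```python
# Effective-mass CM threshold, E_th = (2 + m_e/m_h) * E_g, assuming the
# lighter electron initiates the process. Mass ratios are illustrative.

def cm_threshold_in_eg(me_over_mh):
    """Threshold photon energy in units of the band gap E_g."""
    return 2.0 + me_over_mh

for material, ratio in [("PbSe (m_e ~ m_h)", 1.0),
                        ("CdSe", 0.17),
                        ("InAs/InSb-like (light electron)", 0.03)]:
    print(f"{material}: E_th = {cm_threshold_in_eg(ratio):.2f} E_g")
```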
An alternative approach to the modelling of CM was adopted by Luo et al. [66], in particular for the purposes of surveying a wide range of materials based on their bulk electronic parameters to compare thresholds and efficiencies in QD forms. Rather than adopting the coherent superposition of states or virtual exciton mediated mechanisms, they simply considered the same impact ionization mechanism found in bulk materials, finding it sufficient to account for many experimentally observed measurements of thresholds and efficiencies, although phonon relaxation and the influence of surface states are not explicitly covered. A key figure of merit for CM, R_2(E), was deemed to be based on the ratio of the Coulombically coupled biexciton density of states and the exciton density of states. As such it represents the extent to which the CM process is favoured over the inverse process, Auger recombination. Figure 8a shows a comparison of R_2(E) values for a number of materials, and interestingly CdSe and PbSe are shown to be relatively similar. Figure 8b,c shows the calculated threshold data both in absolute energy units and also in terms of QD bandgap normalized values.

Figure 8. (a) The CM figure of merit R_2(E) for different nanocrystals of size 6 × 6 × 6 unit cells; (b,c) the DCM critical energy E_0, i.e., the photon energy at which R_2(E) = 1, shown as a function of nanocrystal band gap. In (b) E_0 is shown in absolute units (eV), while in (c) it is normalized as E_0/ε_g^dot. For each material three points corresponding to three sizes are shown. Part (b) shows that as the dot size increases E_0 decreases, whereas part (c) shows that the normalized E_0/ε_g^dot sometimes increases (e.g., Si) and sometimes decreases (e.g., InAs) as the dot size increases. Adapted with permission from ref. [66]. Copyright (2008) American Chemical Society.

The competition between CM and carrier cooling processes was addressed by Beard et al. [11], who used a coupled rate equation approach to model a cascaded set of relaxation processes such as depicted in Figure 9a. For the cooling rates, k_cool, they employed a parameterization due to Ridley [67] for bulk materials, which is linked to the rate for each electron hole pair multiplication (EHPM) cascade step, k_EHPM, through two adjustable parameters, P and s. This then allowed the authors to calculate the net CM rate for given values of P and s using a Monte Carlo method (see Figure 9b).
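The logic of such a cascaded competition can be sketched compactly. The snippet below is not Beard et al.'s actual parameterization: it assumes a single effective branching ratio k_EHPM/(k_EHPM + k_cool) at each cascade step, lets each multiplication consume one E_g of excess energy, and simply averages the exciton count over Monte Carlo trials.

```python
import random

def simulate_qy(photon_ev, e_gap_ev, k_ehpm, k_cool, trials=100_000):
    """Monte Carlo sketch of the exciton quantum yield from a cascaded
    competition between multiplication (k_ehpm) and cooling (k_cool).
    Each multiplication consumes one E_g of excess energy; a cooling
    'win' at any step ends the cascade."""
    p_multiply = k_ehpm / (k_ehpm + k_cool)  # branching ratio per step
    total = 0
    for _ in range(trials):
        n_excitons = 1
        excess = photon_ev - e_gap_ev
        while excess >= e_gap_ev:
            if random.random() < p_multiply:
                n_excitons += 1
                excess -= e_gap_ev
            else:
                break  # the carrier cooled below threshold first
        total += n_excitons
    return total / trials

# Illustrative rates only: equal rates give a 50% branch at each step,
# so excitation at 3 E_g yields QY ~ 1.75.
print(simulate_qy(photon_ev=3.0, e_gap_ev=1.0, k_ehpm=1.0, k_cool=1.0))
```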
Beard et al. [11] also compared experimental QD and bulk CM QY data, each in terms of bandgap normalized photon energy, showing that in this format QDs have lower thresholds than their bulk counterparts, and steeper QY vs. normalized energy gradients above the respective thresholds.

Figure 9. (a) Schematic of the cascaded relaxation model of ref. [11]. A high-energy photon creates an exciton with excess energy, n_1*. The hot exciton can lose energy by cooling, to form n_1, or by multiplication, to form a hot biexciton, n_2*, and so on. (b) Calculated exciton QYs for different values of P; η_EHPM is shown at each P value. Reprinted with permission from ref. [11]. Copyright (2010) American Chemical Society.

Tight binding methods have also been used as the basis for theoretical models of CM. Tight binding is used to determine the band structure, and then transition rates (W) for processes such as impact ionization can be calculated using the Fermi golden rule expression for each of the transitions allowed between the exciton and multiexciton manifolds:

W = (2π/ħ) |V_if|² ρ_f(E_i).

Here ρ_f(E_i) is the density of final (multiexciton) states at the energy of the initial state, and V_if is the transition matrix element of the Coulomb interaction between initial and final states.
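For orientation, the golden rule rate is trivial to evaluate once a matrix element and a final density of states are assumed; the numbers below are placeholders rather than values taken from any of the cited studies.

```python
from math import pi

HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def golden_rule_rate(v_if_ev, dos_per_ev):
    """Fermi golden rule: W = (2*pi/hbar) * |V_if|^2 * rho_f(E_i), in 1/s."""
    return (2.0 * pi / HBAR_EV_S) * v_if_ev**2 * dos_per_ev

# Placeholder inputs: a 1 meV Coulomb matrix element and 10^3 final
# (multiexciton) states per eV give a transition time of ~100 fs.
w = golden_rule_rate(v_if_ev=1e-3, dos_per_ev=1e3)
print(f"W = {w:.2e} s^-1, i.e. {1e15 / w:.0f} fs per transition")
```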
Allan and Delerue, together with others, have applied this approach both to the calculation of IR absorption spectra of HgTe QDs [68-70] (along with experimental comparisons) and also to the calculation of CM rates in a wide range of QD materials such as PbS [71], PbSe [71-73], InAs [72], Si [72], Sn [74] and HgTe [69]. In these studies Allan and Delerue found that impact ionization was a sufficient mechanism to explain all but the highest CM QYs, but for the very high exciton multiplicities initially reported for PbSe and InAs they found that an alternative or additional CM channel might be required (e.g., [71]). In their study of InAs, Si and PbSe QDs [72] they compared both impact ionization and multiexciton/exciton superposition of states models, but still found that very high multiplication factors (>×5) were hard to reconcile with the very low densities of final multiexciton states at such high excitation energies. However, the revisions of experimental data along the lines discussed earlier (Section 2) have probably helped somewhat to bring theory and experiment more into line once more [73]. Atomistic semiempirical pseudopotential methods have also been used to calculate densities of states, followed by transition rate evaluation by applying the Fermi golden rule. Rabani and Baer [75,76] used this approach, with the inclusion of phonon interactions, to model CM rates, again in Si, InAs and CdSe QDs of various sizes with either pseudohydrogen atom passivation or assuming ligand passivation on surface metal atoms. The InAs and CdSe QDs were close to stoichiometric in composition, which may be a little unrealistic for larger colloidal QDs that are probably slightly more metal rich in practice [77], where ligands that bind to metal atoms are used in the syntheses. Their CM mechanism involved the decay of either a hot hole or hot electron to form respectively a positively or negatively charged trion (with the counter charge as a spectator), hence their need to compute the corresponding trion densities of states. With lower trion densities of states in InAs, CM is predicted to be weaker there than in Si and CdSe. Small QDs were shown to have greater CM rates (on an E_g normalized energy scale) than larger dots. The electronic band structure of InSb QDs has also been explored by Sills et al. [78]. Here the relaxation of the biexciton state is known to be extraordinarily fast [79] (6 ps-16 ps rather than several tens of ps), prompting debate over whether this is due to Auger recombination or some other process. One feature of the Sills et al. study [78] is that although they again used pseudohydrogen passivation, they also constructed and compared QDs that were either metal rich or pnictogen rich, and saw differences in the behaviour of the conduction band variation with QD size according to the stoichiometry. The three principal CM mechanisms (impact ionization, coherent superposition of exciton and multiexciton states, and direct multiexciton formation via a virtual exciton state) have been compared by Neukirch and Prezhdo [46] alongside their ab initio modelling methods for small PbSe [43], CdSe, Si and Ge QDs. Such methods are atomistic in nature and may include descriptions of surface ligands (though often simple pseudohydrogen passivation is chosen), defects and dopants, and the inclusion of carrier-phonon interactions.
Limitations still include the size of the system that can be handled (typically a few up to 70 or so metal atoms), and often studies are limited to (totally or nearly) stoichiometric compositions for binary QDs. Theoretical models and CM mechanisms are also comprehensively described in the recent Chemical Reviews article by Pietryga et al. [3].

Size Effects

The size dependence of biexciton recombination (AR) in QDs has long been established to show an inverse volume scaling [80]. The CM lifetime, with CM being considered as the inverse of the AR process, is expected to show the same size dependence. However, as different final densities of states are involved in the two processes, CM lifetimes are proportionately faster (sub-ps rather than tens of ps). It is difficult to directly measure CM lifetimes, partly due to the short experimental timescale, but also due to the fact that CM will always be in competition with other cooling processes that obscure the measurement. The size dependence of the intraband cooling process is known [47] to show a rapid increase with decreasing QD diameter, which should increase (scaled) threshold energies and reduce the CM efficiency above threshold for a given CM lifetime. The overall balance between the two competing processes will determine the size dependence of the CM threshold and efficiency.
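The inverse volume scaling underlying this discussion is easily made concrete; in the sketch below the 4 nm reference lifetime is an arbitrary placeholder rather than a measured value.

```python
# Biexciton Auger lifetime scaling with QD volume: tau_XX ~ V ~ d^3.
# The 50 ps lifetime assigned to a 4 nm dot is an arbitrary placeholder.

def auger_lifetime_ps(diameter_nm, tau_ref_ps=50.0, d_ref_nm=4.0):
    """Scale a reference biexciton lifetime by the volume ratio."""
    return tau_ref_ps * (diameter_nm / d_ref_nm) ** 3

for d in (2.0, 4.0, 6.0, 8.0):
    print(f"d = {d:.0f} nm: tau_XX ~ {auger_lifetime_ps(d):.0f} ps")
```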
For PbS, PbSe and PbS_xSe_{1-x} alloys, Midgett et al. [81] report different dependences of the CM figure of merit vs. QD radius that correlate with the degree of confinement for each material. In PbSe QDs the confinement was the strongest, and very little size dependence was observed over the range of sizes studied. For PbS and the alloy QDs the size dependence was more marked, with a linear trend vs. QD size. All three sets of data were reconciled to a single linear curve when presented as a function of the particle diameter normalized by a critical QD size, a_c, defined as the size at which the electron-hole interaction energy and the confinement energies were equal [3,81] (Figure 10).

Figure 10. CM processes may occur within the time window when the energy of the carrier remains above the energetic threshold for CM (E_th), which is determined by the excess energy of the excitation (E_exc) and the rate of non-CM relaxation (k_cool). Reprinted with permission from ref. [3]. Copyright (2016) American Chemical Society.

El-Ballouli et al. [82] presented their data on CM in PbS QDs in a different format, showing the CM QYs vs. QD bandgap energy, where each different sized QD was excited at 3 E_g in each case. The study was similar to earlier work on CM and intraband carrier relaxation rates in PbS QDs by Nootz et al. [83], with cooling rates ranging from approximately 0.06 eV/ps to 2 eV/ps over a range of QD sizes (1.4 nm-4.5 nm). Comparison with similar literature data for PbSe QDs [47] showed a very similar cooling rate trend. Schaller et al. [47] pointed out that the similar hole and electron effective masses in lead chalcogenide QDs rule out competing Auger cooling via the transfer of excess energy from electrons to holes. This would mean that cooling measurements should only be sensitive to phonon mediated relaxation channels. Contrastingly, in QDs with dissimilar carrier effective masses this is not the case. Cooling rates in CdSe, for example, are slightly higher than for PbSe [84], though the size dependence does follow a similar but displaced trend. By way of comparison, hot carrier cooling rates have also been measured in Si QDs by Bergren et al. [85] using terahertz transient spectroscopy methods, where hot carrier lifetimes in the range of 500 fs-900 fs were observed for QDs in the 1 nm-4 nm radius range.

The perils of including photocharging artefacts in CM measurements are nowadays well understood, and can usually be obviated by using stirred or flowing samples for solution measurements, or corrected for following, for example, Nootz et al.'s approach [83]. Padilha et al. [86] pointed out in their study of (intentional) photocharging in QDs that the latter process is itself size dependent, and so may complicate the apparent CM size dependence if not properly factored out. They reported that photocharging in PbSe QDs had an onset between 2.5 E_g and 3 E_g (similar to many reported CM thresholds for this material) and may itself be connected with the presence of multiexcitons (from CM or multiple photon absorption) driving an Auger ionization process. The size dependence of photocharging may therefore mask the underlying CM size variation if not properly suppressed or accounted for.

Shape and Dimensionality Effects

It was recognized quite early that elongated nanoparticles such as nanorods and nanowires would differ from QDs in terms of both the strength of carrier-carrier interactions (e.g., exciton binding strengths) and potentially carrier cooling rates [52]. Yu et al. [52] found faster relaxation in thin CdSe nanorods than in thicker ones of comparable length, pointing to the importance of the nanorod aspect ratio. Although their study did not explore CM, they concluded that Auger cooling via electron-hole interactions could be playing a strong role in the cooling process, rather than LO phonon emission, as there would have been a greater density of states in the thin nanorod case compared with the thicker nanorods. In CdSe the electron and hole effective masses differ, so electrons may efficiently cool by transferring their excess energy to holes. However, if the latter were the dominant cooling process, it should be sensitive to the removal of holes by the addition of hole scavengers, etc. [87]. Whilst hole removal did affect the cooling rate, cooling could not be prevented completely, pointing to other cooling channels operating in parallel with electron-hole scattering.
In materials such as PbSe, the electron and hole effective masses are almost equal, resulting in little Auger cooling benefit from electron-hole scattering. Bartnik et al. [88] used a multiband model to calculate electronic structures in PbSe nanorods and nanowires and predicted a strong enhancement in Coulomb interaction strengths that should enhance both Auger non-radiative recombination and, conversely, CM rates. A large part of the enhancement was found to arise from the strong dielectric contrast between the semiconductor and the surrounding medium, which is not significantly screened as in the case of spherical QDs [18]. The dielectric constant of semiconductor materials such as PbSe can be an order of magnitude greater than that of surrounding media such as organic solvents, ligands, etc. Padilha et al. [21,41,89] reported a decrease in the Auger recombination rate in PbSe nanorods rather than an increase, but yet saw that the CM rate was higher in nanorods. Moreover, they saw an optimum aspect ratio (length:width between 6:1 and 7:1) beyond which the CM enhancement declined, eventually reaching a point where the nanorods performed more poorly than equivalent QDs (Figure 11). The initial contradiction between Auger recombination and CM rates was attributed to the very strong exciton binding in 1-D structures, which changes the nature of the Auger recombination process from a three particle interaction (i.e., charged trions) in QDs to a two particle (neutral exciton-exciton) process in nanorods [21,89]. At high aspect ratios, the decline in CM was attributed to the resumption of bulk-like momentum conservation constraints.

Figure 11. Comparisons of PbSe nanorod and QD CM performance: (a) QY threshold curves reported by Padilha et al. [21], with the inset showing evidence of an optimum aspect ratio for 3.8 nm diameter nanorods; (b) a similar comparison from Cunningham et al. [90] showing different CM onsets when comparing nanorod performance to literature QD data [11]. (a) Reprinted with permission from ref. [21]. Copyright (2013) American Chemical Society. (b) Reprinted with permission from ref. [90]. Copyright (2011) American Chemical Society.
An effective mass model has been used by Sills and Califano [91] to mount a comparison of the CM figures of merit vs. aspect ratio for a wide range of semiconductors (GaAs, GaSb, InAs, InP, InSb, CdSe, Ge, Si and PbSe). A simple rectangular cross-section rod was assumed, and the influence of shape factors on the electronic structure (DOS) was investigated without including the effects of varying Coulomb interactions and surface effects. Whilst the initial drop in bandgap-normalized CM thresholds with increasing aspect ratio was reproduced, at large aspect ratios the materials each tended towards thresholds corresponding to energy conservation alone (i.e., E_th = 2 E_g), whereas the neglected strong Coulomb interactions will probably modify this behaviour somewhat (Figure 12).
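In the same DOS-only spirit, the crudest illustration of how elongation densifies the level structure is a hard-wall rectangular box; the dimensions below are arbitrary, and no Coulomb or surface terms are included.

```python
from itertools import product

def box_levels(lx, ly, lz, n_max=10):
    """Single-particle levels of a hard-wall rectangular box, in units
    where hbar^2 * pi^2 / (2m) = 1; a crude stand-in for a nanorod."""
    return sorted((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2
                  for nx, ny, nz in product(range(1, n_max + 1), repeat=3))

dot = box_levels(4, 4, 4)    # aspect ratio 1 ("dot")
rod = box_levels(4, 4, 24)   # aspect ratio 6 ("rod")

# Count levels lying within twice the ground-state energy of each shape:
# elongation packs more states into the same relative energy window.
for name, levels in (("dot", dot), ("rod", rod)):
    print(name, sum(1 for e in levels if e <= 2 * levels[0]))
```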
The observation about the sensitivity of nanorods to the surrounding dielectric constant also applies to thin 2D nanosheet materials (nanoplatelets) [18]. Compared with QDs and nanorods, the Coulomb interaction strength in nanoplatelets should be higher, whilst the electronic density of states should be higher than that in nanorods [92]. However, as in longer nanorods, 2D structures that are thicker and more extended in their lateral two dimensions will tend towards the bulk regarding momentum conservation requirements for CM [3]. A number of groups have synthesized such materials, both by direct methods [93] and by subsequent ion exchange [94-96], and a few have investigated their Auger recombination rates, energy transfer and CM QYs. Aerts et al. [97] synthesized PbS nanosheets (with micron lateral dimensions, but thicknesses of a few nm) and, for the thinnest sheets (4 nm), observed a lowering of the normalized CM threshold relative to the bulk material, an increase in the CM efficiency, and a thickness dependent bandgap blue shift due to the layer confinement (Figure 13).

Figure 13. Quantum yield plotted vs. the band gap multiple hν/E_g for PbS nanosheets with thicknesses, d, as indicated. In addition, literature data on quantum yields obtained in bulk PbS are shown [99]. The slope of the data points is equal to the CM efficiency η_cm (solid lines). Adapted by permission from Macmillan Publishers Ltd.: Nature Communications ref. [97], copyright (2014).

In CdSe and CdSe/CdS/ZnS core/shell nanoplatelets, biexciton Auger decay rates of 0.07 ns^{-1} and 0.12 ns^{-1} were measured by Kunneman et al. [22] using TRPL techniques. Again, the fact that the Auger rate is about an order of magnitude slower than for QDs or nanorods of comparable volume is attributed to the necessity for some degree of adherence to momentum conservation rules in nanoplatelets. The slower Auger recombination was exploited by Rowland et al. [98], who used much faster Förster resonant energy transfer (FRET, transfer time 6 ps-23 ps) to drive the transfer of excitations between nanoplatelets of two differing sizes, optimized so that one was spectrally positioned to act as a donor and the other as the acceptor for FRET. PbSe_{1-x}S_x nanoplatelets derived from CdSe_{1-x}S_x nanocrystals by interparticle attachment followed by cation exchange have also been observed to show long (70 ps to 80 ps) decay times, and shown to offer advantages in photoelectrochemical hydrogen generation [94].

Heterostructures

A number of different QD heterostructures have been explored for possible enhancement of CM performance. Several early studies used type I core/shells [64,100] (e.g., CdSe/ZnS), though the benefit of this type of band alignment for the CM rate or for carrier cooling rates is not thought to be so significant. Indeed, the Auger recombination rate in type I QDs is believed to scale with the volume of the whole structure, just as for core-only QDs [101,102]. There may be some minor benefit from improved surface passivation and therefore reduced carrier trapping at the surface [103]; even recently, Singhal et al. [104] compared CdSe and CdSe/ZnS where the ZnS shell was still sufficiently thin (2.5 monolayers) to permit competitive hot hole extraction (with transfer times of 5 ps and 20 ps respectively). In that case the anticipated application was in QD doped solar cell devices, where the type I shell would improve photostability.
Type II and quasi type II structures, where the electron can range over both core and shell whilst the hole remains localized in the core, or the converse case where the electron is confined to the core whilst the hole can move throughout the structure, have proved to be of far greater interest for CM. Pandey and Guyot-Sionnest [102] first used type II structures with further additional layers to allow holes to be localized in, and even trapped at thiol ligand sites on, a thin outer CdSe shell (CdSe/ZnS/ZnSe/CdSe/thiol). Confinement of the electron in the core, without access to a hole to transfer excess energy to, resulted in cooling times being extended from under 6 ps to over 1 ns. Similar frustration of the cooling process has been seen in PbSe/CdSe core/shells with thick CdSe shells [3,105,106]. In this case it is the holes that are strongly confined in the small core, with a concomitant increase in the core valence level spacings. In addition, the overlap between the lower energy core and higher energy shell valence levels is reduced, further slowing hole relaxation. The presence of the CdSe shell brings a benefit that helps to offset the very similar hole and electron effective masses that lead to the high CM threshold in lead chalcogenide QDs. Absorption at higher energies is dominated more by the CdSe shell, leading to a non-equal partition of excess energy between the two photogenerated carriers. Excitation near the shell band edge therefore bestows far more excess energy on the holes, which is said to lead to more efficient CM, i.e., a fourfold increase in CM efficiency (relative to equivalent sized core-only PbSe QDs) and a reduction in threshold energy to close to 2 E_g [106]. In these core/shells, weak visible emission at close to twice the IR band edge emission energy is also visible, showing evidence of the preservation of hot holes long enough for detectable hot carrier recombination to occur.
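In a homogeneous QD, the splitting of photon excess energy between the carriers already follows the simple two-band effective-mass rule, with each carrier taking a share inversely proportional to its effective mass; heterostructure absorption profiles, as described above, can skew this baseline further. A minimal sketch with an illustrative CdSe-like mass ratio:

```python
def excess_energy_split(photon_ev, e_gap_ev, me_over_mh):
    """Partition of photon excess energy between electron and hole for
    parabolic two-band absorption: each carrier's share is inversely
    proportional to its effective mass."""
    excess = photon_ev - e_gap_ev
    electron_share = 1.0 / (1.0 + me_over_mh)  # = m_h / (m_e + m_h)
    return excess * electron_share, excess * (1.0 - electron_share)

# Illustrative: a CdSe-like mass ratio, excited at 3 E_g with E_g = 1 eV.
de, dh = excess_energy_split(photon_ev=3.0, e_gap_ev=1.0, me_over_mh=0.17)
print(f"electron: {de:.2f} eV, hole: {dh:.2f} eV of excess energy")
```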
A similar hole-in-core confinement scheme was used by Gachet et al. [107] using type II structures based on CdTe/CdSe core/shells (with additional shell layers of CdS and ZnS). At low excitation energies, below either the CdTe or CdSe band edges, weak CM was tentatively identified, which was interpreted as arising, at least in part, from CM via a spatially indirect process across the core-shell interface. InP/CdS quasi type II core/shells have also been investigated by several groups [20,103]. Smith et al. [103] grew 3.9 nm diameter InP cores coated with 0.7 nm thick CdS shells, with both core and shell in cubic phases. CM QYs (122% at 3 E_g) were similar to those for type I InP/ZnS/ZnO QDs [108], and biexciton lifetimes were measured as 50 ps, broadly similar to those determined by Dennis et al. [20] for the thinner shell samples in a series with thicknesses ranging from 1 monolayer up to 11.4 monolayers of CdS. For their thicker shell materials, the biexciton lifetimes (determined from time resolved PL measurements) increased to 7.2 ns ± 1.1 ns, showing a dramatic reduction in Auger recombination and a similar abatement of QD emission blinking. However, the authors noted that the ratio of the single exciton (702 ns) to biexciton lifetimes is still far higher than the theoretical ideal case of 1:2 to 1:4, indicating that whilst Auger recombination is drastically suppressed, it is still not completely absent as a non-radiative recombination channel. Dennis et al. mentioned that their core/shells have InP in its cubic phase but that the overgrown CdS shells adopt the room temperature stable hexagonal phase, with less than 1% lattice mismatch at the core-shell boundary.

Surface Effects

It has long been recognized that QD surfaces can substantially modify carrier dynamics.
Imperfect surfaces may provide lattice defects that act as traps for either electrons or holes, or generate additional phonon modes that may couple to the carriers [102]. Even a perfectly formed surface must act as the interface between the interior lattice and surface ligands, bringing the possibility of coupling to the molecular vibrations of the latter; this is often very evident in low bandgap IR QDs [109-111]. In regard to CM, the surface can influence the multiplication process in two ways. First, the carrier cooling processes that compete with CM for hot carriers can be modified, as shown by Pandey and Guyot-Sionnest [102], where photogenerated holes were intentionally localized on the surface ligands of CdSe/ZnS/ZnSe/CdSe heterostructures, leaving electrons to cool much more slowly than normal without being able to transfer energy to the holes via Auger cooling channels. The authors also noted major differences in TA signal decay times between samples terminated with alkane thiols or amines and those with ligands such as phosphonic or carboxylic acids. The former, having weaker and less extensive mid-IR spectra, provide less scope for surface coupling and energy loss to molecular vibrations than the latter. This exciton-ligand interaction then furnishes a second cooling mechanism, distinct from regular Fröhlich type carrier-phonon relaxation. The other way in which CM can potentially be affected by the QD surface is by modification of the CM rate itself, for example by the opening of additional impact ionization channels via defect states appearing in the gap [112]. The influence of the oxidation of PbS QDs to form a surface layer of PbSO_x [113,114], and its impact on CM yields, thresholds and carrier extraction efficiencies, was investigated by Hardman et al. [115]. They found evidence of a reduction in CM efficiencies and an increase in threshold energies, along with a reduction in the carrier injection efficiency, that correlated with the degree of surface oxidation, probed by XPS characterization of the surface species. We have already mentioned above that photocharging can masquerade as MEG and Auger recombination signals [116], and in many cases the degree of photocharging and its kinetics can be linked to differences in surface states and surface treatments between different samples [19,86]. Midgett et al. [81] inferred a size dependent hot carrier cooling rate, in addition to a size dependent CM rate, in their studies on PbS, PbSe and PbS_xSe_{1-x} alloy QDs, which they suggested may be tied to variations in stoichiometry with size, imperfect surfaces, or changes in ligand coupling modifying the competing cooling channels. Spoor et al. [54] also note the potential impact of the choice of ligands and exciton-molecular vibration coupling on the cooling rate in their detailed study of the hole and electron cooling spectrum in PbSe QDs (see Figure 5). Near the band edge in particular, a cooling mechanism via coupling of both electrons and holes to surface ligand vibrations, or to surface phonon modes of the QD itself, is a necessary conclusion. As such, the choice of ligand (through its IR overtone and combination band spectrum) will have a significant effect on the cooling rate, and therefore on the competition between cooling and CM. CM and carrier dynamics have also been studied in Ag_2S, CuInS_2 and CuInS_2/ZnS core/shell QDs by Sun et al. [117,118].
Here the radiative recombination mechanism is slightly different from that in the II-VI, III-V and IV-VI QDs, in that it is associated with internal defect states, e.g., Cu vacancies in CuInS_2. Carrier dynamics are very sensitive to the surface, where charges (holes in particular) may become strongly localized (or trapped), and the polarity of the surrounding medium can also exert a large effect. In CM measurements on Ag_2S QDs, Sun et al. [117] reported that two different types of Auger processes were present, one involving relaxation of tightly bound excitons and the other weakly bound excitons. The proportion of each type of recombination was found to be sensitive to the polarity of the solvent the QDs were dispersed in: lower polarity favoured the weakly bound excitons, where the hole was determined to be localised near the QD surface, whilst QDs dispersed in higher polarity solvents had a more equal proportion of both types of exciton. The CM threshold was 2.28 E_g, with an efficiency of 173% measured at 3.2 E_g. Stolle et al. [50] reported a CM threshold of 2.4 E_g and an efficiency of 36% per unit of E_g above the CM onset in CuInSe_2 QDs. Sub-CM-threshold excited carrier cooling rates were in the 1 eV/ps range and exhibited a QD volume dependent scaling.

Doping/Photodoping Effects

QDs may be doped in order to selectively alter their carrier mobilities or to modify their electronic and optical properties. They may be manipulated by impurity doping [119,120], just as in bulk semiconductors, to be n- or p-type, or they may be electrochemically doped by a combination of surface treatments [121] and charge injection via electrodes [122] or contact with electrolytes in electrochemical cells [123,124]. Another strategy is so-called photodoping, where partial population of a lower lying excited state is effected by optical excitation in advance of an event such as a pump and probe pulse during TA measurements, etc. In all cases, the normally vacant excited states of the QD are partially filled. This can be useful in solid state QD solar cells, where p- and n-type layers (with one or both being doped QDs) can lead to the formation of heterojunctions with a built-in field to drive charge separation and transport towards the electrodes. Doped QDs can be used to open up normally inaccessible intraband transitions [125] with energy level spacings that can be used to extend the IR range of QD photodetectors [126], for example. In QD lasers, partial filling of the degenerate upper lasing level can reduce the excitation threshold for laser action to occur [123]. Given the interest in doped QDs for solar cells in particular, it is of interest to know what effect doping may have upon CM performance, i.e., does one compromise the other? Two possibilities might arise: additional intraband transitions from the populated band edge states could open up further channels for CM; or, where the doping populates the band edge state either fully or one carrier short of fully, transitions to biexciton states in that level can become blocked according to the Pauli exclusion principle (Pauli blocking) [46,123]. In the CM context, multiplication would then be blocked until much higher excess energies, where higher levels could be filled by carrier fission.
Pijpers et al. [30] attempted to show evidence of just this blocking mechanism in relation to CM in InAs/CdSe/ZnSe core/shell QDs by photodoping the 1S_e electron transitions with a leading pulse a few ns before the measurements, of sufficient fluence to ensure that by the time the measurement was made, the 1S state of the ensemble was almost completely singly populated. With a degeneracy of two for the 1S electron states, CM can lead to a triexciton state (the original leading pulse exciton plus a further two excitons from CM, one into the 1S and one into the 1P states). This would alter (increase) the CM threshold and the CM biexciton creation energy. Initially the authors reported a reduction in CM efficiency from 1.6 to 1.3; however, they were subsequently unable to repeat the CM experiments with fresh InAs/CdSe/ZnSe QD batches that were nominally identical to the previous materials. The CM part of the earlier reported work was consequently withdrawn [30], leaving the photodoping demonstration open to question.

Alloy Composition Effects

The engineering of bulk semiconductors for the enhancement of CM is limited to the formation of superlattice structures [2] or manipulation of the band structure around the bandgap by the formation of alloys. The latter has been successfully applied to Ga_{1-x}Al_xSb, InGaAs and Hg_{1-x}Cd_xTe, in particular in relation to the production of APDs [4,6]. In the zincblende Ga_{1-x}Al_xSb and Hg_{1-x}Cd_xTe cases, the composition is adjusted to bring the split-off valence band energy and the band gap (Δ and E_g respectively) into resonance to improve the impact ionization efficiency. This resonant condition is nearly true for bulk InAs without resorting to the use of alloys [4]. For direct semiconductors satisfying this condition the impact ionization process is vertical, and with no momentum transfer the threshold can approach the 2 E_g energy conservation limit. In QDs there is an additional degree of flexibility in that, as well as being able to tune the composition, the size can also be easily adjusted. This means that the resonance condition can be matched for a wider range of bandgap energies, though the tuning of the two energy level differences, Δ and E_g, is not completely independent. In QDs, the notion of a split-off valence band is superseded by more discrete valence band levels, but the spirit remains the same. The energy gap "Δ" can be determined by spectroscopic ellipsometry if good optical quality thin films of QDs can be prepared [127,128]. In a previous study of CM in Hg_{1-x}Cd_xTe QDs prepared by ion exchange [129,130], we estimated a size/composition point where a strong resonance could be expected, and then bracketed that point by preparing a series of same-sized but different composition alloys starting from a single batch of CdTe QDs. A composition sweet spot was observed where the CM QY (determined by the TG method) reached almost 200% for excitation at 2.9 E_g (Figure 14a). At this point the composition was x = 0.52, and the m_e/m_h ratio would be around 0.14, which would lead to a prediction of a low E_th, close to the energy conservation limit. Subsequent threshold excitation energy dependence measurements revealed a threshold of close to 2 E_g, with evidence of saturation between 2.5 E_g and 3 E_g (Figure 14b).
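The resonance-matching exercise amounts to finding the composition at which Δ(x) = E_g(x). The sketch below does this by bisection using crude linear placeholder functions that are not fitted Hg_{1-x}Cd_xTe data; in a real QD calculation the confinement contribution to E_g must also be folded in, which is why the sweet spot composition found experimentally need not match any bulk estimate.

```python
def find_resonance(delta_of_x, egap_of_x, lo=0.3, hi=0.9, tol=1e-6):
    """Bisection for the composition x at which delta(x) = E_g(x)."""
    f = lambda x: delta_of_x(x) - egap_of_x(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Crude illustrative placeholders (NOT fitted HgCdTe data): a slowly
# varying split-off energy against a strongly composition-dependent gap.
delta = lambda x: 0.9 + 0.1 * x   # eV
egap = lambda x: -0.3 + 1.9 * x   # eV
print(f"Delta = E_g at x ~ {find_resonance(delta, egap):.3f}")
```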
Midgett et al. [81] also observed CM in alloy QDs of PbS_xSe_{1-x}, though in the lead chalcogenides the rock salt lattice places the band extrema at four equivalent L points, so the degeneracy of the band edge states is higher than in cubic Hg_{1-x}Cd_xTe. This should allow far higher exciton multiplicities to be reached well above E_th, though no reports to date have shown evidence of a staircase-like rise in CM QY vs. excitation energy. In all cases (not just alloys) a linear data fit prevails to at least 3 E_g, if not higher.
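The two limiting characteristics referred to here are easily compared side by side: the ideal staircase, QY = floor(hν/E_g), against the linear form QY = 1 + η_CM (hν − hν_th)/E_g that actually fits the reported data; the efficiency and threshold used below are illustrative values only.

```python
from math import floor

def qy_staircase(photon_in_eg):
    """Energy-conservation ideal: one extra exciton per E_g of photon energy."""
    return max(1, floor(photon_in_eg))

def qy_linear(photon_in_eg, eth_in_eg=2.5, eta_cm=0.4):
    """Linear characteristic seen experimentally: slope eta_CM per E_g
    above the threshold (both parameter values are illustrative)."""
    return 1.0 + eta_cm * max(0.0, photon_in_eg - eth_in_eg)

for hv in (1.5, 2.5, 3.0, 4.0, 5.0):
    print(f"hv = {hv} E_g: ideal {qy_staircase(hv)}, linear {qy_linear(hv):.2f}")
```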
The possibility of several simultaneous carrier cooling mechanisms has complicated the unravelling of the carrier dynamics, and even now there remains some conjecture even for simple QD structures, particularly when it comes to the role of surface polarons and ligand molecular vibrations in dissipating the initial hot carriers' excess energies. Refinement of measurement and analysis techniques has removed some of the wide variations in the early reported CM data, allowing a clearer picture to emerge. In particular, the comparison of symmetric (in terms of the carriers' effective masses, meaning me ~ mh) QDs such as the lead chalcogenides with asymmetric QDs (me << mh) such as CdSe and InAs allowed the role of Auger cooling (electron to hole energy transfer) in the latter to be fully appreciated. The observation of an engineered phonon bottleneck in CdSe/ZnSe core/shell QDs further supported the development of this thinking. Detailed carrier cooling studies in the lead chalcogenides have highlighted the influence of surface phonons/ligand vibrations in phonon mediated cooling processes, which seem to outpace the classic bulk LO and acoustic phonon mechanisms, and which could explain the cooling rates seen experimentally [54]. The further development of CM has and will continue to focus not so much on the multiplication mechanism itself (though several such mechanisms are postulated) as on minimization of the net competing cooling rate. Once multiplication has been given the chance to occur, the resulting multiexciton must then resist non-radiative recombination, particularly via Auger recombination, this being effectively the inverse of the CM process. Here again, the use of heterostructures, or higher dimensionalities such as in nanorods and nanoplatelets, has been shown to enhance biexciton and multiexciton survival. For applications such as solar cells and photodetectors, which rely on the extraction of hot carriers or, in the case of CM, of the enhanced numbers of carriers, there are certain time windows for that extraction. If hot carriers are to be extracted [17], the charges must be removed before cooling, in which case CM and any kind of Auger recombination cannot occur. CM is a fast, tens to hundreds of fs, process whilst cooling overall can often range from sub-ps to a few ps in core-only QDs. Thus, for hot carrier extraction it is probably better to select materials which do not themselves show efficient CM. For multiexciton extraction following CM, the removal of charges must come after the hot exciton fission but before multiexciton recombination, and in some of the materials seen to date this window can be extended into the few ns regime, which is a less difficult prospect than for hot carrier extraction. Whilst we have not covered the extraction or transfer of charge to adjacent acceptors in this review, it is a very active field of research and there are many reviews [132] and recent research papers [133] on this subject. Of course, the efficient transport of the carriers through the QD or QD/host matrix film once they have been carefully harvested is an equally, if not arguably more, important subject which also continues to be investigated [134][135][136][137]. To make an appreciable impact on QD solar cell power conversion efficiencies, overall CM still needs to improve to be much closer to the ideal limit to be useful [14].
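As a rough consistency check on the extraction windows just outlined, the short sketch below encodes the order-of-magnitude timescales quoted in the text (representative values only, not measurements from any specific material):

```python
# Representative timescales (seconds), taken loosely from the text above.
CM_TIME = 100e-15            # carrier multiplication: tens to hundreds of fs
COOLING_TIME = 1e-12         # overall cooling: sub-ps to a few ps (core-only QDs)
MULTIEXCITON_LIFETIME = 1e-9 # engineered structures: extended to the ns regime

def hot_carrier_window(extraction_time):
    """Hot carriers must be removed before they cool."""
    return extraction_time < COOLING_TIME

def cm_harvest_window(extraction_time):
    """CM multiexcitons must be split apart after fission but before
    (Auger) multiexciton recombination."""
    return CM_TIME < extraction_time < MULTIEXCITON_LIFETIME

print(hot_carrier_window(0.5e-12))  # True: 0.5 ps extraction beats ~1 ps cooling
print(cm_harvest_window(100e-12))   # True: 100 ps sits inside the CM window
```

The asymmetry of the two windows is the practical point: the few-ns multiexciton window is a far less demanding target than sub-ps hot carrier extraction.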
QD solar cells without CM still have power conversion efficiencies around the 10-12% mark [138,139], far below the Shockley-Queisser limit [12], so clearly as a field we must continue to address the basics such as carrier extraction and transport. Some examples of CM enhancement of device photocurrent exist [15], but the improvements are still relatively marginal and require careful measurement methods to unambiguously allow them to be attributed to CM. Calculations have shown [13] that simply being able to observe CM at some level in a QD solution does not necessarily translate into a quick fix for QD solar cell efficiencies. A lot more development to bring CM up to the ideal in terms of a near energy conservation limited threshold and much higher CM efficiencies (i.e., following the staircase-like characteristic) is still required. The newly emerging field of lead halide perovskite nanocrystals and nanoplatelets [140] offers some potentially interesting prospects for future exploration of hot carrier extraction and CM. The occurrence of traps within the bandgap is known to be uncommon, or at least limited to very shallow trap states, so such nanoparticles are termed defect tolerant [141][142][143]. With traps occurring within rather than between the conduction and valence bands, the exciton QYs can be very high (>90%) even without the use of core-shell passivation. This suggests that photocharging may be somewhat reduced in these materials as well. Biexciton lifetimes of 90 ps in perovskite nanocrystals have been observed by Hu et al. [144]. Interestingly, a hot phonon bottleneck has already been seen in thin films of bulk perovskites [145], and has been observed to be far more effective at slowing carrier cooling than in some regular epitaxial semiconductors such as GaAs [146]. The origin of the slow, hot hole cooling has been explained theoretically as originating from the relatively sparse density of phonon states in the valence band [147]. Li et al. [148] have already demonstrated that this slow cooling translates into colloidal perovskite nanocrystals, with cooling rates up to two orders of magnitude slower than for films: cooling times of up to 20 ps were seen at low fluence, whilst at higher fluences an Auger heating process contributed to extending this to 27 ps. This allowed the authors to demonstrate efficient (83%) hot carrier extraction in under 1 ps using an organic electron acceptor molecule. Already, several groups have shown that core-shell structures, although less straightforward than in II-VI QDs, etc., and so far with a more restricted choice of band offsets, can nonetheless still be grown [149,150]. The growth of 2D perovskite nanoplatelets has also been demonstrated [140,151], so there is already a toolkit of structural options that can be explored to further manipulate carrier cooling rates to favour CM. However, for CM tailored to solar applications the perovskite bandgap would need to be best positioned just into the near IR (e.g., 1.2-1.4 eV). This is still a slight problem for many perovskite NCs, where even red emitting materials tend to be metastable. Protesescu et al. [152] have recently broken through this so-called 'red wall' by using formamidinium-cesium lead iodide to form more stable 780 nm emitting materials, offering a further step towards solar cell applications for perovskite nanocrystals.
To date there have been a few attempts at the demonstration of CM in perovskite nanocrystals; however, despite the large exciton binding energies which should favour high CM rates [153], multiplication has not been observed up to 2.65 Eg, whilst the biexciton and trion Auger recombination rates have been faster than in their II-VI counterparts [154]. Nevertheless, there is much scope for future improvements with nanostructure engineering using the lessons already learned from II-VI, IV-VI and III-V nanoparticles.
The Structure of Matter in Spacetime from the Substructure of Time

The nature of the change in perspective that accompanies the proposal of a unified physical theory deriving from the single dimension of time is elaborated. On expressing a temporal interval in a multi-dimensional form, via a direct arithmetic decomposition, both the geometric structure of 4-dimensional spacetime and the physical structure of matter in spacetime can be derived from the substructure of time. While reviewing this construction, here we emphasise how the new conceptual picture differs from the more typical viewpoint in theoretical physics of accounting for the properties of matter by first postulating entities on top of a given spacetime background or by geometrically augmenting 4-dimensional spacetime itself. With reference to historical and philosophical sources we argue that the proposed perspective, centred on the possible arithmetic forms of time, provides an account for how the mathematical structures of the theory can relate directly to the physical structures of the empirical world.

Introduction

Building upon the initial proposal of Kaluza [1] and Klein [2] there are many ways in which a theory can be constructed over 4-dimensional spacetime by utilising extra spatial dimensions. The options include for example the number of extra dimensions, the properties of the overall geometric structure and the manner in which physics in the familiar four dimensions is extracted. This is reflected in the extensive literature on the subject, as reviewed for example in [3,4,5,6] and the references therein. However, despite the many approaches, the geometric structures of extra spatial dimensions do not readily lead to known empirical properties of the Standard Model of particle physics without some difficulty in contriving such features (see for example [7,8]). In particular the origin of the distinctive symmetry patterns of Standard Model particle multiplets remains a puzzle. In this paper we describe how a physical theory can be derived from the seemingly counter-intuitive starting point of a single temporal dimension only. This can be achieved through exploiting the basic arithmetical structure of the real line, as representing progression in time. In contrast with postulating an initial higher-dimensional structure from which physics in 4-dimensional spacetime is extracted, building the theory up from the one dimension of time alone provides a well-defined and much more restricted basis for a theory. One striking feature of this new approach [9,10,11,12] is the ability to directly account for significant elements of Standard Model structure on adopting this change of perspective. In the following section we describe the contrast in the conceptual basis in more detail and explain how it is possible to construct a theory from this simple idea based on time. In section 3 the explicit development of the theory and the explanatory power that can be achieved is summarised, based on the technical details described in [9,10,11,12]. The main focus of this paper is the underlying change in perspective regarding the relationship between space, time and matter that accompanies the mathematical development and physical successes of the theory. The core arguments for this change in emphasis from extra spatial dimensions to the one dimension of time are elaborated in section 4.
The underlying motivation is further considered in section 5 in comparison with that of other physical theories and unification schemes, before reiterating the role of this paper in the context of [9,10,11,12] in the conclusions.

Space, Time, Matter; a Choice in Perspective

From an objective point of view events in the physical world take place at locations in space and time and are typically ascribed to the properties of matter, postulated with the aim of accounting for empirical phenomena observed on all scales, as sketched in figure 1(c). The 4-dimensional spacetime manifold M4, as the arena for such events at locations x ∈ M4, is pictured by itself in figure 1(a). This spacetime manifold possesses a metric tensor g(x), as signified by the rectangular frames in figure 1, defining a light cone structure on M4. With components gµν(x), for general coordinate indices µ, ν = 0, 1, 2, 3, the metric g(x) may be considered either globally or only locally equivalent to the Minkowski metric η = diag(+1, −1, −1, −1), depending upon the theoretical framework. A physical theory might then be constructed by introducing further entities upon M4 as the background 'stage', or by geometrically augmenting M4 itself with extra spatial dimensions with particular properties. Alternatively we might first ask where the mathematical structure for (M4, g) itself arises from, that is, what supports the stage itself?

Figure 1: (a) The empty stage of spacetime (M4, g), with one spatial dimension suppressed in the sketch, (b) contains a universal flow of time s which, through the change in perspective that we describe in this paper, can be considered as the mathematical progenitor of 4-dimensional spacetime itself as well as (c) the matter content on all scales from particle physics to cosmology.

As an irreducible feature of this spacetime arena, events on M4 are infused with the flow of time, in that any 'clock' at any location and on any physical trajectory within the light cone structure will 'tick' as a record of the passage of time, which can be more generally parametrised by a real number s ∈ R as depicted in figure 1(b). From a subjective point of view we also note that this passage of time accompanies any observer, as represented in the centre of figure 1(c), and any observations that we can make in the universe. The metric structure of spacetime, as incorporated into general relativity consistent with the equivalence principle, implies that at any spacetime location local inertial coordinates (x^0, x^1, x^2, x^3) may be identified such that an infinitesimal proper time interval δs between two proximate events (such as two infinitesimally separated 'ticks of a clock') can be expressed, within a local Lorentz transformation, as:

(δs)^2 = (δx^0)^2 − (δx^1)^2 − (δx^2)^2 − (δx^3)^2     (1)

Hence a one-dimensional flow of time s ∈ R can be conceived of as completely filling the 4-dimensional spacetime volume M4 in figure 1(b) while being everywhere locally expressed in the form of equation 1. This universal temporal flow is analogous to 'cosmic time' in models for the evolution of the universe. However we initially consider figure 1(b) to represent the flow of time through a flat, empty spacetime; that is, there are global coordinate systems in which the metric can be globally equated with the Minkowski metric g(x) = η = diag(+1, −1, −1, −1) implicit in equation 1 and, with no other structure identified on M4, this manifold describes the arena of special relativity.
With respect to the metric, the local Lorentz invariant δs can be interpreted as a timelike interval constructed from four local coordinate intervals in equation 1, relating 3-dimensional space (x^1, x^2, x^3) and time (x^0) in a unified 'spacetime' structure. The alternative perspective that we are proposing is to consider the extended 4-dimensional spacetime manifold M4 itself in figure 1(b) to be a possible manifestation of the one-dimensional temporal flow s ∈ R. That is, on taking s ∈ R as the primary entity in equation 1, this quadratic expression for an infinitesimal interval of time implies that time itself contains an elementary substructure and symmetry that underlies the construction of the extended spacetime manifold M4 pictured in figure 1(b). While the arithmetic substructure of time in equation 1 expresses the geometric structure of an infinitesimal local inertial coordinate frame, the Minkowski metric implicit in that equation (which is equivalent to equation 2 below) is exhibited throughout an implied extended manifold M4 ≡ R^4 through the translation symmetry of that equation over the full range of (x^0, x^1, x^2, x^3) ∈ R^4 (as described in detail in the opening of [12] subsection 4.1 and figure 1 there). Hence the globally flat manifold M4 in figure 1(b) can be interpreted as a representation of the flow of time. Since the 'equals' sign in equation 1 makes no statement about the priority of the left or right-hand side of this expression, we are free to make this alternative interpretation with δs ∈ R identically containing the arithmetic substructure of the basis for an extended 4-dimensional spacetime. In this sense 'spacetime', practically as the name might suggest, can be interpreted as an augmentation of the flow of time itself. We can obtain equation 1 by initially writing δs = δx^0 ∈ R and (δs)^2 = (δx^0)^2, and then opening up this trivial expression for time directly through the arithmetic modification (δs)^2 = (δx^0)^2 − |δx|^2, incorporating a 3-vector δx = (δx^1, δx^2, δx^3) ∈ R^3 of further real numbers, on introducing Lorentz transformations algebraically as a 'symmetry of time' leaving δs invariant. This allows the flow of time to be simultaneously manifested in a geometric spacetime form as depicted in figure 1(b), with time δs hence incorporating space through this direct arithmetic composition. In particular we can consider equation 1 to represent an infinitesimal interval of time δs ≠ 0 associated with an observer and a local inertial reference frame constructed through this 4-dimensional expression for time with Lorentz and translational symmetries. Hence here the Lorentz symmetry on the spacetime manifold M4 enters the theory as a symmetry of time that preserves the ordered causal property of time within the light cone structure on M4. With equation 1 written as (δs)^2 = (δx^0)^2 − |δx|^2, the Lorentz transformations leave δs invariant for any timelike trajectory with δs ≠ 0, with Lorentz boosts changing the ratio |δx| : δx^0 and hence the 'speed' through space. However any physical entity that propagates within this causal structure on M4 but along a null trajectory, on the light cone itself with δs = 0, will have the same |δx| : δx^0 ratio in any Lorentz frame and hence the same speed. This will be the case for example for an electromagnetic or a gravitational wave, and in particular the constancy of the speed of light for any reference frame hence follows as a consequence.
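As a short worked check of this 'symmetry of time' (a standard special-relativity computation in our notation, not an excerpt from the paper), a boost along x^1 with velocity parameter β manifestly preserves the quadratic form of equation 1:

```latex
% Boost along x^1: \delta x'^0 = \gamma(\delta x^0 - \beta\,\delta x^1),
%                  \delta x'^1 = \gamma(\delta x^1 - \beta\,\delta x^0),
% with \gamma = (1 - \beta^2)^{-1/2}. Then
\begin{align*}
(\delta x'^0)^2 - (\delta x'^1)^2
  &= \gamma^2\left[(\delta x^0 - \beta\,\delta x^1)^2
                 - (\delta x^1 - \beta\,\delta x^0)^2\right] \\
  &= \gamma^2 (1 - \beta^2)\left[(\delta x^0)^2 - (\delta x^1)^2\right]
   = (\delta x^0)^2 - (\delta x^1)^2,
\end{align*}
% so the interval of time \delta s in equation 1 is left invariant.
```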
This contrasts with the basis of special relativity, founded upon the postulate of the 'principle of relativity' - that the equations of physics, and in particular of electrodynamics as well as mechanics, should take the same form in any inertial reference frame and hence transform in a covariant manner between such frames in uniform relative motion - together with the further postulate that the speed of light in empty space should be the same in any inertial frame, independent of the motion of the light source ([13], [14] chapter 7). A departure from Galilean relativity is then required to accommodate the second postulate, with the Lorentz transformations between inertial reference frames in uniform relative motion shown to derive from these first principles in maintaining (δs)^2 = (δx^0)^2 − |δx|^2 = 0 for light propagation. With Maxwell's equations for electrodynamics then expressible in Lorentz covariant form, the apparent conflict between the theories of Newton and Maxwell was resolved. While for special relativity, founded on the constancy of the speed of light with δs = 0, space (x) and time (x^0) coordinates enter on an equal footing, mutually mixing under Lorentz transformations in a unified spacetime structure, here the founding motivation places the priority on a causally ordered fundamental flow of time (s) with δs ≠ 0 associated with the perspective of a local observer, with space intrinsically embedded within this flow via equation 1. For the full theory the local observer, as depicted in figure 1(c), may or may not be associated with a local inertial reference frame while, similarly as for general relativity, the extended Minkowski base space of special relativity, and of figure 1(b), arises in the flat spacetime limit. To develop the full theory, building upon equation 1, we can then consider what form might be taken by a further, more general, augmentation as an arithmetic expression for the flow of time. Given that we are generalising from equation 1, this can also be interpreted as broadening the notion of a local inertial coordinate frame of general relativity. The form of the local proper time interval δs can be generalised from the 4-dimensional Lorentzian form of equation 1 (for which a, b = 0, 1, 2, 3 here can be interpreted as indices for a local inertial coordinate frame):

(δs)^p = α_{ab...c} δx^a δx^b ... δx^c     (3)

as a homogeneous pth-order polynomial in n dimensions, with constant real coefficients denoted α_{ab...c} here. With component labelling of 0, ..., (n − 1) or 1, ..., n being a matter of convention, in equation 3 the sum is taken over each index a, b, c, ... = 1, ..., n for the real intervals {δx^1, ..., δx^n} ∈ R^n ([12] equations 40-42). With emphasis upon the temporal interval δs on the left-hand side of these expressions, the generalisation in equation 3 is not required to be of a quadratic spacetime form with a higher-dimensional Lorentz symmetry. In particular, cubic and higher-order homogeneous polynomial forms are also permitted. While equation 3 is hence more general than the approach of adding extra spatial dimensions, the development of the theory is more constrained in that we add nothing else beyond the structure and symmetries of explicit forms for equation 3 in directly deducing the basic structure of the physical theory. These observations mark the main contrast with the range of models based upon extra spatial dimensions alluded to in the opening of section 1. The conceptual and philosophical aspects of this shift in perspective will be considered further in section 4 as the central theme of this paper.
While equation 3 describes a possible higher-dimensional mathematical form of time, we still see the physical world in the form of the arena of the 4-dimensional manifold M4 of figure 1 with the local metric structure of equation 2. With the form of 3-dimensional space incorporated within 4-dimensional spacetime, the necessary identification of the extended structure M4 with a local Lorentz symmetry breaks the full symmetry of the higher-dimensional form in equation 3 (see for example [12] subsection 4.1). The residual 'extra dimensions' of equation 3, over and above 4-dimensional spacetime, can then be interpreted as underlying the structure of matter in spacetime, in principle accounting for empirical phenomena such as sketched in figure 1(c). That is, under this change in perspective, time is not considered a benign independent variable flowing through the world but rather, via the substructure of equation 3, as the antecedent source of both spacetime and the matter it contains. Rather than being a mere spectator, the flow of time is simultaneously manifested as the background stage together with the theatrical scenery and cast of characters upon it. The question then concerns the degree to which the substructure of time, as expressed through equation 3, might account for the structure of matter. Such augmentations from equation 2 allow the incorporation of the non-flat external spacetime of general relativity in association with further physical features on M4, with for example the manifestly covariant form of Maxwell's source-free equations proposed to arise as reviewed for ([9] equations 5.29 and 5.30) and implying the constant speed of light propagation. More generally the theory might then be tested on all scales of the universe, from the microscopic world to the expanses of cosmological structure, as recorded by the observer, all collectively progressing in time as depicted in figure 1(c). In particular a correspondence between the substructure of time and the properties of elementary particles as observed in high energy physics experiments may be sought. In considering possible symmetries of higher-dimensional forms of time it is convenient to express equation 3 in terms of the generally finite components, defined by v^a = dx^a/ds (the limit of δx^a/δs as δs → 0) for a = 1, ..., n, of an n-dimensional vector v_n ∈ R^n, by rearranging that equation, on dividing both sides by (δs)^p and taking δs → 0, to obtain the homogeneous polynomial form denoted:

L(v_n) = α_{ab...c} v^a v^b ... v^c = 1     (4)

The origin of this expression, as the central equation of the theory, is also described in [9]. While v_n ∈ R^n in equation 4 represents the n components of the full n-dimensional form of time considered, the subcomponents v_4 = (v^0, v^1, v^2, v^3) ∈ TM4 represent the projection out of the full set in R^n onto the tangent space of M4, as an intrinsic feature of the symmetry breaking structure. (In general the components of v_4 ∈ TM4 need not be in one-to-one correspondence with a conventional list of subcomponents of v_n ∈ R^n; see for example the discussion of [12] equation 54.) In the following section we briefly review the more detailed structure of this symmetry breaking for explicit mathematical forms for equation 4 and summarise the resulting physical structures on M4 that are derived, before returning in section 4 to elaborate upon the conceptual perspective adopted for this theory.
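As a concrete numerical illustration of equation 4 in its simplest quadratic case (our own sketch, not code from the paper), the snippet below checks that a timelike unit vector satisfies the constraint L(v) = 1 and that a Lorentz boost, the 'symmetry of time' of the discussion above, preserves it:

```python
# Numerical check of the quadratic (p = 2, n = 4) case of L(v) = 1.
import math

def L_quadratic(v):
    """Lorentzian quadratic form L(v4) = (v0)^2 - (v1)^2 - (v2)^2 - (v3)^2."""
    v0, v1, v2, v3 = v
    return v0**2 - v1**2 - v2**2 - v3**2

def boost_x(v, beta):
    """Lorentz boost along the x^1 direction with velocity parameter beta."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    v0, v1, v2, v3 = v
    return (g * (v0 - beta * v1), g * (v1 - beta * v0), v2, v3)

v = (math.cosh(0.7), math.sinh(0.7), 0.0, 0.0)  # a timelike unit vector
print(L_quadratic(v))                # 1.0 (cosh^2 - sinh^2 = 1)
print(L_quadratic(boost_x(v, 0.5)))  # still 1.0, up to floating-point rounding
```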
Explicit Substructure of Time; Elements of Physics

In seeking explicit higher-dimensional forms for equation 4 with a high degree of symmetry, natural extensions from the quadratic Lorentzian form (rewriting equations 1 and 2 in the form of equation 4):

L(v_4) = (v^0)^2 − (v^1)^2 − (v^2)^2 − (v^3)^2 = 1     (5)

lead to an E6 symmetry of the 27-dimensional cubic form:

L(v_27) = det(X) = 1, with X ∈ h3(O)     (6)

the determinant of a 3 × 3 Hermitian matrix X over the octonions O ([15,16]). In general we then have L(v_4) = h^2 = 1, with h ∈ R, as a substructure embedded within L(v_27) = 1 (as noted for [9] equations 5.46 and 13.1), with a similar generalisation applying for subsequent augmentations. In turn the symmetry and structure of equation 6 can be further embedded within an E7 symmetry acting upon the 56-dimensional quartic form:

L(v_56) = q(v_56) = 1     (7)

with the 4th-order form q defined on the Freudenthal triple system F(h3O). In the case of equation 7 the symmetry breaking projection of v_4 ∈ TM4 over the external spacetime M4 out of v_56 ∈ F(h3O) leads to transformation properties of the reduced 56-dimensional representation space, under the resulting external Lorentz and residual internal gauge symmetry, that describe 'matter fields' bearing a close resemblance to structures of the Standard Model, as summarised in table 1. In particular Lorentz spinor structures are identified, as well as internal SU(3)c colour singlets and triplets with the appropriate fractional charges under an internal U(1)Q associated with electromagnetism. Unlike the case for the other lepton and quark states, the neutral components associated with the neutrino can only be accommodated in either the left or right-handed sector of the fragmented 56 components and are hence denoted ν_L in table 1, being complementary to the projected v_4 ∈ TM4 components (as discussed in [12] shortly before figure 4). The Standard Model Higgs is conjectured to be intimately related to the external 4-vector v_4 ∈ TM4 itself and the symmetry breaking projection over M4 (for reasons explained in [12] shortly after figure 4). The need to address the discrepant external Lorentz symmetry transformation properties, underlined in table 1, and to account for a full three generations of leptons and quarks incorporating a full electroweak theory and associated Higgs phenomena, leads to the prediction of a significant role for the largest exceptional Lie group E8 as the full symmetry of time, as emphasised in [12]. While the possibility of connections between the above exceptional Lie groups and structures of the Standard Model is well known (see for example [19], [20] and [21] for examples involving E6, E7 and E8 respectively, each however with a symmetry breaking pattern that differs from that considered for the present theory), here we begin with a well-defined conceptual starting point which motivates equation 4 as the pivotal expression through which these symmetries and their breaking patterns might be realised in the physical world.
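For reference, the cubic form underlying equation 6 can be written out explicitly. In one standard convention from the exceptional Jordan algebra literature (our notation, which may differ from the paper's), for a 3 × 3 Hermitian matrix over the octonions:

```latex
% 27 real components: a, b, c \in \mathbb{R} and x, y, z \in \mathbb{O}
% (3 + 3 \times 8 = 27).
\mathcal{X} =
\begin{pmatrix}
a & z & \bar{y} \\
\bar{z} & b & x \\
y & \bar{x} & c
\end{pmatrix},
\qquad
\det \mathcal{X} = abc - a\,x\bar{x} - b\,y\bar{y} - c\,z\bar{z}
                 + 2\,\mathrm{Re}(xyz),
% with (a real form of) E_6 arising as the group of linear transformations
% of h_3(\mathbb{O}) preserving this cubic norm.
```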
Via equation 4 for n > 4, the flow of time is channelled through the external dimensions of the spacetime manifold M4 and the residual dimensions of an internal space, the absolute distinction of which implies an absolute breaking of the full symmetry, denoted Ĝ (taken as Ĝ = E6, E7 or E8 for example), which is reduced to the direct product:

Ĝ → SO+(1,3) × G     (8)

Here the external Lorentz symmetry may be expressed via its double cover SL(2,C), as for the E7 case (more explicitly described in [12] figure 4), for which the external tangent space TM4 is embedded in the higher-dimensional spaces. Out of the full n-dimensional space implicitly underlying the n-dimensional form of time L(v̂) = 1, where v̂ = v_n for the largest n considered, only a 4-dimensional subset, exhibiting the quadratic form in equation 5, is perceived as an extended geometrical manifold, namely M4 as depicted for example in figure 1(b). This structure incorporates the 'spatialisation of time', as described by 3-dimensional spacelike hypersurfaces embedded within M4, which on account of this spatial property can hence be directly diagrammatically visualised, albeit with one spatial dimension suppressed in the sketch of M4 in figure 1. This describes the arena within which the structures of the physical world within which we are immersed arise, with the properties of matter determined by the structure and symmetries of the additional components over 4-dimensional spacetime, as summarised in table 1 for the E7 level. These latter internal algebraic structures do not have a literal geometric interpretation and hence, unlike the external dimensions, cannot be described by a direct visual representation other than as a distribution of 'matter' in spacetime, as sketched in figure 1(c). Within this symmetry breaking structure, on utilising the translational symmetry of equations 1 and 5 in (x^0, x^1, x^2, x^3) ∈ R^4 ≡ M4 for the corresponding subcomponents of the higher-dimensional form L(v̂) = 1, a globally flat spacetime can again be generated as depicted in figure 1(b) with Einstein tensor Gµν(x) = 0 (the Einstein tensor is defined in general in terms of components of the Riemann curvature tensor; see for example [11] subsection 3.1). However, more generally now a relation between the external curvature, associated with the Lorentz symmetry, and an internal curvature, associated with the residual gauge symmetry G of equation 8, analogous to that constructed in non-Abelian Kaluza-Klein theories, can be identified with Gµν(x) ≠ 0:

Gµν = f(A, v̂) =: −κTµν     (9)

which also defines the energy-momentum tensor Tµν(x), with κ a normalisation constant. In these cases the Minkowski metric η of equation 5 strictly only applies on the manifold M4 for local inertial frames, as for general relativity. In equation 9 the tensor function f denotes a composition of fields arising from the symmetry breaking over M4, with a more specific structure for this function to be determined by the constraints of the theory and the full form of time employed. The component A(x) represents gauge fields associated with the internal symmetry G, such as SU(3)c × U(1)Q ⊂ E7 in table 1, while v̂(x) represents the fragmented components of the multi-dimensional form of time, as also listed in table 1.

Historically, the difficulties in combining general relativity and quantum theory in a single consistent mathematical framework have proved practically intractable.
However, consistent with the existence of an underlying unifying conceptual structure, the empirical world itself seamlessly exhibits the properties of both of these principal pillars of modern physics, albeit as empirically verified in typically complementary observational environments. An understanding of how the two theoretical frameworks of general relativity and quantum theory might be extracted from a single unifying theory is likely to involve a more complete understanding of the nature of the connection between spacetime geometry and the properties of matter, whether or not the latter has its origins in a structure of 'extra dimensions'. This connection is proposed to arise here through equation 9, with the principles of general relativity largely preserved, and with the Einstein field equation:

Gµν = −κTµν     (10)

contained within that expression. On the other hand the machinery of quantum theory is conjectured to arise from the indeterminacy implied in the field composition f(A, v̂) in equation 9 under the idealisation and approximation of a flat spacetime limit. Standard postulates for the quantum theory of matter are formulated against a flat space or spacetime arena. However according to general relativity the presence of matter is associated with a non-flat spacetime through the Einstein field equation 10. Here we interpret this association to apply on all scales, and hence standard quantum theory based on the assumption of a flat spacetime environment can only represent an approximation. In attempting to combine gravitation and quantum theory the approximation of Newton's theory of gravity is not generally employed for the former, and here we suggest that applying the apparent approximations of quantum theory, in particular to 'quantise' the gravitational field which describes deviations from a flat spacetime, is similarly insufficient for a complete theory. Instead we propose that it is gravity itself, through the degeneracy of spacetime solutions for equation 9, that provides the fundamental mechanism for the quantisation of all non-gravitational fields. Both classical general relativity and standard quantum theory can in principle consistently emerge as limiting cases from the unifying conceptual picture of this theory ([9] chapter 11, [11] subsection 5.3, [12] section 6). While Newtonian gravity has been superseded by Einstein's theory we do not possess a framework to supplant quantum theory, and hence here we are proposing such a possibility through equation 9. The mismatch between the higher symmetry Ĝ of the full temporal form L(v̂) = 1 and the lower local Lorentz symmetry of spacetime M4, breaking the former to the product of equation 8 as the full multi-dimensional form of time is necessarily filtered through the external 4-dimensional frame of all observations depicted in figure 1, is central to the resulting quantum nature of matter. This universal symmetry reducing structure, proposed as the origin of the quantum properties of matter, is closely analogous to the further symmetry reducing conditions imposed by the measurement apparatus through which particular quantum phenomena are typically observed in the laboratory, as noted with reference to the Zeeman effect in ([9] section 11.4).
As noted above, the identification of an extended 4-dimensional spacetime manifold M4, with the local Minkowski metric structure of equation 5, breaks the full symmetry Ĝ of the full multi-dimensional mathematical form of time L(v̂) = 1 from equation 4 absolutely down to the direct product subgroup of equation 8 as the surviving symmetry for physical structures in the spacetime arena M4. Hence the full unifying symmetry Ĝ of the theory is hidden and not accessible for empirical phenomena. In particular we note that the structure of a direct product of the external Lorentz and internal symmetry G in equation 8 for the physics is compatible with the Coleman-Mandula theorem [22] for non-trivial particle scattering in the relativistic quantum theory limit. On the gravitational side in general relativity there is an ambiguity in the meaning of the Einstein equation 10 in terms of the relative priority of the geometry Gµν(x) or the energy-momentum Tµν(x) on either side of that expression. The original view of Einstein was influenced by Mach's principle - as expressed through the complete determination of the metric gµν(x), underlying the Einstein tensor Gµν(x), by the mass distribution of physical bodies, or more generally Tµν(x) ([14] sections 15(e and f), [23] section 6.2). With 'distant masses and their motions... regarded as the seat of the causes' ([24] section 2), and while in turn the gravitational field guides the course of material processes, general relativity was not proposed as a 'theory of matter' ([24] section 18). A similar view generally prevails today, as expressed through the popular interpretation of the Einstein equation that 'spacetime is warped by matter', that is 'Gµν ← Tµν'. This interpretation also has its origins in Newton's theory, for which mass can be considered as the source of gravitation and which can be identified in the appropriate limit of Einstein's theory, indeed with the constant in equation 10 expressed as κ = 8πG_N/c^4 where G_N is Newton's gravitational constant. Here, on the contrary, in equation 9 we place the priority on possible solutions for the spacetime geometry Gµν = f(A, v̂), with the energy-momentum Tµν being essentially defined through this expression, that is 'Gµν → Tµν'. (A similar interpretation of the Einstein equation 10 is found for example in [25] chapter XI, although still with assumptions needed for an effective matter Lagrangian to shape the specific empirical properties of matter.) We might then also turn around the connection with Mach's principle, with geometric solutions Gµν = f(A, v̂) being of primary concern and with the inertial properties of matter Tµν derived as a consequence of the nature of these solutions for equation 9. Indeed the geometric Bianchi identity for Gµν(x) itself effectively implies both energy-momentum conservation (as discussed in the opening of [9] section 5.2) and the geodesic flow of matter (as reviewed for [9] equation 5.36). While shaped by gravity, the inertial properties of matter are made apparent through an interplay with the other forces of nature such as electromagnetism, since the non-gravitational forces can act within a local inertial reference frame. These more general empirical structures of matter derive from the restrictive forms for f(A, v̂) =: −κTµν in the possible solutions for equation 9, as permitted within the constraints of the theory (as described for [9] equation 11.29), in principle avoiding the need to postulate a matter Lagrangian entirely.
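The Bianchi-identity argument invoked above can be stated compactly (a standard general-relativity identity, written in our notation):

```latex
% Contracted Bianchi identity and its consequence through equations 9 and 10:
\nabla^{\mu} G_{\mu\nu} \equiv 0
\quad\Longrightarrow\quad
\nabla^{\mu} T_{\mu\nu}
  = -\tfrac{1}{\kappa}\,\nabla^{\mu} G_{\mu\nu} = 0,
% i.e. covariant energy-momentum conservation holds identically for any
% geometric solution G_{\mu\nu} = f(A, \hat{v}), with geodesic motion of
% test matter following in the appropriate limit.
```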
The explicit development of this theory has led to the specific structures listed in table 1, incorporating features such as fractional charges and a left-right asymmetry that resemble the Standard Model. Hence through equations 4 and 9 here we do have a 'theory of matter', and one which more generally applies for matter on all scales as sketched in figure 1(c), as will be discussed further in the following section. This has been achieved by stepping back from spacetime and matter and founding the theory ultimately on the flow of time alone. The ambiguity in the meaning of the 'equals' sign in equation 10 is analogous to that for equation 1. In the latter case here we also give priority to the left-hand side, that is to δs in expressing an interval of time. In the following section we further discuss the motivation, meaning and consequences of adopting this latter choice of perspective in placing the emphasis on the underlying one-dimensional flow of time.

Objectively we typically treat three dimensions of space along with a fourth dimension parametrised by a timelike component as given together in the spacetime arena M4, as pictured in figure 1(b), which accommodates the passage of time s ∈ R subject to equation 1. It is perhaps from the subjective point of view that the shift in perspective we are considering regarding figure 1(b) is more readily made, with all of our thoughts and observations taking place through the passage of time, through which in turn a spacetime arena can be constructed via the substructure in equation 1, as reviewed in section 2. From the philosophical point of view the 'gapless continuum' of time is in a sense necessary to hold a thought together as 'a thought' or as part of a continuous coherent 'train of thoughts'. We might then begin by considering Descartes' position of sceptical doubt in reducing certainties about the world to a minimal 'I think therefore I am' ([26] part IV), adapted rather to the proposal that 'I think therefore time exists', as a generic observation for the possibility of there being thoughts. In contrast to Descartes' philosophical argument for constructing a theory of knowledge of the external empirical world from the 'I am', as expounded in his book 'Meditations on First Philosophy' of 1642, here we construct a full physical theory of the world from the minimalist starting point of the existence of time. Analysis of the arithmetic substructure of the real line representing time allows the construction of such a theory which might be tested against empirical phenomena observed in the physical world. At an elementary level this is possible since the arithmetic properties implicit in a real interval include operations of multiplication and division, opening up a richer substructure than addition and subtraction alone, as utilised for example in equation 1. As a mathematician Descartes was also one of the first to note a correspondence between these basic arithmetic operations (+, −, ×, ÷, together with the extraction of roots) and geometric constructions ([26] appendix 3 'La Géométrie', see also [27] part III on Descartes). This work contributed to the invention of coordinate geometry as also developed independently in the 1630s by Pierre de Fermat ([27] part III on Fermat). Through this connection algebra can be applied to model and solve geometric problems using mathematical methods also known as analytic geometry.
Here we consider that the quadratic composition in the real components (δx^1, δx^2, δx^3) for the possible arithmetic substructure of time in equation 1 can provide the basis for the geometrical form of a physical 3-dimensional Euclidean space itself, arbitrarily extended through the (x^1, x^2, x^3) ∈ R^3 translation symmetry. In addition to the necessity of time as an a priori form for all thoughts and experiences, the a priori necessity of space as an arena to frame the physical objects we perceive was emphasised by the philosopher Kant in the 1780s [28]. Here we are proposing that the arithmetic form of time itself in equation 1 provides our a priori predisposition to perceive the world in space as well as time, with the appropriate underlying mathematical framework as something to 'get hold of'. That is, the quadratic structure of this multi-dimensional temporal form provides the equivalent mathematical representation underlying the geometric construction of the Pythagorean theorem and the formulation of Euclid's postulates pertaining to a continuous, indefinitely extended, homogeneous and isotropic, metric space. Hence the flow of time carries with it, via equation 1, an arithmetic substructure that can be simultaneously apprehended 'externally' in the form of an extended geometric spatial arena, providing the necessary background for our perceptions of the physical world. The more general properties of an n-dimensional manifold space with a metric geometry were elaborated by Riemann in 1854 ([29], see also [27] part III on Riemann), with the case of physical space described as a triply extended magnitude upon which the square of an indefinitely small line element δs can be expressed locally as (δs)^2 = (δx^1)^2 + (δx^2)^2 + (δx^3)^2. Hence the Pythagorean theorem still holds in the limit of small scale geometric structure. In order to analyse the mathematical structure of global metric relations Riemann introduced methods of tensor analysis, including the Riemann curvature tensor as a measure of the deviation from flatness of the manifold. Riemann noted that the assumption of a global Euclidean spatial geometry with zero curvature is only a hypothesis and not a certainty when, in the mid-19th century, extrapolating beyond the bounds of observation both on the very large and very small scale. He also speculated upon the possible implications this may have for physics and raised the question of the source of metric relations. In particular Riemann conjectured that components of the curvature of space can have arbitrary values for the smaller scales, provided the total curvature of the region is close to zero, and continues (towards the end of [29] and [27] part III on Riemann, here quoting from the latter):

'Even greater complications may arise in case the line element is not representable, as has been premised, by the square root of a differential expression of the second degree.'

By 1916 Einstein had proposed the energy-momentum of matter as the source of geometric curvature through the Einstein equation 10 and established his theory of gravity [24], as we described towards the end of the previous section. To achieve this Einstein employed a 4-dimensional manifold with the local Minkowski metric structure of equation 2, which we can write as (δs)^2 = η_ab δx^a δx^b, to incorporate the equivalence principle with special relativity holding in the limit of small local inertial reference frames.
Hence, while the metric of Riemannian geometry is positive definite, both special and general relativity adopt the generalisation of a pseudo-Riemannian or Lorentzian manifold with a non-degenerate but indefinite metric for 4-dimensional 'spacetime'. This smooth symmetric metric g describes a light cone structure on the spacetime manifold (M4, g) as discussed for figure 1. Although the 'line element' is now identified with the 'proper time interval' δs, a quadratic structure for the line element is maintained in the theory of relativity in equations 1 and 2. In the present theory we are generalising further, echoing the above quote from Riemann, and are led to higher-order homogeneous forms for the 'line element' on interpreting equations 1 and 2 as a 'form of time' consistent with the general expression in equation 3, which is not restricted to homogeneous polynomials of the second degree. That is, since we are taking the perspective of placing the emphasis on the left-hand side of these expressions, with 'time' taking priority over 'space', the generalisation from (δs)^2 to (δs)^p, with p > 2, is a natural one. Here we simply analyse the possible basic arithmetic forms of time which, as well as incorporating the spatial form in equation 2, exhibit the more general structures and symmetries of equation 3, or equivalently equation 4. The higher-order polynomial forms still contain quadratic substructures that underpin our perception of spatial structures. The full symmetry Ĝ of the full form of time L(v̂) = 1 is then broken through the a priori requirement of perceiving the world not only through time but also in the form of space, as elucidated by Kant, described now by Euclidean geometry locally and still to within a good approximation on the macroscopic scale. This symmetry breaking in identifying the 4-dimensional spacetime background M4 is described for equation 8 and leads directly to the microscopic structure of matter via equation 9. That is, the properties of the 'extra dimensions' in equation 4, over and above those needed to construct the 4-dimensional spacetime manifold, are manifested as the physical structures of matter subject to laws of physics that might be deduced from the constraints of the theory. Pursuing this idea for the 56-dimensional form of time L(v_56) = 1 of equation 7, and analysing the breaking pattern of the full E7 symmetry, has led to non-trivial success with a series of empirical Standard Model properties identified as summarised for table 1. Since the flow of time through the general multi-dimensional form of equation 4 is perceived as motion in space through the extended 4-dimensional geometric substructure M4, as depicted in figure 1(c), and since 'matter' is in part defined as that which 'occupies space', the practical interpretation of observations in the world naturally leads to conceptions of material substance and its interactions. For any theory of matter such interactions are proposed to account for both the apparent properties of matter and our ability to make observations of them. In one of the first books on general relativity Weyl suggests the general definition: 'In the wide sense, in which we now use the word, matter is that of which we take cognisance directly through our senses' ([30] section 25). More precisely Weyl also notes that we can 'assign the term matter to that real thing, which is represented by the energy-momentum tensor' ([30] section 25), that is Tµν on the right-hand side of the Einstein equation 10.
In the historical context of the early 1920s Weyl describes how this practical definition can incorporate a theory in which the basic elements are fields. 'Matter' might then be considered 'an offspring of the field', with the atomic properties of matter, including electron phenomena, associated with 'energy-knots' of localised extreme values propagating in the electromagnetic field ([30] section 25). In the present theory both the classical and quantum properties of matter are incorporated in Tµν as defined in equation 9 through the degeneracy of field solutions under f(A, v̂) for the external spacetime geometry Gµν. The resulting properties of matter then include the elementary particle phenomena observed in high energy physics experiments, with the range of possible particle interactions shaped by the internal substructure of the flow of time itself as described for example for table 1. Ordinary macroscopic matter is not then to be considered as 'built out of' elementary particles; rather, matter on all scales is a direct manifestation of mathematical relations deriving from the multi-dimensional forms of the underlying flow of time, with the latter essentially perceived as a 'flow of matter' through spacetime within which we are immersed. The conception of a microscopic material particle substratum derives from the process of breaking up 'matter' - so to account for the macroscopic properties of matter as a composite of such elementary 'material' entities is essentially circular. Similarly the impression we have of matter on any scale as having an apparent independent existence or sense of inertia is only relative to other test or reference bodies, which are also assumed to possess similar innate 'material' properties, and the hypothesis of independent material bodies on any scale cancels out by the circularity of the argument. The postulated material concept however remains of great pragmatic value in describing the world and communicating information about it, as for the empirical phenomena depicted in figure 1(c), while saying very little about what the physical world actually is at a fundamental level. Indeed our prevailing understanding of the nature of matter as distributed in space has evolved significantly historically in time. The conception dating from Democritus (circa 460-370 B.C.) in ancient Greece, proposing that everything is composed of indivisible and indestructible atoms of matter pursuing a pattern of motion according to deterministic natural laws, has remained influential. The laws of motion, based on quantitative empirical observations, were expressed with mathematical precision for the extended and impenetrable parts, subject to forces of attraction, composing all bodies in the Newtonian mechanical worldview of the 17th century. Subsequently in the 19th century Faraday and Maxwell developed the field concept to account for electromagnetic phenomena, for which the notion of 'action at a distance' between particles of matter through empty space could consequently be discarded. Maxwell's theory and equations for the electromagnetic field influenced Einstein's theory of the gravitational field, culminating in equation 10 in the early 20th century.
A unified field picture could then be sought with solutions for classical fields representing corpuscular states, either in terms of localised regions of high energy density in the electromagnetic field, as described by Weyl as noted above, or with massive particles corresponding to microscopic extreme structures of the gravitational field itself. Also in the first half of the 20th century a more ephemeral conception of matter was introduced with quantum mechanics and quantum field theory, with 'particle' phenomena again ascribed to the properties of fields, now as 'quanta' of field excitations, while in other developments unification schemes with fields themselves deriving from the properties of extra spatial dimensions were first proposed by Kaluza and Klein [1,2]. In the latter half of the 20th century these two frameworks were combined in string theory, with the methods of quantum theory based upon a point-like particle model in 4-dimensional spacetime adapted and applied consistently for one-dimensional relativistic vibrating 'strings' in a 10 or 26-dimensional spacetime, with particle states represented by quantised string excitations. As a leading candidate for a theory of 'quantum gravity' the technical developments of string theory continue to progress in the 21st century, as we also discuss in the following section. In his historical review of western philosophy Bertrand Russell, in considering the meaning that might be attached to the word 'matter', adopts a pragmatic approach in expressing the opinion: 'My own definition of 'matter' may seem unsatisfactory; I should define it as what satisfies the equations of physics' ([31] book 3, towards the end of chapter 16). This raises the question regarding the purpose of theoretical physics itself, in terms of whether it concerns an ever evolving description of matter, indefinitely refined by observations and an improving mathematical account, or whether the ultimate goal is to uncover an understanding of what matter actually is, with material entities possessing a structure that can be considered isomorphically equivalent to the mathematical expressions of the theory. Indeed, while our theoretical conception of matter has evolved, we nevertheless tend to assume that there is a real objective sense in which 'matter' in 5th century B.C. Athens is exactly the same as that in 21st century A.D. Princeton, having a coherent, rational essence, with only the domain of our knowledge having changed. For the theory proposed in this paper the ambition is to explain what matter actually is, and why it has the properties it is observed to have. This aim is based on the approach of developing the theory by beginning with an underlying conceptual motivation from which the mathematical structure of the theory and corresponding properties of matter are subsequently deduced, rather than by adopting a conception of matter that directly describes or parametrises observed empirical phenomena or by setting the theory within a largely internally motivated and sophisticated mathematical framework from its inception. This approach is summarised in the title of this paper, with the properties of matter together with the geometrical form of spacetime itself proposed to derive directly from the elementary substructure of the underlying flow of time, as a universal unifying principle.
The point of view adopted here regarding the relations between space, time and matter might then be considered as a non-trivial 'gestalt shift' from a more standard conception of these structures. Here the emphasis is placed firmly on the pre-eminence of time, and our perception of it through the mathematical forms of time, rather than focusing from the outset upon forms of matter in space and time. This significant shift in perspective hinges on the interpretation of equation 1. A well known example of a gestalt shift involving our perception of an image is the drawing of the 'duck-rabbit' as presented here in figure 2(b). There is no objective fact of the matter regarding whether figure 2(b) depicts a duck or a rabbit, with the choice being purely a question of adopting a particular subjective perspective, which can be alternated. However once we have decided to see either a duck or a rabbit the details of an augmented drawing will diverge between the two cases as we extrapolate below the neckline of figure 2(b), with unambiguous differences between features of a duck or a rabbit emerging as depicted in figures 2(a) and 2(c) respectively. In the case of equation 1, which encapsulates the structure of the local inertial frames implicit in figure 1(b), we can see this relation either as an expression for 'time' δs, by emphasising the left-hand side, or for 'spacetime' (δx^0, δx^1, δx^2, δx^3), by focusing on the right-hand side. If we adopt the latter perspective and extrapolate beyond equation 1 for higher-dimensional spacetime structures, the first step would be to add a single extra spatial dimension x^4 with a '−(δx^4)^2' term appended to the right-hand side, as considered by Kaluza and Klein [1,2] (see the sketch following this paragraph). With four of the additional components of the augmented 5-dimensional metric field interpreted as the electromagnetic 4-vector potential field Aµ(x), incorporated into a single framework alongside the original gravitational field components gµν(x), this initial step dating from the 1920s was encouraging in terms of provisional connections with the empirical world as it was known then. Further augmentations in the structure of extra spatial dimensions have led to a large range of possible models in recent decades, as alluded to in the opening of section 1, for which however, as we also noted there, identifying a direct and unambiguous connection with specific properties of the modern-day Standard Model of particle physics has proved difficult. On the other hand, on making a gestalt shift and looking at equation 1 the other way, adopting the alternative interpretation of considering that equation to represent a particular arithmetic expression for the form of time on the left-hand side, the extrapolation now leads to the generalisation of equation 3. Rearranged as equation 4 this naturally leads to the explicit 56-dimensional quartic form of time in equation 7, as briefly reviewed in section 3, with the breaking of the corresponding E7 symmetry over a 4-dimensional spacetime base exhibiting the structure of table 1. Hence in exploring the natural mathematical extrapolation for this interpretation we directly identify features that closely resemble specific empirical properties of the Standard Model, including fractional charges and a left-right asymmetry.
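For comparison with the Kaluza-Klein extrapolation sketched above, the 5-dimensional line element can be written, in one common textbook parametrisation (our notation, not the paper's), so that four of the additional metric components supply the electromagnetic potential:

```latex
% 5D Kaluza-Klein metric ansatz:
d\hat{s}^{\,2} = g_{\mu\nu}(x)\,dx^{\mu} dx^{\nu}
  - \phi^{2}(x)\left(dx^{4} + \kappa_5\, A_{\mu}(x)\,dx^{\mu}\right)^{2},
% with g_{\mu\nu} the 4D gravitational field, A_\mu identified with the
% electromagnetic 4-vector potential, \phi a scalar (dilaton) field and
% \kappa_5 a coupling constant; setting \phi = 1 recovers the original
% Kaluza-Klein reduction.
```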
Beyond uncovering this rich vein of esoteric properties of the Standard Model, we are led to a prediction of a yet higher-dimensional form of time with an E8 symmetry to complete this empirical structure, as proposed in ([9] section 9.3) and explored further in [12]. The achievement of these non-trivial inroads into the otherwise seemingly arbitrary and puzzling features of the Standard Model to a large degree vindicates this shift in perspective towards a unification scheme with both external spacetime and the observed properties of matter deriving from the arithmetic substructure of the underlying one-dimensional flow of time alone, and suggests that for establishing a fundamental physical theory this might ultimately be the right way to look at the world and to comprehend the underlying workings of the universe. Unlike the analogy of the duck-rabbit in figure 2(b), which involves different ways of looking at something in the world, the shift in perspective described here for figure 1(c) concerns the manner in which we see the whole physical universe. This hence requires stepping back further from our preconceptions and assumptions made about the nature of the world itself. In objective terms this gestalt shift is particularly hard to see since we immediately encounter physical structures in spacetime as apparently given, out there for us to observe as depicted in figure 1(c). This situation is somewhat different to that regarding figure 2(b), for which the motivation for seeing a duck or a rabbit is essentially a symmetric 50:50 choice of perspective. For the universe as a whole our tendency to see the world around us, as depicted for example in figure 1(c), as 'matter in space and time' is a deep-seated, firmly rooted viewpoint that we generally take for granted. Inevitably the early developments in science set out by describing and cataloguing what we can physically detect, before seeking a deeper explanation for these empirical observations. However, from that perspective, beyond being posited for pragmatic purposes, it is difficult to conceive of what 'matter' in space and time might fundamentally be without forever begging the question of the nature of the next layer down, either literally or in an underlying explanatory sense. Here we are suggesting a change in viewpoint from a basis of 'matter in space and time' to a foundation in the 'flow of time' alone. The corresponding gestalt shift to this way of seeing the world as a manifestation of multi-dimensional forms of time remains however objectively somewhat counter-intuitive. It is from a subjective perspective, with all of our thoughts and observations of the physical world as represented in figure 1(c) flowing through and accompanied by an irreducible progression in time, as described near the opening of this section, that this change in viewpoint, with time playing the primary role, is perhaps more readily seen. Indeed, while we do not perceive extra spatial dimensions, and even find it very hard to conceive of what a higher-dimensional space would 'look like', as also alluded to in the opening of this section, we do intimately experience the one-dimensional passage of time, and in this subjective sense this change in perspective is a natural one, and one that provides a simple and unambiguous starting point for a theory.
The corresponding mathematical basis for this theory is elementary, as seen in equations 1-4, but is nevertheless accompanied by a significant and non-trivial gestalt shift towards this conceptually novel way of looking at the world. The approach could be taken of constructing a theory based on positing equation 4 as an 'ansatz', then following the mathematical structures of equations 4-7 and exploring the consequences, without considering the underlying conceptual and philosophical elements of the motivation and interpretation for the theory. In this manner similar success in obtaining the symmetry breaking pattern of table 1 and a foothold in the properties of the Standard Model could be achieved. However, this would miss the conceptual origin of the theory, which is seen as an essential and irreducible element of its foundation. Alternatively, if we do start with this conceptually novel basis and set out to construct a physical theory purely out of the one-dimensional flow of time we are naturally led to these mathematical structures, which are found to exhibit this recognisable correspondence with empirical properties of particle physics. Speaking in the early 1920s Einstein [34] noted that while the natural sciences gained a degree of security from applying mathematics, the connection between mathematics and the physical world remained an uncertain one. In general the observation of connections made between the mathematical structure of scientific theories and the physical structure of the empirical world has often been accompanied by an appreciative element of surprise, as famously pondered by Einstein [34]: How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Here we have adopted the perspective that an irreducible element of all 'experience', and of all 'human thought', is the passage of time. The one-dimensional flow of time is then itself taken as the source of the mathematical structures of the theory. That is, the substructure inherent in an interval of time δs resembles the microscopic structure of the physical world as explored in high energy physics experiments. All properties of matter more generally are proposed to arise in this way, infused in, deriving from and intimately connected with the passage of time through which all physical objects of our experience are encountered, as represented in figure 1(c). Hence in principle this theory carries with it an explanation of how the mathematical structures deriving from it can account for the structures of the empirical world in a less surprising manner through this intrinsic connection. Since the employment of the independent continuous variable of time in Newton's method of fluxions, which he invented and then applied to describe the motion of bodies through space in his mechanics, successful physical theories utilising differential calculus, including Maxwell's equations, the Dirac equation, quantum theory and general relativity, have incorporated the notion of a continuous flow of time as parametrised by a real number s ∈ R. The properties of this continuum are employed here in particular in deriving the general expression for the form of time in equation 4, which relies on the infinitesimal nature of the real numbers. This is similar to the way that other theories utilise the continuous progression of time in differential expressions, except that the present theory is founded upon the structure of the temporal continuum alone.
We conclude this section by summing up the essential argument for how such a construction is possible. When seen as an extension from the integers, with elements p, q ∈ Z, and the rational numbers, with elements expressed in the quotient form p/q ∈ Q for q ≠ 0, the unique complete ordered field of the real number system R, in containing the former cases as subsets, can be defined in turn through the relatively sophisticated, and isomorphically equivalent, constructions of Cantor, via Cauchy sequences of rational numbers, or of Dedekind, via cuts partitioning the set of rational numbers (see for example [35] chapter 13, [27] part I on Dedekind). However the intuitive notion of a gapless continuum is a very simple idea, as for the conception of progression in time. While time itself is not a number, the real numbers provide a rigorous mathematical representation of this one-dimensional ordered continuum as employed in physical theories. This structure R may appear somewhat mysterious in comparison to the apparent substructures of Z and Q, but these latter number systems are not appropriate mathematical objects to represent the flow of time, which is not here to be thought of as described by a 'collection of points'. The notion of a 'point in time' is a mathematical idealisation or limiting extrapolation which does not embody the essential property of time which, as considered here, is necessarily a one-dimensional continuum. The essence of time is lost in extracting 'a mathematical point' of time, as something that can never be subjectively encountered. Here we begin with the concept of time as a one-dimensional gapless continuum and then utilise the real numbers as a structurally isomorphic mathematical model as the basis for a physical theory. The real numbers appear perhaps less mysterious when introduced through this conceptual motivation rather than in the mathematical context of other number systems. The continuum property is the key feature, with actual 'real numbers' only associated with arbitrary intervals of time for pragmatic purposes relative to a particular unit, such as an Earth day, and only to within the limits of precision of measuring devices and to within the number of significant figures employed in the explicit decimal representation of the real number. When we think of the substructure of time we might first think of days containing hours containing minutes containing seconds and so on, according to conventional units for dividing up the real line representing time. However the structure of the real number system is much richer than this, in particular involving multiplicative as well as additive operations, containing the substructure of equation 3 at an elementary level for an infinitesimal real interval. Since the interval δs ∈ R has this arithmetic property, and since this mathematical continuum is identified with the continuum of time, then time itself can be considered to possess the richer substructure described in equation 3. We can then ask how this substructure of time might be manifested. In particular, since the possible arithmetic compositions of the interval δs incorporate the quadratic metrical structure of equations 1 and 2, time itself carries with it a substructure that can be realised in a geometrical 'spatial' form. This is the change in perspective we are adopting for figure 1(b).
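The general expression referred to as equation 3 is not reproduced in this section; as a hedged schematic only, taking it to be the homogeneous polynomial form of time suggested by the later discussion of the 'full homogeneous polynomial form of time', it may be sketched as:

```latex
% Schematic degree-n arithmetic composition of an infinitesimal interval \delta s,
% assumed here to be a homogeneous polynomial in the components \delta x^a:
(\delta s)^n = h_{a_1 a_2 \cdots a_n}\, \delta x^{a_1} \delta x^{a_2} \cdots \delta x^{a_n}
% For n = 2 this reduces to the quadratic metrical structure of equations 1 and 2.
```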
Through this basic arithmetic expression for a real interval the one-dimensional gapless continuum of time can be manifested as an indefinitely extended 4-dimensional gapless continuum of spacetime M4 ≡ R^4, deriving directly from the translation symmetry of equations 1 and 2, incorporating the geometric properties of 3-dimensional Euclidean space. Given the more general higher-dimensional and higher-order forms for time in equation 3, the necessity of perceiving the world in space as well as time projects out the quadratic substructure which supports the spacetime base manifold M4, maintaining a local 4-dimensional pseudo-Euclidean structure, as a framework for the observation of apparent forms of matter such as sketched in figure 1(c), with the properties of matter deriving from the residual temporal components and symmetry breaking pattern. That is, by mapping the continuous flow of time, through which we perceive the physical world, in a structurally isomorphic one-to-one manner onto the real numbers, a theoretical structure can be directly derived in purely mathematical terms that we can then map back onto the empirical world, in principle in a structurally isomorphic one-to-one manner, in the form of a unified physical theory. The theory might then be tested against the empirical data to within the precision of calculation and experiment. The general form for the continuum of time in equation 3 can be rewritten in terms of finite components in equation 4 and explicit full forms of time with a high degree of symmetry considered, as described in section 3 for extensions from equation 5 with Lorentz symmetry to equations 6 and 7 with an E6 and E7 symmetry respectively. Analysing the properties of the remnants surviving the E7 symmetry breaking projection over M4, as summarised in table 1, the elementary substructure of time is found to bear a close resemblance to the elementary microstructure of the physical world as observed in the high energy physics laboratory. In making this 'gestalt shift' in perspective, and following through the consequences, it is striking that the development of this very simple idea, founded upon the one dimension of time alone, leads to a series of esoteric empirical properties of the Standard Model of particle physics, without the need to postulate an independent material substratum to accommodate these properties. Time, from this perspective, is not just a benign independent parameter, not just a spectator of events passing in the world, but rather simultaneously underlies the geometric form and determines the empirical structure of all observations in the world. The temporal structure of the world is the world, and to adopt this perspective is to understand the basis for this theory.

Establishing a Firm Foundation

Many unification frameworks, such as those that posit extra spatial dimensions as alluded to in the opening of section 1 [3,4,5,6], set out by introducing extra structures over and above 4-dimensional spacetime. For some models with extra dimensions there are predictions for effects beyond the Standard Model that may be accessible to laboratory experiments, although no such effects have yet been empirically observed (see for example [36,37]). There is also perhaps a danger of overreaching without first making decisive progress in assimilating specific features of the Standard Model itself into the extra spatial dimensions paradigm, which has proved difficult, as also noted in the opening of section 1 with reference to [7,8].
For the present theory, based in contrast on the single dimension of time, we have been able to utilise the rich structure of existing clues from observations in high energy physics experiments, as embodied in the Standard Model, as an empirical criterion to initially test the theory against. The connections established at the level of the E6 symmetry for the form of time in equation 6 include the identification of Weyl spinors and an internal SU(3)_c × U(1)_Q symmetry with the appropriate fractional charge structure, as reviewed in ([12] subsection 4.2). The natural embedding of this cubic form in the quartic form of equation 7 suggests that the latter can be interpreted as a higher-dimensional form of time with an E7 symmetry, for which further connections with the Standard Model might be expected. This has been verified, with the further features of Dirac spinors and an intrinsic left-right asymmetry identified for this E7 symmetry of time, as summarised here for table 1 and reviewed in ([12] subsection 4.3). This connection with observations provides a reassuring foundation to build upon, leading to the theoretical prediction of a further augmentation to an E8 symmetry of time as pursued in ([12] section 5), and in turn to potential empirical predictions as tentatively outlined in ([12] section 7). We also note that the present theory, based on multi-dimensional forms of time, is also very different from models with extra timelike dimensions (see for example [38,39] and the references therein). While maintaining a quadratic form such models augment the 4-dimensional spacetime form on the right-hand side of equation 2 with a non-Lorentzian metric signature, with care then needed to avoid conflict with causality and unitarity. Here we are generalising the form of time itself, leading to cubic and higher-order polynomial forms as described for equation 3. The identification of a smooth 4-dimensional spacetime background from a quadratic substructure, as the necessary arena for all observations, then breaks the full symmetry of time. This 4-dimensional manifold M4 incorporates a local metric with the Lorentz signature of equation 2 which determines a light cone structure on M4 within which causal relations are well-defined, reflecting the underlying ordered one-dimensional progression in time itself (see also the discussion of causality in [9] midway through section 13.3). This underlying motivation was contrasted with that of special relativity in section 2. Drawing the two postulates of special relativity together in a consistent framework for electrodynamics, the aesthetic guide of simplicity led Einstein to a clarification of the formulation of time and simultaneity ([13], [14] chapter 7(a)). With no absolute time or preferred reference frame defined in special relativity 'there are as many times as there are inertial frames' ([14] chapter 7(a)). This new approach to a physical theory was further developed in general relativity to incorporate in a consistent manner inertial frames that are not related by a uniform relative motion as well as arbitrary reference frames. For the present theory essentially 'there are as many times as there are observers', each of whom is associated with a fundamental temporal parameter s ∈ R as represented in the centre of figure 1(c), and each of whom is carried inexorably into the future with the ordered causal progression in time encapsulated in its spacetime M4 manifestation.
The compatibility of this multiplicity of times, corresponding to a multiplicity of observers, and the mutual reciprocal relations between them in the full theory is very similar to that in special and general relativity ([9] ending of section 5.3 and near the opening of section 13.1). Extrapolating beyond the 4-dimensional form of time of equation 2, and generalising beyond the local inertial frames of general relativity, we are led directly to equation 3 and observe that in projecting higher-dimensional forms of time over the 4-dimensional spacetime manifold M4 we obtain a 'theory of matter' in spacetime, as described towards the end of section 3. In this manner the structures arising directly from an observer's temporal flow s include both matter fields and other forces of nature in addition to gravity, with the trajectory of the observer through spacetime buffeted by the non-gravitational forces and not in general pursuing the course of a local inertial reference frame - a situation that arises in the vicinity of the Earth only under very special, or temporary, circumstances. The specific properties of matter derived will depend upon the full form of time, which can be written as L(v) = 1 as described for equation 4, and the full symmetry Ĝ, which has been proposed to be the exceptional Lie group E8 as noted above. In light of the above-mentioned issues of both causality and unitarity we also note that the particular non-compact real form of E8 to be employed is proposed to be obtained through augmentations from the 4-dimensional Lorentz group via the 10-dimensional Lorentz group, as described for ([12] equation 89), with a compact internal symmetry group G in equation 8 required for a consistent quantum theory limit ([12] section 6). Even for the full quantum structure of this theory, deriving from equation 9 as described in section 3, the external 4-dimensional spacetime M4 with a local Lorentz symmetry is considered a smooth continuous base manifold structure. In other theories spacetime itself may be composed of or exhibit an intrinsically discrete or grainy structure. This is the case for 'loop quantum gravity' (see for example [40]) with 'quanta of space', on a microscopic scale associated with the Planck length, represented by the nodes of a 'spin network'. In this case the apparent features of a smooth spacetime only emerge on the macroscopic scale, which extends down to all scales presently observable. The theory aims to construct a generalisation of quantum field theory without a background metric structure and consistent with general covariance, hence respecting this central symmetry of classical general relativity. The philosophy adopted by loop quantum gravity is to tackle one major problem at a time, specifically the identification of a quantum field theory for which general relativity arises in the classical limit, while unification with the Standard Model is not incorporated within this picture. In this sense the aims are less ambitious than those of string theory (see for example [41]), the other main candidate for a consistent quantisation of gravity. Any proposed fundamental theory will ultimately need to account for established successful theories, consistently combining general relativity with quantum field theory as applied for particle physics phenomena together with an explanation of the Standard Model, at least in the appropriate limiting approximations consistent with all observations, and ideally with some novel predictions empirically verified.
Even if a unification scheme should achieve these technical and empirical successes, questions can still be raised concerning the origins of the theory in terms of why the world should be this way. Given the focus upon questions at the other end, regarding for example the derivation of observed Standard Model properties, the foundational questions are sometimes postponed or overlooked. This may leave a theory protractedly suspended upon the provisional basis of an ansatz or set of postulates that is declared this way since 'we have to start somewhere', which seems insufficient for an 'ultimate' theory. Questions regarding the ultimate origin of a theory, beyond its pragmatic utility, might in fact be considered intractable, prompting in some cases the subjective notion that the workings of the universe ought to be described by 'aesthetically pleasing' mathematics, which might provide a guide towards constructing such a theory. This has also been the case in employing Lie groups for proposals of Standard Model unification, ranging from the early SU(5) 'Grand Unified Theory' [42], for which the authors propose from the outset that 'the uniqueness and simplicity of our scheme are reasons enough that it be taken seriously', to the incorporation of gravity also in the E8 model of [21], which opens with an appeal to the principle that 'the mathematics of the universe should be beautiful'. However, while being of some heuristic value, such a criterion is neither well-defined nor decisive in pointing towards an ultimate unification scheme, and hence is not fully satisfactory in itself in motivating the basis for a theoretical framework. A good deal of work in theoretical physics involves addressing internal mathematical technicalities or problems that have arisen in developing the structure of existing theories, often with no immediate sight of either the foundational questions at the one end or connections with the empirical world at the other. This is perhaps the case for some of the progress made in developing string theory [41], in pursuing the ambition of incorporating a consistent quantised theory of gravity. If this program is ultimately successful, even identifying one or more preferred string configurations that reproduce the properties of both the Standard Model of particle physics and large scale cosmological structure out of a vast collection of possible solutions on addressing the 'landscape problem' [43], the question would still remain concerning why the world should be this way, apparently constructed from the fundamental objects of one-dimensional 'strings' or higher-dimensional 'branes' in a 10- or 11-dimensional spacetime for example. These foundational issues are perhaps exacerbated by the fact that historically string theory was discovered somewhat accidentally, having its roots in a different application as an unsuccessful model for hadrons from the 1960s, rather than arising from a more direct motivation. All consistent string theories possess a closed string state describing a zero mass spin-2 particle, which is problematic for a model of hadrons, and as such the theory was superseded by quantum chromodynamics.
However in the 1970s it was 'felt that string theory was too beautiful to be just a mathematical curiosity' [44], and with the massless spin-2 state in principle describing the 'graviton', the proposed carrier of the gravitational force, string theory was reinterpreted as a natural candidate for a fundamental theory of 'quantum gravity' united with the quantum theories for the other forces of nature and matter fields. In avoiding point-like particle entities string theory also brought with it a softer short distance behaviour, in principle evading the calculational infinities that plagued other attempts to quantise gravity. The conceptual motivation for string theory however still remains of a seemingly provisional nature into the 21st century, with the emphasis perhaps being placed more upon the rigour of the mathematical formulation of the theory, which is a somewhat novel approach compared to earlier developments in physics. In the case of general relativity by contrast a simple conceptual picture based on Einstein's insight into the intrinsic structure of spacetime, as demonstrated by his 'thought experiment' concerning the perspective of an observer in free fall, came first in 1907, described by Einstein as 'the happiest thought of my life' ([14] chapter 9) and encapsulated in the equivalence principle. This principle was itself motivated both by general experience and experimental observation of falling objects. There then followed several years of technical mathematical development in the geometric structure of the theory leading to the Einstein field equation 10 and a theory of gravitation in 1915 ([24], [14] chapters 9-14). On the other hand the mathematical formulation of quantum theory was introduced in the mid-1920s based on innovation and a working set of assumptions, improvised by a number of physicists including the key figures of Heisenberg, Schrödinger, Born and Bohr, driven by the empirically observed quantities it was designed to model (see for example [45] chapter 'Theory, Criticism and a Philosophy' by Heisenberg, [46] chapter 12). Only after the mathematical scheme had been postulated and successful results achieved was the language developed to describe it, while the conceptual interpretation of quantum theory is still being debated today. The theory is nevertheless grounded in unequivocal laboratory observations. Heisenberg [45] also explains his scepticism towards placing too much emphasis on rigorous mathematical methods, based on the concern of becoming too detached from the experimental data. This focus upon the mathematical scheme became increasingly significant in developing the sophisticated calculational tools of quantum field theory (QFT) in the mid-20th century, which nevertheless have achieved considerable success in matching the measurements made in the high energy physics laboratory. Without a firm conceptual underpinning and with the technical formulations of theories seemingly taking on an independent life of their own it was in this context that Wigner wrote of the 'unreasonable effectiveness of mathematics in the natural sciences' [47]. This sentiment echoes the 'How can it be...?' quote from Einstein cited in the previous section. It would seem all the more surprising that a mathematical theory should account for phenomena in the empirical world if both the founding motivation for the theory lacks a clear attachment to the physical world and the internal formalism of the theory has been developed in a similarly detached vein.
While general relativity makes this connection with the physical world through the equivalence principle, quantum theory, in the form for example of Heisenberg's 'matrix mechanics', is rooted in the empirical observations it relates, in particular regarding patterns of atomic spectral data. In both cases significant empirical successes have been achieved beyond the original scope of the theories and without meeting any failures. On the other hand formulations of quantum gravity, such as string theory or loop quantum gravity, arguably lack either a conceptual or observational anchor in the physical world, being founded largely upon addressing the technical challenges arising from the assumption that gravity should be quantised, and empirical successes for these theories have to date been limited. While possessed of elegant and sophisticated mathematical structure, some of which does reflect our knowledge of the physical world, it may be that the development of these frameworks, and even of QFT itself, despite the technical and pragmatic successes, has been somewhat premature in lacking the support of a firm conceptual basis. It is sometimes suggested that a final unifying theory is still separated from us by a considerable amount of further work and technical breakthroughs into the future ([48] see for example the contribution from Rovelli), or even that the goal of a single unifying theory may be untenable [49]. These views are typically expressed with reference to the status, rate of progress, and presently perceived obstacles in the context of an existing theoretical framework, such as string theory or loop quantum gravity, which may indeed be some distance from providing an ultimate resolution. In principle however a new idea offering a new perspective has the potential of providing a different path towards that same shared ultimate goal, along which the obstacles may not appear so insurmountable, bringing the prospects of a complete unified theory much closer than otherwise anticipated. In particular, with respect to foundational questions, compared with string theory the situation is essentially diametrically opposite for the theory described in this paper. Here from the beginning we consider the conceptual and philosophical questions concerning what a theory might look like in order to explain why the universe should be this way. Posed in the context of studying theories based on extra spatial dimensions, we make a subtle change in perspective in founding the theory upon the flow of time alone, as the elementary one-dimensional continuum through which all of our observations are made. Pursuing the elementary mathematical expression of this idea, the implicit substructure of an interval of time can provide the source of both spacetime and the matter it contains, via the structure and interpretation of equations 1-4. Explicit mathematical forms have then been identified and applied to fill out this conceptual picture, rather than moulding the development of the theory from the outset within the confines of a preconceived or postulated mathematical framework. Since mathematics provides a precise extension of familiar spoken language, in order for a theoretical framework to connect with and describe the physical world, and for us to understand what the theory means, it should ideally be built upon the support of a rational underlying conceptual picture - one that can be comprehended and conveyed in unambiguous linguistic terms and which itself exhibits a manifest connection with the world.
Based on the firm conceptual foundation of a single dimension of time the present theory is also consistent with the view expressed by Einstein [50]: It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience. This sentiment is often paraphrased as the maxim: 'Everything should be made as simple as possible, but not simpler'. As for the quote from Einstein discussed here in the previous section, the above quote is frequently cited by theoretical physicists in the 21st century - they are included in this paper to reflect inclinations common in modern-day theoretical physics as much as those of Einstein nearly a hundred years ago. The foundation of the present theory can be compared with the origins of general relativity, developed from a largely conceptual basis, and that of quantum theory, motivated mainly from an empirical basis, as reviewed earlier in this section. Here the original motivation is not based upon a particular kind of experience or upon particular experiments, but rather we simply note that all experience and all experiments take place irreducibly in time, and that time contains within itself a substructure that can be expressed mathematically and utilised to construct a full physical theory of the world. In addition to the underlying conceptual and mathematical simplicity, elements of mathematical naturalness and uniqueness are in part employed in leading from equation 4 via equation 5 to equations 6 and 7, with the actions of the respective Lie groups E6 and E7 describing a high degree of symmetry for these multi-dimensional forms of time. In this manner contact is made with both familiar structures from the mathematical physics literature as well as with empirical structures of the physical world as summarised in table 1. The present theory is hence firmly grounded at both ends, being conceptually founded on the simple notion of the flow of time through which all observations are made, through to the successes achieved in accounting for a series of empirical features of the Standard Model of particle physics (with further possible empirical connections reviewed in section 3 and cited in section 6). There is also a close and transparent connection between these two ends, with properties of the Standard Model deriving directly from the symmetry breaking pattern for the full form of time of equation 7, which is motivated as a natural instantiation for the general form of time of equation 4 which, as the central equation of this theory, provides a direct mathematical expression of the underlying conceptual picture. This firm foundation in both the conceptual and the empirical sense, together with the close relation between them, then provides a robust basis for the further mathematical and technical development of the theory. The explanatory power of the theory leads also to predictive power in pointing to a role for E8 as the ultimate symmetry of time [12], as recalled in the opening of this section. As reviewed in ([12] subsection 2.3), real forms of the exceptional Lie groups E6, E7 and E8 are known to describe symmetries associated with certain natural mathematical generalisations of 4-dimensional spacetime (see for example [51]).
For E6 and E7 these structures can also be interpreted as symmetries of time, for the forms of equations 6 and 7 respectively, which naturally incorporate as a substructure 4-dimensional spacetime and the Lorentz symmetry of equation 5. These observations in part motivate considering the largest and unique exceptional Lie group E8 as the symmetry of the full homogeneous polynomial form of time, accommodating the subgroup chain E8 ⊃ E7 ⊃ E6 ⊃ Lorentz, in each case with a symmetry breaking pattern deriving from the necessary projection over the external 4-dimensional spacetime substructure with the Lorentz symmetry subgroup acting on the local tangent space TM4 of the spacetime manifold M4. While analysis of the E6 and E7 stages has already in principle provided explanations for several puzzling features of the Standard Model, the breaking of the predicted full E8 symmetry group is proposed to complete the full Standard Model particle multiplet picture, as we argue in [12]. The identification of the precise structure of this E8 action and the specific composition of the full form L(v) = 1 itself, with the appropriate properties, then remains as a theoretical puzzle to be addressed. As well as the natural mathematical embeddings in the progression towards higher-dimensional forms of time and the unique mathematical structures involved, we might also in principle attempt to associate these structures with a notion of 'mathematical beauty', if such a concept might be correlated with possession of a high degree of symmetry. There are four classical Lie groups which, as for the largest exceptional Lie group E8, are associated with a Lie algebra having an 8-dimensional maximal Abelian subalgebra, which is hence a common core feature of these five groups. These four rank-8 classical Lie algebras are A8 (su(9)), B8 (so(17)), C8 (sp(16)) and D8 (so(16)), composed respectively of a total of 80, 136, 136 and 120 independent symmetry generators. By comparison the rank-8 exceptional Lie algebra E8 comprises a total of 248 generators, and hence describes a greater concentration of independent symmetry actions that could be interpreted as quantifying a higher degree of mathematical beauty. We note, however, that while such an aesthetically appealing property is perhaps desirable it is not our primary guiding motivation here. This progression towards a significant role for E8 as a symmetry of time can then be considered as a testable theoretical prediction of the theory (as noted in [12] at the end of subsection 5.1). This in turn is sufficient to give a hint towards the potential empirical predictions for the theory (as listed in [12] section 7). As well as the predicted application of an E8 symmetry, the precise nature of 'quantisation' for the theory is currently the other main area of focus in developing this framework; we have touched upon both of these connections with physics here in section 3. As noted there a picture has emerged in which the gravitational field itself is not quantised. With quantum theory based on a set of postulates we don't actually know why anything is subject to quantum theory, so the assumption that everything, including the gravitational field, should be quantised seems highly provisional. On the other hand since the gravitational field can be identified with the 4-dimensional spacetime geometry, and since all matter is in spacetime, in this sense everything is covered by gravity.
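As a quick arithmetic check of the generator counts quoted above, the following minimal sketch uses the standard dimension formulas for the classical Lie algebras; the value for E8 is simply quoted, since the exceptional algebras do not follow such a closed-form family:

```python
# Dimensions of the rank-8 Lie algebras mentioned in the text.
# Standard formulas: dim su(n) = n^2 - 1, dim so(n) = n(n-1)/2,
# dim sp(2n) = n(2n+1); dim E8 = 248 is quoted from the literature.

def dim_su(n: int) -> int:
    return n * n - 1

def dim_so(n: int) -> int:
    return n * (n - 1) // 2

def dim_sp(two_n: int) -> int:
    n = two_n // 2
    return n * (2 * n + 1)

rank_8_algebras = {
    "A8 = su(9)":  dim_su(9),   # 80
    "B8 = so(17)": dim_so(17),  # 136
    "C8 = sp(16)": dim_sp(16),  # 136
    "D8 = so(16)": dim_so(16),  # 120
    "E8":          248,         # exceptional case, quoted
}

for name, dim in rank_8_algebras.items():
    print(f"{name}: {dim} generators")
```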
Rather than incorporating gravity under an all-embracing set of postulates of quantum theory, the present theory can be considered more as a generalisation of general relativity (as described in [12] opening of section 3 and alluded to here before equation 2). Here gravitation provides the source and explanation of the quantisation of all non-gravitational fields, through the local degeneracy in underlying field solutions for identifying the geometry of the external spacetime M4 itself, as described for equation 9 in section 3, with the standard machinery of QFT proposed to arise in the flat spacetime limit. From this point of view the motivation for constructing a consistent theory incorporating 'quantum gravity', and the technical difficulties that arise from the assumption that gravity should be quantised, are no longer a concern. This then marks another significant difference with the origins of string theory, for which the consistent quantisation of gravity is a central goal. The existence of 'gravitons' would of course be very difficult to empirically verify, other than perhaps through the proposal that such hypothetical quantum fluctuations in the gravitational field in the very early universe might be greatly amplified by an inflationary phase and observed today via signals of a classical cosmological gravitational wave background [52]. Theories of quantum gravity that imply a discrete or 'foamy' texture for spacetime itself on the Planck scale, such as loop quantum gravity, might also be experimentally probed [53]. Advances in technology have already in principle brought each of these potential signatures for quantised gravity or spacetime within reach, with constraints being placed in the absence of any clear signals [54,55]. Hence the picture that has emerged for this aspect of the present theory, in which the gravitational field is not quantised and spacetime is considered smooth down to arbitrary scales, is also testable, in the non-trivial sense of being in principle 'falsifiable' given the potential for observations to the contrary. In the meantime, while the data remains inconclusive, it is perhaps in any case worth exploring theoretical frameworks both for which gravity is and is not quantised. For the present theory, with standard QFT being an effective theory arising in the flat spacetime limit, it is also the case that quantum phenomena for non-gravitational fields in a highly curved spacetime will need to be understood, and may well differ from the predictions that have been obtained from formulations of QFT applied in such an acutely non-flat background environment. Indeed there are theoretical issues that remain to be resolved in modelling quantum particle phenomena for an extreme spacetime geometry (see for example [56]). For the new framework we will need to reassess fundamental questions such as how, and even whether, black holes radiate and lose mass, and the issues raised such as the 'information paradox'. The need for a coherent description of extreme gravitational regions, such as in the vicinity of the 'singularity' of a black hole or the Big Bang, for which classical general relativity alone is ultimately insufficient, and to correspondingly incorporate quantum phenomena in a theoretically consistent manner into this picture, then in principle provides a further ambitious theoretical test for this framework.
On the other hand, despite the differences, there is a significant overlap between the mathematical structures we have been led to for the present theory and elements of the mathematical formalism employed in frameworks such as string theory (as noted for example in [12] sections 2 and 6). While there remains a degree of debate over the merits of string theory as an ultimate unified theory (see for example [57,58]), diverse applications in physics and mathematics associated with a repurposing of string theory or M-theory have been identified. More generally, much of the mathematics literature that we have employed, including elements of [15,16,17,18,51] as alluded to here in the opening of section 3 and above, has been in part motivated by various developments in theoretical physics in recent years. Closer examination of the nature of these mathematical connections might in principle prove to be mutually beneficial in contributing to the understanding and development of both the present and other theoretical frameworks, in particular with the common goal of a complete unified theory in mind. For the present theory, while the simplicity of the basic underlying idea, expressed in the form of equation 4, and the non-trivial structural correspondence identified with the Standard Model, via equation 7 and table 1, provide a robust basis, other areas of the theory for which progress has been made, including the relation between gravity and quantum phenomena centring upon equation 9, are mathematically at a provisional stage requiring further development. In the meantime, while borrowing related mathematical structures from other theories, a significant contribution from the present framework is in identifying a means of establishing a firm and unambiguous conceptual foundation with a direct link to the empirical successes. This has been achieved through a change in perspective in placing the flow of time, and its possible multi-dimensional manifestations, at the heart of the theory.

Conclusions

In concluding we place this paper, the central theme of which has been the nature of the 'gestalt shift' in perspective from matter in spacetime towards a fundamental role for time, in the context of our earlier papers that have developed this theory. The technical details underlying the connections made between the theory and the Standard Model of particle physics, as outlined here for table 1, are described in ([9] chapters 6-9), as summarised in [10] and reviewed in [12], with further analysis and emphasis upon the predicted role for E8 in the latter reference. The question of the ultimate symmetry and specific structure of the corresponding full multi-dimensional form for time is an open one. In the previous section of this paper we have considered the high degree of symmetry exhibited by the exceptional Lie groups as a possible factor, and while other factors are described in the above papers a more complete understanding is desired. The plausibility of identifying an underlying explanation for 'quantisation' through equation 9 is explored in detail in ([9] chapters 10 and 11), through a close comparison with the canonical formalism of QFT, and further elaborated in ([11] subsection 5.3, [12] section 6). Progress has also been made in incorporating some of the existing geometrical techniques employed for Kaluza-Klein theories, as developed in ([9] chapters 2-5) and analysed further and more succinctly in [11], as alluded to here before equation 9.
Possible contributions to questions concerning the large scale structure of the universe, including potential 'dark sector' candidates deriving from the bottom line of table 1, are described in ([9] chapters 12 and 13) in the context of the standard model of cosmology, as summarised in ([12] towards the end of section 6). Overall the change in perspective emphasised in this paper has proved very fruitful, with the length of [9] in part reflecting the author's attempt to examine a wide range of the low-hanging fruit within reasonable reach. The prediction of the E8 symmetry of time further developed in [12] points towards an ambition of grasping one of the higher branches of the theory, as does the proposal of providing a consistent unified framework for gravity and quantum theory. Progress may be needed in both of these areas in the upper canopy of the theory in order to more fully reproduce high energy physics phenomena and make decisive predictions (see for example [9] section 15.2). In this paper we have returned to the roots of the theory in expanding upon its robust and firm foundations. Based simply upon the substructure of the one dimension of time alone, the most explicit success to date for this theory has been in the uncovering of several distinctive features of the Standard Model of particle physics, seen to emerge in a more direct and transparent manner than for models based upon the introduction of extra spatial dimensions. More generally, all branches of the present theory covered in [9,10,11,12] are directly relevant to the aim of accounting for structures of the physical world unfolding from the one-dimensional flow of time, with all areas under development and with open questions remaining, as we have attempted to describe in the papers. However, the observation that such a simple theory, based on such a simple idea, can have something to say about all of these corners of the empirical world on all scales is noteworthy for this proposed unification scheme. The adoption of this change in perspective on the universe, in placing time at the foundation of the theory, is then further justified by this broad range of applicability and potential for further advances.
Spatial Estimation of Thermal Indices in Urban Areas — Basics of the SkyHelios Model

Thermal perception and stress for humans can be best estimated based on appropriate indices. Sophisticated thermal indices, e.g., the Perceived Temperature (PT), the Universal Thermal Climate Index (UTCI), or the Physiologically Equivalent Temperature (PET), require the meteorological input parameters air temperature (T_a), vapour pressure (VP), wind speed (v), as well as the different short- and longwave radiation fluxes summarized as the mean radiant temperature (T_mrt). However, in complex urban environments, especially v and T_mrt are highly volatile in space. They can, thus, only be estimated by micro-scale models. One easy way to apply the model for the determination of thermal indices within urban environments is the advanced SkyHelios model. It is designed to estimate the sky view factor (SVF), sunshine duration, global radiation, wind speed, wind direction, and T_mrt considering reflections, as well as the three thermal indices PT, UTCI, and PET, spatially and temporally resolved with low computation time.

Introduction

In the era of climate change, a growing interest in information about the local impact of global climate change can be seen in politics, the public, as well as in science. To assess the local effect, many questions, e.g., about heat related mortality [1] and the urban heat island [2], have been worked on. Another important field of interest is the impact of the local morphology on thermal perception and stress of humans (e.g., [3-6]).

Thermal conditions for humans are not only dependent on air temperature (T_a), but also on moisture (e.g., in terms of vapor pressure (VP)), wind velocity (v) and the short- and longwave radiation fluxes [4-9]. While T_a and VP are rather homogeneous in space, v and the different radiation fluxes are severely modified by the environment and, thus, highly volatile in space [10]. This especially holds for very inhomogeneous environments like urban environments.

As more and more people live in urban areas and cities suffer the most from local effects of global warming, information is needed there the most. Such information is hard to measure and, thus, can only be derived by numeric modeling.

Thermal comfort at street level can be modified by urban planning through changes in building configuration [5,6,11], surface materials [12] and urban green [9,13]. To develop and assess adaptation measures, models are required which are both fast and easy to use, but comprehensive at the same time.

Currently, the ENVI-met model [14,15] is mostly applied for analyzing the thermal conditions in urban environments. While the prognostic model ENVI-met is quite comprehensive, it is not exactly easy to use. The model requires input in a specific format and some input parameters that are hardly available (e.g., the atmospheric conditions at 2500 m above ground level). Furthermore, ENVI-met is quite slow. Depending on model domain size and parameters, a simulation can require one day of computation time for only some hours of model time. As the domain size is limited to 250 × 250 × 40 grid cells, only rather small areas of interest can be investigated at a high resolution of, e.g., 1 × 1 m.
While there are more models with, among others, the intention of calculating urban thermal comfort available or under development (e.g., PALM-4U or UMEP/SOLWEIG [16,17]), none of them meet all of the criteria described above. Therefore, a new model needed to be developed. To meet the criteria, the diagnostic SkyHelios model, initially developed as a tool for the rapid estimation of the sky view factor and sunshine duration only [18], needed to be extended by a radiation model, a wind model, as well as routines for the calculation of thermal indices like the Perceived Temperature (PT) [19], the Universal Thermal Climate Index (UTCI) [20,21], or the Physiologically Equivalent Temperature (PET) [22]. The SkyHelios model together with these extensions is addressed as the "advanced SkyHelios model" in the following.

The purpose of this paper can be summarized in terms of four objectives:

• Provide a detailed and comprehensive description of the extensions to the diagnostic SkyHelios model,
• Describe the model's capabilities in the course of a small case study,
• Indicate opportunities and limitations analyzing the results,
• Identify the most suitable shading type for the reduction of heat stress.

Materials and Methods

The SkyHelios model is a micro-scale model for the calculation of the sky view factor (SVF) and shading in complex environments using the graphics engine MOGRE [18,23]. The model runs on 64-bit Windows machines.

SkyHelios supports many different spatial input file formats that may also be combined (mostly formats supported by the geospatial data abstraction library/openGIS simple features reference implementation GDAL/OGR [24]). However, some functionality can only be used with vector input files (e.g., polygon shapefiles (3D, or containing a height field) for buildings and point-feature shapefiles for trees). All spatial input should be projected in a metric projection, which needs to be specified by its spatial reference ID (SRID) number.

For modeling thermal conditions for humans in complex environments, apart from the SVF, a quite sophisticated radiation model as well as a wind model is required. Both parts are added to the SkyHelios model, forming the "advanced SkyHelios model", and are introduced in more detail below. T_a and VP are currently considered to be static in space by the advanced SkyHelios model. However, as the SkyHelios model is completely diagnostic, they are variable in time depending on the model input.

Radiation Modeling

The radiation calculations in SkyHelios are performed on a vector basis and can therefore be run for any point within the model domain. Most of the parametrizations used equal the ones used in the RayMan model [25,26]. The most relevant ones are described in the following section.

In contrast to most other models, the SkyHelios model does not consider the urban surroundings in terms of different surfaces, but in terms of pixels in the fisheye image (compare to Figure 1), weighted by the sine of the distance to the image's center. Other parameters, like the incident diffuse shortwave radiation at a specific location, are estimated for the upper hemisphere at once.

Sky View Factor

All radiation calculations in SkyHelios are based on the local sky view factor (SVF). The SVF is the fraction of the visible sky seen from a certain point ([27], p.
353). It is dimensionless and ranges from 0 to 1, where 0 means that the sky is totally covered by terrain or obstacles, while 1 stands for a free sky. In SkyHelios, first, a fisheye image is rendered for the current location and elevation by the graphics engine (e.g., Figure 1). The SVF is determined by distinguishing transparent and colored pixels in the image. Transparent pixels are counted as free sky, while all others are considered as covered by obstacles. As in the real environment the fisheye is a half-sphere, not all of the pixels should have the same influence on the SVF. Therefore, a dimensionless weighting factor ω_proj (Equation (1)) is used to consider the projection; it adjusts the impact of a pixel by the sine of the zenith angle ϕ (°):

ω_proj = sin(ϕ). (1)

This results in a spheric SVF. If the planar SVF is desired, another correction by ω_planar (dimensionless) needs to be performed (compare to Equation (2)); it increases the impact of objects close to the ground by the cosine of the elevation angle (counted from the ground to the top). The accuracy of the sky view factor calculations has been assessed by [28].

Shortwave Radiation

Global radiation (G) consists of the direct solar irradiation (I) and the diffuse shortwave irradiation (D) ([26,27], p. 14) (all energy flux densities in W/m²). Both of them are dependent on several parameters and therefore need to be modeled individually.

For a perfect clear sky condition (with no clouds and no horizon limitation), G can be calculated directly using Equation (3) proposed by [10,29,30]. In SkyHelios, Equation (3) is also used to derive an initial global radiation (G_0). Equation (3) requires the initial direct solar radiation I_0, an energy flux density in W/m², the solar zenith angle ϕ (°), the actual air pressure pr in hPa, as well as the one at sea level for a standard atmosphere (pr_0 = 1013 hPa), and the Linke turbidity factor T_L.

Direct Shortwave Irradiation

According to [29], I can be estimated as a function of I_0 (W/m²), ϕ (°), T_L (dimensionless), the relative optical air mass r_opt (dimensionless), the vertical optical thickness of a standard atmosphere δ_opt (dimensionless as well), pr in hPa, and cloud cover cc in octas (0 = clear sky to 8 = overcast sky; Equation (4)). This of course only holds for unshaded conditions. Under shaded conditions, I is usually assumed to equal 0 W/m². The relative optical air mass (r_opt) can be estimated by Equation (5) [31], in which β_s describes the solar elevation angle in °. Using β_s and r_opt, δ_opt can be estimated following an approach by [32] (Equation (6)).

Direct shortwave reflections by the surrounding obstacles are considered by evaluating the surfaces' lighting factor in terms of the blue channel of the fisheye image (compare to Figure 1). The reflection is estimated for any pixel in the fisheye as the blue color value (considering the orientation of the sun to the target surface) times the direct shortwave incident radiation, and is considered to be isotropic. The shortwave reflections are therefore not calculated directly as proposed by, e.g., [33-35], as this is found to be too time-consuming at run time, but abbreviated from the results of the graphics engine's scene lighting algorithm. The directional lighting of the scene thereby considers the orientation of any surface to the light source (the sun) as the cosine of the angle between the light source direction (a vector into the sun) and the surface's normal vector.
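To make the pixel-counting scheme for the SVF described above concrete, the following is a minimal sketch, assuming the rendered fisheye is available as a boolean sky mask together with a per-pixel zenith angle array (all names are illustrative, not the model's actual API):

```python
import numpy as np

def sky_view_factor(sky_mask: np.ndarray, zenith: np.ndarray,
                    planar: bool = False) -> float:
    """Pixel-counting SVF estimate for a rendered fisheye image.

    sky_mask -- True where a pixel shows free sky (transparent pixels)
    zenith   -- zenith angle of each pixel in radians (0 at the image centre)
    planar   -- additionally apply the planar correction described in the text
    """
    # Equation (1): weight every pixel by the sine of its zenith angle to
    # account for the half-sphere projection of the fisheye image.
    weights = np.sin(zenith)
    if planar:
        # Plausible reading of Equation (2): increase the impact of pixels
        # close to the horizon by the cosine of the elevation angle,
        # cos(pi/2 - zenith) = sin(zenith).
        weights = weights * np.sin(zenith)
    # Fraction of (weighted) pixels classified as free sky.
    return float(np.sum(weights * sky_mask) / np.sum(weights))
```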
Diffuse Shortwave Irradiation

According to [36], D can be calculated as the sum of isotropic (D_iso) and anisotropic diffuse radiation (D_aniso, both in W m⁻²). The isotropic part can be calculated by Equation (7), which requires the direct solar irradiation assuming a clear sky with no clouds, I_clear (W m⁻²). The anisotropic component D_aniso (W m⁻², Equation (8)) can be approximated by a similar equation if the sun is visible. For the case of the sun being covered by the horizon or obstacles, D_aniso becomes 0 W m⁻².

For non-clear-sky conditions, a linear correction according to [36] can be applied (Equation (9)). It considers the cloud cover (cc) in octas. For a completely covered sky (cc = 8/8), Ref. [36] proposes a simplified equation (Equation (10)): in that case, global radiation can be approximated by scaling the initial global radiation (G₀) by 0.28 and by the local sky view factor (Ψ_S) [36].

Shortwave diffuse reflections are considered in SkyHelios by estimating the diffuse incident radiation based on Equations (7)–(10) and scattering it isotropically, scaled by the surface's albedo.

Longwave Radiation

All surfaces emit longwave radiation according to the Stefan–Boltzmann law, which gives the longwave radiation flux density P_lw (W m⁻²) emitted by a perfectly black radiating surface (s) at a given surface temperature T_s (K). The Stefan–Boltzmann law is modified by including an emission coefficient ε_lw (dimensionless, Equation (11)) in order to apply to non-perfectly black surfaces (ε < 1.0), e.g., humans with an ε_lw of approximately 0.97 ([10,37], p. 151):

P_lw = ε_lw · σ · A_s · T_s⁴ (11)

Elements within the equation are the longwave radiation flux density P_lw in W m⁻², the Stefan–Boltzmann constant σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴, the radiating surface area A_s in m², and its surface temperature T_s in K. Equation (11) is the basis for the estimation of the longwave emissions both by the sky (P_lwA) and by the surroundings. The downward longwave radiation is calculated based on T_a and a virtual emissivity determined from VP, modified by cloud cover (cc) if available (Equation (12)). Finding the appropriate value for ε_lw can get very hard when it comes to longwave irradiation emitted by the atmosphere [38]. While empirical values have been available for quite some time [39–41], the determination of ε_lw for a cloudy sky remains quite complex [38,42–44].

The ε_lw of the current surrounding urban structures (buildings, plants) is derived from the object's color in the fisheye image (compare to Figure 1). The surface temperature is determined iteratively by evaluating the short- (direct and diffuse) and longwave radiative gains as well as the longwave emissions. The local T_a thereby serves as an initial guess. The soil heat flux is approximated to be 19% of the surface's energy balance Q if Q ≥ 0.0 W m⁻². For cases with Q ≤ 0.0 W m⁻², a negative ground heat flux of 0.32 · Q is considered (please refer to [26] for details).
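As a quick illustration of Equation (11) and the atmospheric term it feeds into, here is a minimal sketch. The Brunt-type clear-sky emissivity from vapour pressure and the simple cloud enhancement factor are assumptions standing in for the paper's Equation (12), which is not reproduced in this extract.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def longwave_emission(eps_lw: float, t_s_kelvin: float) -> float:
    """Longwave flux density (W m^-2) of a grey surface, Equation (11)
    per unit area (A_s = 1 m^2)."""
    return eps_lw * SIGMA * t_s_kelvin**4

def sky_longwave(t_a_kelvin: float, vp_hpa: float, cc_octas: float = 0.0) -> float:
    """Downward atmospheric longwave radiation (W m^-2).
    A Brunt-type emissivity and a simple cloud term are ASSUMED here;
    they stand in for the paper's Equation (12)."""
    eps_clear = 0.52 + 0.065 * vp_hpa**0.5                       # Brunt (1932)-type fit
    eps_sky = eps_clear * (1.0 + 0.22 * (cc_octas / 8.0) ** 2)   # assumed cloud term
    return longwave_emission(min(eps_sky, 1.0), t_a_kelvin)

# Example: a human body surface (eps ~ 0.97) at 35 degC, and a clear summer sky.
print(longwave_emission(0.97, 308.15))      # ~ 496 W m^-2
print(sky_longwave(301.35, vp_hpa=17.8))    # ~ 371 W m^-2
```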
Mean Radiant Temperature

The mean radiant temperature T_mrt (°C) is one of the most important input parameters for all sophisticated thermal indices applied in human-biometeorology. It is an equivalent surface temperature that summarizes the effect of all the different short- and longwave radiation fluxes [29,37,45,46]. T_mrt is defined as the surface temperature of a perfectly black, isothermal surrounding environment that leads to the same energy balance as the current environment [10,37,47,48].

By including the shortwave radiation flux density P_sw,s (W m⁻²), calculated from the diffuse solar irradiation and the diffusely reflected global radiation D_s (W m⁻²) multiplied by the shortwave absorption coefficient α_abs,s (1.0 − albedo), into Equation (11), the total radiation flux density P_s (W m⁻²) to and from a surface s, e.g., the human body, can be calculated (Equation (13)). Dividing the environment of a person p into a number n of isothermal surfaces i, and considering a projection factor Pr to correct for the relative surface sizes of p and s as well as the clothing clo of p, T_mrt can be calculated following the principle of equal radiation fluxes caused by the actual and the reference environment (Equation (14)). Solving Equation (14) for T_mrt results in Equation (15), which is directly applicable in numerical micro-scale models. Most numerical models in urban biometeorology, however, use further simplifications [17,25,26].
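The following sketch shows the general form such a solution takes: the longwave and absorbed shortwave contributions of n surrounding surfaces are view-factor-weighted, summed, and inverted through the Stefan–Boltzmann law. This is a generic textbook formulation rather than the paper's exact Equations (13)–(15); the body emissivity (0.97), the shortwave absorption coefficient (0.7), and the example numbers are assumptions.

```python
SIGMA = 5.67e-8   # W m^-2 K^-4
EPS_P = 0.97      # assumed longwave emissivity of the human body
ALPHA_K = 0.7     # assumed shortwave absorption coefficient of the body

def mean_radiant_temperature(surfaces):
    """Generic Tmrt (degC) from a list of surrounding isothermal surfaces.

    Each surface is (weight, eps_lw, t_s_kelvin, k_sw) where `weight` is its
    angle/view factor (weights should sum to 1) and k_sw is the shortwave
    flux density (W m^-2) arriving from that direction.
    """
    total = 0.0
    for weight, eps_lw, t_s, k_sw in surfaces:
        longwave = eps_lw * SIGMA * t_s**4      # emitted by surface i
        shortwave = (ALPHA_K / EPS_P) * k_sw    # absorbed solar part
        total += weight * (longwave + shortwave)
    return (total / SIGMA) ** 0.25 - 273.15

# Example: half sky (cold, with diffuse + direct sun), half sunlit walls/ground.
print(mean_radiant_temperature([
    (0.5, 1.00, 263.0, 250.0),   # sky hemisphere
    (0.5, 0.95, 315.0, 50.0),    # warm surrounding surfaces
]))  # ~ 35 degC
```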
Wind Modeling

Wind data is estimated in SkyHelios by a diagnostic wind model based on the principles by [50]. It considers up to four stream modifications for each individual obstacle. In agreement with [50], an upwind stagnation zone, a downwind recirculation zone, a downwind velocity deficit zone, as well as a street canyon vortex can be considered. However, updated parametrizations are implemented to allow for improved precision.

Upwind Stagnation Zone

For most obstacles, an upwind stagnation zone according to [51] is considered. The maximum windward extension is determined by Equation (16), where L_f describes the maximum windward extension of the stagnation zone, h is the obstacle's height, and wi stands for the obstacle's width (all units in meters). Ref. [51] also introduces a modified power-law profile to determine the average wind speed component ū (m/s) at any vertical level z within the front eddy zone, containing a factor that reduces the wind component perpendicular to the obstacle (compare to Equation (17)). In addition, a vortex zone was included in the front eddy zone [51]. Its streamwise length L_fv (m) is determined by Equation (18). The ellipsoidal vortex zone is then filled by a parametrization gathered from fitting experimental wind-tunnel data [51]. The trigonometric Equations (19) and (20) are used to calculate the average vortex components in the horizontal (ū, Equation (19)) and vertical (w̄, Equation (20), both in m/s) directions; l_v and h_v are defined as the current length and height of the vortex in meters.

Downwind Recirculation Zone

Ref. [52] introduced improved parametrizations for the velocity deficit zone in the lee of an obstacle. The shelter model by [52], which is also adopted in the QUIC-URB model, calculates a Gaussian velocity deficit pattern using Equations (21)–(23), where u_d represents the velocity deficit in the lee of an obstacle in m/s; x, y, and z are the stream coordinates in the x-, y-, and z-directions; W is the width and H the height of an obstacle in m; U(H) is the mean wind speed at the top of the obstacle in m/s, based on the upstream power-law profile; and C_D is the drag coefficient. Γ is defined as 0.6 · c_a². The similarity coordinates η and ζ, as well as the vertical coefficients c_a and a_g, are calculated according to Equations (24)–(27); κ in Equation (26) is the von Kármán constant of 0.4. The main advantage of the integration of the shelter model by [52] is that the streamwise velocity deficits can be calculated more accurately than by the original parametrization [53].

Street Canyon Parametrization

In response to the overestimation of the width of a street canyon vortex by the original parametrization (compare to [54]), a modified street canyon model was implemented in SkyHelios. One significant difference from the original parametrization according to [50] is the transition zone at the ends of the vortex, formed by vertical wedges. The vertical wedges are modified after Equation (28), where d_sc represents the distance of a point to the upwind obstacle (in meters) and u_rt the wind speed component at rooftop height (m/s). In addition, the criterion to detect a street canyon has been modified according to [54]. It is now based on the length of the upstream obstacle's recirculation zone (L_R). Whether a street canyon is set or not is distinguished according to Equation (29), where l represents the obstacle's streamwise length. For the central part of a street canyon, Ref. [54] proposes a streamwise speed modification according to Equation (30), which calculates the street canyon modification from the distance to the next wall d_nw, the street canyon width d_SCw (both in meters), and the function F_SC stated by Equation (31). The only new parameter in Equation (31) is the crosswind distance to the street canyon's center, d_SCccw, in m. For further details on the parametrizations, please see [55].

The wind field calculated by the given functions most likely contains some divergence. Assuming incompressible air, this divergence has to be minimized in order to obtain a valid wind field. Mathematically, this is performed by minimizing the functional for the scalar H in Equation (32), where α_h and α_v are the horizontal and vertical stability factors in s/m; u, v, and w are the stream components in m/s; u₀, v₀, and w₀ represent the initial stream components in m/s; and dx, dy, and dz are the grid spacings in m.
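To illustrate what this mass-consistency step does, here is a toy two-dimensional sketch: the empirically composed wind field is projected toward zero divergence by solving a Poisson equation for a pressure-like scalar and correcting the components with its gradient. This is a generic variational illustration with equal weighting of all components, not the solver actually used in SkyHelios; the grid, sweep count, and example wake patch are arbitrary assumptions.

```python
import numpy as np

def project_divergence_free(u0, v0, h=1.0, sweeps=400):
    """Toy 2-D projection of an initial wind field (u0, v0) onto a
    (nearly) divergence-free field: solve laplace(phi) = div(u0, v0) with
    Jacobi sweeps, then subtract grad(phi) -- the same idea as minimizing
    the functional in Equation (32), here with equal component weights."""
    u, v = u0.copy(), v0.copy()
    div = np.zeros_like(u)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2])
                       + (v[2:, 1:-1] - v[:-2, 1:-1])) / (2.0 * h)
    phi = np.zeros_like(u)
    for _ in range(sweeps):  # Jacobi iteration for the Poisson equation
        phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, 2:] + phi[1:-1, :-2]
                                  + phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  - h * h * div[1:-1, 1:-1])
    # correct the components with the gradient of the scalar field
    u[1:-1, 1:-1] -= (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2.0 * h)
    v[1:-1, 1:-1] -= (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2.0 * h)
    return u, v

# Example: uniform 2.4 m/s westerly flow with a crude wake patch behind an obstacle.
u0 = np.full((40, 40), 2.4)
v0 = np.zeros((40, 40))
u0[15:25, 15:25] *= 0.3
u, v = project_divergence_free(u0, v0)
```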
Area of Interest and Data

The advanced SkyHelios model is tested for two study areas in Freiburg, southwest Germany (approximately 47°59′ N, 7°51′ E). The Place of the Old Synagogue is located westwards of the inner city of Freiburg, between the main university buildings I and II (KG I and KG II), the university library, and the theater (compare to Figure 2, bottom left). The Institutes Quarter is a city quarter north of the city center, mainly consisting of institute buildings of the Albert-Ludwigs-University Freiburg. It covers an area of approximately 700 m by 500 m (compare to Figure 2, top right).

At the Place of the Old Synagogue, some tree information is also available. The point-feature shapefile consists of 21 trees. To test the radiation calculation capabilities of the SkyHelios model, the attribute fields "opticalDen", "albedo", and "emissivity" (for the optical density, surface albedo, and surface emission coefficient) have been assigned arbitrary values over a wide range. For example, the optical density ranges from 0.06 to 1.0, where 0 is transparent and 1 is opaque.

The urban climate station Freiburg, a meteorological background station [56] at the top of a high-rise building within the Institutes Quarter, provides data that can be used as a roof-top reference. The station's records cover a 13-year period from 1 September 1999 to 30 April 2013 in hourly resolution. The parameters used in this study are air temperature (T_a) in °C, vapour pressure (VP) in hPa, global radiation (G) in W m⁻², wind speed (v) in m/s, and wind direction (WD) in °. To keep it short, only results for 1 July 2008 at 1:30 p.m. local standard time are presented here. The 1st of July 2008 was a clear summer day with a T_a of 28.2 °C, a vapour pressure of 17.8 hPa, an incident wind speed of 2.4 m/s at 10 m height from 244°, and a global radiation of 887 W m⁻² at 1:30 p.m.

Results and Discussion

Although many results were generated during this study, only some of them are presented here. However, the selected results are sufficient for the identification of the most appropriate shading type in terms of the reduction of heat stress, as well as for demonstrating the capabilities of the advanced SkyHelios model. As a spatial result, the example of PET at a height of 1.1 m on 1 July 2008 at 1:30 p.m. LST (Figure 3) was selected. The index is strongly influenced by both radiation and wind speed [6,57] and thus shows the impact of both.

Looking at Figure 3, the strongest reduction of PET, by approximately 8.0 °C (compared to a PET of 38.3 °C at undisturbed locations), can be found in locations shaded by solid obstacles (mostly buildings). This is mainly caused by a reduction in T_mrt of almost 22.5 °C due to the obstacles blocking direct solar radiation. Global radiation is reduced by around 650 W m⁻². This is in quite good agreement with studies for summer days in Freiburg, e.g., [58]. Obstacles with transparency cause a smaller reduction. For example, the trees with an optical density of 0.8 reduce PET by approximately 6.3 °C; those with an optical density of 0.6 cause a reduction of 5.4 °C. Global radiation is reduced from 857 W m⁻² at unshaded locations to 393 W m⁻² and 334 W m⁻² at these locations, respectively. The effect on PET thereby strongly depends on wind speed. During a measurement campaign in August 2007, a measuring point below a tree was found to be 4.6 °C (PET) cooler on average than one not located below a tree [59]. However, as the shortwave transmissivity of the tree is unknown, wind speed was slightly higher, and radiation fluxes within complex urban environments are generally quite inhomogeneous in space, the results are hard to compare. A PET reduction of 3.0 to almost 10.0 °C was also found when comparing a measurement point under a tree with four other ones without trees for 2 August 2001 in Freiburg [25]. However, that study applied the RayMan model, which does not account for transparency. In a significantly warmer setting, Ref. [60] found a reduction in PET of up to 15.6 °C due to (solid) trees for a summer case in Lisbon. The dependence of the reduction on the general thermal conditions is indicated by the maximum reduction found for winter conditions, which was 2.7 °C in the same study [60]. Wind speed within the whole model domain was quite low at 1:30 p.m.
on 1 July 2008, with only 2.4 m/s at a height of 10 m above ground level. This causes even lower wind speeds at the pedestrian level (1.1 m above ground), where wind speed was reduced to 0.9 m/s on (spatial, arithmetic) average. However, this comprises wind speeds at locations without obstacle input (in the northwest and in the southeast of the combined domain), where wind speed is approximately 1.4 m/s throughout. Wind speed is therefore mostly found to be lower close to the obstacles, but can also be increased, due to corner flow and channeling, to up to 3.8 m/s in some locations. It thereby needs to be noted that wind speed (at the height of 1.1 m) is only slightly decreased by most of the trees. Only the small trees (e.g., in the center of the Place of the Old Synagogue) cause a stronger reduction in wind speed of around 0.1 m/s at that level. However, in some locations, an increase by the same amount can be found due to the air current avoiding the tree crown.

Increased wind speed at building corners reduces PET by up to 2.0 °C in the results. Stagnation caused by buildings, on the other hand, leads to an increase in PET of up to 15.4 °C in this example. Both effects are clearly present in the results. However, the reduction of PET by increased wind speed is only found in quite small areas, while low wind speed increases PET in rather large areas (almost all the red areas in Figure 3 are caused by low wind speed).

In spite of the size and the high resolution of the model domain, the model run time of almost one hour on a below-average machine can be considered quite fast (e.g., compared to the prognostic model ENVI-met [15]). However, to the authors' knowledge, there is currently no other model available that can calculate similar results for a domain size and resolution as considered in this study. The model performance in terms of computation time can therefore hardly be compared.

The low computational effort is partly due to utilizing the graphics hardware, but also due to the diagnostic model design that allows transport equations to be avoided. This comes at the price of unknown previous conditions: e.g., ground or wall heat storage can only be parameterized assuming rather constant conditions. This will, e.g., lead to an underestimation of the surface temperature in the time after sunset [26].

Some parts of the advanced SkyHelios model have been validated in the past (e.g., the SVF model [28]). Other parts equal those of the very well validated RayMan model (e.g., the radiation model without the reflections part [25,26]). Furthermore, the results generated by the advanced SkyHelios model are generally plausible and in basic agreement with findings by other studies (e.g., [6,61–63]). However, further validation of the whole model and comparison to on-site measurements is required to assess the model's accuracy.
Conclusions

The results show that heat stress for humans in urban areas can best be prevented by providing shading while, at the same time, not reducing wind speed too much. While buildings are solid obstacles in terms of radiation and therefore can provide more comfortable conditions in their shade, they are solid obstacles to the wind at the same time. This mostly causes a reduction in wind speed, leading to more uncomfortable conditions. The most comfortable conditions can be found below the trees. They provide shade while hardly causing wind speed reductions at the pedestrian level. This of course only holds as long as they are large enough (the small trees in the center of the Place of the Old Synagogue hardly cast any shadow, but cause wind sheltering, as their crowns are only slightly above the pedestrian level) and as long as their crowns are dense enough. The most suitable shading type is therefore found to be provided by individual large trees with dense crowns.

The advanced SkyHelios model is capable of estimating wind speed and direction, the mean radiant temperature, as well as the thermal indices PT, UTCI, and PET for large areas of interest in high resolution, as shown by the results. This allows larger areas of interest to be analyzed in more detail using average office computers, which makes it a very valuable tool for all users working on spatial and temporal dimensions in the field of human thermal biometeorology.

Figure 1. Fisheye image showing the upper hemisphere with trees and buildings as generated by the SkyHelios model in production mode. The colors and opacity correspond to the different shortwave albedo, longwave emissivity, and direct radiation factors of the surfaces (including direct shortwave reflections). The checkerboard background was added to visualize the objects' opacity.

Figure 2. Screenshot of the SkyHelios main window showing a combined model domain consisting of two areas of interest in Freiburg, southwest Germany: the "Institutes Quarter" and the "Place of the Old Synagogue".

Figure 3. Physiologically Equivalent Temperature (PET) on 1 July 2008 at 1:30 p.m. at a height of 1.1 m above ground level. The calculations consider both the "Institutes Quarter" (upper right) and the "Place of the Old Synagogue" (lower left) together in one large area of interest.
Effects of Y-27632, a Rho-associated Kinase Inhibitor, on Human Corneal Endothelial Cells Cultured by Isolating Human Corneal Endothelial Progenitor Cells

Purpose: Human corneal endothelial progenitor cells (HCEPs), which can be selectively isolated and differentiated into human corneal endothelial cells (HCECs), are crucial for repairing corneal endothelial damage. In this study, we evaluated the roles of a Rho-associated kinase (ROCK) inhibitor, Y-27632, in the isolation and expansion of HCEPs, and assessed the in vitro effects of different concentrations of Y-27632 on the differentiated HCEPs. Methods: HCEPs were isolated and expanded in a medium with and without 10μM Y-27632, and then differentiated into HCECs in a medium with fetal bovine serum. The characteristics of HCEPs and differentiated HCEPs were confirmed by immunofluorescence staining. The proliferation, viability, morphology, and wound-healing ability of differentiated HCEPs were assessed in the presence of different concentrations of Y-27632. Results: Y-27632 enabled the isolation and expansion of HCEPs from the corneal endothelium. The differentiated HCEPs showed an optimal increase in proliferation and survival in the presence of 10μM Y-27632. As the concentration of Y-27632 increased, differentiated HCEPs became elongated, and actin filaments were redistributed to the periphery of the cells. Y-27632 also caused a concentration-dependent enhancement in the wound-healing ability of differentiated HCEPs. Conclusions: Y-27632 enabled the isolation and expansion of HCEPs. It also enhanced the proliferation, viability, and migration of differentiated HCEPs.

Human corneal endothelial cells (HCECs) have limited proliferative potential in vivo [1,2]; therefore, damage to CECs leads to irreversible endothelial dysfunction and corneal edema. The conventional treatment of severe corneal endothelial dysfunction is a corneal transplant, consisting of either the entire corneal layer or only the posterior lamellar corneal layer, derived from donated corneal tissue. To overcome the shortage of donor corneas, transplantation of cultivated HCECs has been suggested as an alternative treatment [3-9]. Many groups have successfully cultured and amplified HCECs in vitro, and treated animal models with corneal endothelial dysfunction [10,11]. However, there are no defined protocols for the clinical application of cultured HCECs. In addition, current corneal endothelial engineering has restrictions, such as limited proliferative ability, fibroblastic transformation, and cellular senescence [12-18]. Several studies have reported the presence of human corneal endothelial progenitor cells (HCEPs) [19-23]. Hara et al. [24] established a method for culturing HCEPs and differentiating them into HCECs. They suggested that, unlike conventional HCEC culture, HCEPs could be selectively expanded with high proliferative potency and used to generate transplantable CEC sheets [24]. Using progenitor cells in this manner could help overcome the limitations of corneal endothelial engineering. Recently, cultured CECs were injected into the anterior chamber of the eye along with an inhibitor of Rho-associated kinase (ROCK) [18]. ROCK inhibition enhanced cell engraftment at the posterior corneal layer, enabling the cell-based treatment of corneal endothelial dysfunction [18,25].
This was consistent with many reports demonstrating that the inhibition of ROCK signaling decreased apoptosis and increased proliferation and cellular adhesion in CECs cultured by various methods [12,18,26]. The aim of this study was to evaluate the role of the ROCK inhibitor, Y-27632, in the isolation and expansion of HCEPs, and to assess the in vitro effects of different concentrations of Y-27632 on the differentiated HCEPs. The results of the study might contribute to cell injection treatments with differentiated HCEPs for corneal endothelial dysfunction.

Ethical statements

The study protocol was approved by the Institutional Review Board of Inha University Hospital (No. 2016-05-018) and adhered to the guidelines of the Declaration of Helsinki. Written informed consent was obtained from the next of kin of all deceased donors. Ten human peripheral corneal tissues from eight donors (mean donor age, 54.6 ± 19.4 years) were obtained by trephination with 7.0-mm trephines and stored in storage medium (OptiSol-GS, Bausch & Lomb) at 4°C.

Isolation and expansion of HCEPs

HCEPs were isolated and cultured as previously described by Hara et al. [24]. Briefly, Descemet's membranes with HCECs were stripped from the human corneas using sterile surgical forceps. The tissue was transferred to an enzyme cell detachment medium (Accutase, Life Technologies) at 37°C for 30 minutes, and centrifuged at 15,000 rpm for 5 minutes. The cells were seeded at a density of 100 to 300 cells/cm² onto culture plates coated with 20 μg/mL laminin-511 (BioLamina). The culture medium comprised Dulbecco's modified Eagle's medium/nutrient mixture F-12 (DMEM/F12), supplemented with 20% knockout serum replacement (KSR), 2mM L-glutamine, 1% nonessential amino acids, 100μM 2-mercaptoethanol, 50 U/mL penicillin G, and 50 μg/mL streptomycin (all from Life Technologies), along with 4 ng/mL basic fibroblast growth factor (bFGF) and 10μM Y-27632 (both from Wako Pure Chemical Industries). Paired corneas from two donors were used to compare the expansion of HCEPs with and without 10μM Y-27632. HCEPs were cultured in a humidified atmosphere with 5% CO2 at 37°C, and the culture medium was changed every 2 to 3 days. When the cells reached confluence, they were harvested with Accutase and passaged at ratios of 1:2 to 1:4.

Differentiation of HCEPs into differentiated HCEPs

HCEPs were differentiated into HCECs on culture plates coated with a fibronectin, collagen, and albumin coating mix (AthenaES). The differentiation medium was composed of low-glucose DMEM with 10% fetal bovine serum, 50 U/mL penicillin G, and 50 μg/mL streptomycin (all from Life Technologies). The differentiated HCEPs were cultured in a humidified atmosphere with 5% CO2 at 37°C. When the cells reached 70% confluency, they were harvested with 0.05% Trypsin-EDTA (Life Technologies) and passaged at a ratio of 1:4. The differentiated cells were viewed using an optical microscope (Olympus CKX41, Olympus).

Immunofluorescence staining for characterization of the HCEPs and the differentiated HCEPs

The HCEPs and differentiated HCEPs were cultured on 8-well culture slides (BD Biosciences) for immunofluorescence staining. They were fixed with 80% acetone at -20°C for 10 minutes, and nonspecific absorption was blocked with an antibody diluent solution (Life Technologies) at 37°C for 20 minutes.
The cells were incubated overnight at 4°C with the primary antibodies against p75 neurotrophin receptor (p75NTR; 1:100, Merck Millipore), SOX9 (1:100, Abcam), ZO-1 (1:100, Life Technologies), and Na+/K+-ATPase (1:100, Merck Millipore) in an antibody diluent solution. They were washed three times in phosphate-buffered saline with 0.1% Tween 20 (PBS-T). The cells were then incubated for 2 hours in a 1:200 dilution of rhodamine-labeled goat anti-mouse immunoglobulin G and human-serum-absorbed fluorescein-labeled goat anti-rabbit immunoglobulin G (both from KPL), and again washed three times in PBS-T in the dark. To stain the nuclei, the cells were stained with 4',6-diamidino-2-phenylindole, dihydrochloride (DAPI; Thermo) for 5 minutes. After mounting, the cells were observed using a fluorescence microscope (Olympus BX43, Olympus), and images were processed using the ISCapture Professional Imaging Software (Tucsen).

Ki67 immunofluorescence staining of differentiated HCEPs

The differentiated HCEPs were cultured on 8-well culture slides (BD Biosciences) with and without 10μM Y-27632 for 24 hours, then fixed and incubated overnight at 4°C with purified mouse anti-Ki67 antibodies (1:100, BD Biosciences). The cells were then incubated with goat anti-mouse secondary antibodies for 2 hours as described in the previous section, and washed three times in PBS-T in the dark. The nuclei were stained with DAPI, and the cells were visualized as previously described.

Cell viability assay of differentiated HCEPs treated with Y-27632

The differentiated HCEPs were plated at a density of 2,000 cells per well on 96-well plates in a medium containing 0μM, 5μM, 10μM, or 30μM Y-27632 for 24 hours. The number of viable differentiated HCEPs was determined using the cell counting kit-8 (Dojindo Molecular Technologies Inc). Absorbance was measured at 450 nm to determine cell viability in each well using a Universal Microplate Reader ELx800G (BioTek Instruments Inc).

Morphological assessment of differentiated HCEPs treated with Y-27632

The differentiated HCEPs were stained with immunofluorescent actin to evaluate their morphological changes when treated with different concentrations of Y-27632. The cells were cultured on 8-well culture slides with 0μM, 5μM, 10μM, or 30μM Y-27632 for 24 hours. The cells were fixed and incubated overnight at 4°C in a 1:200 dilution of Alexa Fluor 594-conjugated phalloidin (Life Technologies), and washed three times with PBS-T. After counterstaining with DAPI, the cells were visualized with a fluorescence microscope as described in the previous sections.

Wound-healing assessment of differentiated HCEPs treated with Y-27632

The differentiated HCEPs were cultured until confluence in 60-mm culture dishes and scraped with a 200-μL plastic pipette tip. The unattached cells were washed away in PBS before the remaining cells were incubated in media with 0μM, 5μM, 10μM, or 30μM Y-27632. The extent of the wound was determined by estimating the area between cells at the two opposite edges of the defect. Images of migrating cells were obtained using an optical microscope, and relative wound sizes were analyzed using the ImageJ software (US National Institutes of Health) after 0, 6, 24, and 48 hours of incubation.

Statistical analysis

All the data are represented as mean ± standard deviation. Statistical significance was determined using Student's t-test for single comparisons and one-way analysis of variance (one-way ANOVA), followed by Tukey's test for multiple comparisons (a sketch of this workflow is shown below).
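As a concrete illustration of the analysis pipeline just described, the sketch below runs a two-group comparison, a one-way ANOVA across the four Y-27632 concentrations, and Tukey's post-hoc test. The numbers are placeholder wound-closure values, not the study's actual data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder wound-closure rates (%) per Y-27632 concentration (not real data)
groups = {
    "0uM":  [31, 28, 35, 30, 33],
    "5uM":  [38, 41, 36, 40, 37],
    "10uM": [52, 49, 55, 51, 54],
    "30uM": [47, 50, 44, 48, 46],
}

# Student's t-test for a single comparison (e.g., control vs 10 uM)
t, p = stats.ttest_ind(groups["0uM"], groups["10uM"])
print(f"t-test 0 vs 10 uM: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across all concentrations
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Tukey's test for multiple comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```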
A p-value of <0.05 was considered statistically significant.

ROCK inhibitor Y-27632 enables HCEP isolation and expansion

A comparative study was performed on paired corneas from two donors (48 and 46 years old) to evaluate the role of Y-27632 in the isolation and expansion of HCEPs. When 10μM Y-27632 was added, HCEPs from both donors were successfully isolated and expanded on a laminin-511-coated dish, while HCEPs were not observed in the absence of Y-27632, even after 30 days of culture (Fig. 1A, 1B). The cells had a bipolar, spindle-shaped morphology, and expressed the neural crest markers p75NTR and SOX9, which are known markers of HCEPs [24]. HCEPs from the corneas of eight donors were subcultured for several passages (mean number of passages, 4.8 ± 1.8) (Fig. 2A, 2B).

Differentiation of HCEPs into HCECs and characterization of differentiated HCEPs

We confirmed that HCEPs at low passages (below three passages) differentiated into HCECs in the differentiation medium. The morphology of differentiated HCEPs changed from bipolar and spindle-shaped to confluent and hexagonal through the passages (mean number of passages, 7.8 ± 2.1) (Fig. 3A). In addition, the differentiated HCEPs displayed the characteristic expression of ZO-1 and Na+/K+-ATPase in the plasma membrane (Fig. 3B), and this was confirmed by Western blotting (Fig. 3C).

Effects of Y-27632 on the differentiated HCEPs

We used the differentiated HCEPs from passages 2 to 7, in which the expression of ZO-1 and Na+/K+-ATPase was confirmed.

1) Proliferation assessment of differentiated HCEPs treated with Y-27632

To determine the effects of Y-27632 on the proliferation of differentiated HCEPs, we used Ki67 immunostaining as a reliable marker of cell proliferation. The differentiated HCEPs treated with 10μM Y-27632 had significantly more Ki67-positive cells than the untreated control cells (p = 0.028) (Fig. 4A, 4B).

2) Cell viability assay of differentiated HCEPs treated with Y-27632

To determine the optimal concentration of Y-27632 required for obtaining viable differentiated HCEPs, we performed a cell viability assay after a 24-hour exposure to different concentrations of Y-27632. The number of adherent and viable differentiated HCEPs increased as the Y-27632 concentration increased from 0μM to 10μM, then decreased at a 30μM concentration (p = 0.003). At 10μM, viability was significantly higher than that in the untreated control (p = 0.006) (Fig. 5).

3) Morphological assessment of differentiated HCEPs treated with Y-27632

The effect of Y-27632 on the morphology of differentiated HCEPs was examined by immunostaining with phalloidin, which is used to assess the distribution pattern of actin filaments. As the concentration of Y-27632 increased, the differentiated HCEPs lost their polygonal shape and became more elongated. The differentiated HCEPs remained polygonal at concentrations of Y-27632 below 10μM. With 30μM Y-27632, a large proportion of differentiated HCEPs showed the elongated morphology. The actin filaments were also altered, changing from a central radial distribution into a peripheral distribution along the plasma membrane (Fig. 6).

4) Wound-healing assessment of differentiated HCEPs treated with Y-27632

We observed a concentration-dependent enhancement in the wound-healing ability of differentiated HCEPs treated with Y-27632. In response to the wound, differentiated HCEPs became elongated and fibroblast-like in appearance, and began to migrate from the scraping edge.
At both 24 and 48 hours, after treatment with 10μM or 30μM Y-27632, statistically significant increases in the wound closure rate were observed compared to the untreated control (at 24 hours, p = 0.005 and p < 0.001; at 48 hours, p = 0.005 and p = 0.018) (Fig. 7A, 7B).

In this study, we isolated and expanded HCEPs from peripheral corneas using KSR-based serum-free media supplemented with bFGF and laminin-511 [24], which have been used to maintain undifferentiated cells such as human embryonic stem cells and mesenchymal stem cells [24,28,29]. Laminin-511 was considered important for isolating and expanding HCEPs, because the substrate has been used in the serum-free culture of human embryonic stem cells and is present in the corneal Descemet's membrane [24,30,31]. We cultured HCEPs using a protocol previously described by Hara et al. [24], who briefly treated the stripped Descemet's membranes with Y-27632 at the initial stage only, not throughout the culture period. Furthermore, they did not study the effects of Y-27632 on the culture method. In our study, we demonstrated that Y-27632 was necessary for the isolation and expansion of HCEPs. This finding was consistent with the results of many previous reports that demonstrated a significant increase in the adhesion of HCECs cultured with Y-27632 [12,32]. In one study, the addition of Y-27632 to a dual-media culture system resulted in a twofold to threefold higher yield of CECs [33]. ROCK signaling is known to be involved in cell adhesion, morphogenesis, migration, and cell cycle progression [34]. The selective ROCK inhibitor Y-27632, which is associated with increased adhesion and enhancement of actomyosin contractility, has been used for the in vitro culture of CECs in corneal endothelial regenerative medicine for several years [12,32-34]. Our results showed that bipolar, spindle-shaped HCEPs expressed p75NTR and SOX9, which are markers of neural crest cells [35,36]. Hara et al. [24] reported that HCEPs partially retained the properties of the neural crest and periocular mesenchyme and showed high expression of p75NTR, SOX9, and FOXC2. Another study reported that bovine corneal endothelial precursor cells, isolated with a sphere-forming assay, expressed the neural crest stem cell marker nestin, and that these cells had the potential to differentiate into CECs [19,24]. In our study, HCEPs also differentiated into hexagonal HCECs in a medium with fetal bovine serum. Thus, cultivating HCECs from isolated HCEPs with high proliferative potency could be a novel source of cells for treating corneal endothelial dysfunction, as demonstrated by the results of our study as well as those of previous reports [19,24]. In a recent study, CECs cultivated with Y-27632 were directly injected into the anterior chamber of the eye, and this turned out to be a significant success in corneal regenerative medicine [18]. The ROCK inhibitor Y-27632 increased the adhesion of injected cells onto the recipient cornea without any substrate, and thus helped in treating corneal endothelial dysfunction in animal models [12,18,34]. Therefore, as a first step towards the use of HCECs cultivated by isolating HCEPs for cell injection treatments, we evaluated the in vitro effects of Y-27632 on these HCECs, with the aim of determining the optimal concentration of Y-27632. Consistent with previous findings [12,26,34,37,38], our results showed that ROCK inhibition with 10μM Y-27632 promoted the proliferation and survival of HCECs.
ROCK inhibitors are thought to control the expression of cyclin D and p27 via PI 3-kinase signaling, thus promoting CEC proliferation [26]. Previous studies have revealed the role of ROCK signaling in the regulation of proapoptotic and antiapoptotic signaling, as well as cell survival signaling [39,40]. However, further research is needed to confirm whether Y-27632 treatment is sufficient to induce HCEC proliferation, as other studies have demonstrated conflicting results [32]. In this study, Y-27632 caused a concentration-dependent change in HCEC morphology: the cells became elongated, and actin filaments were redistributed to the periphery. Y-27632 also enhanced wound healing via cell migration and morphological changes in HCECs, which was consistent with the results of previous studies [41,42]. The cell migration involved membrane protrusion through cytoskeleton modification. Pipparelli et al. [32] revealed that the inhibition of ROCK signaling resulted in morphological changes in HCECs, characterized by a loss of their polygonal shape and a remodeling of the cytoskeleton, as demonstrated by the redistribution of actin to the cell periphery. They proposed that the mechanism behind the enhanced wound-healing effect of Y-27632 was associated with the regulation of the actin cytoskeleton. Most studies have used concentrations of around 10μM to 30μM of Y-27632 for the culture of CECs [32,33]; the optimal Y-27632 concentration is yet to be determined. With 30μM Y-27632, we observed a slight decrease in the cell survival and wound healing of HCECs, compared to those of the HCECs treated with 10μM Y-27632. Peh et al. [33] showed a dose-dependent decrease in the attachment strength after CECs were exposed to more than 30μM of Y-27632. Therefore, we suggest 10μM as the optimal concentration of Y-27632 for HCECs cultured by isolating HCEPs. In conclusion, our results showed that the ROCK inhibitor Y-27632 enabled the isolation and expansion of HCEPs from peripheral corneal tissue. We observed optimal proliferation and survival of differentiated HCEPs with 10μM Y-27632. The ROCK inhibitor also induced morphological changes and enhanced wound healing in differentiated HCEPs. Further studies are required to evaluate the therapeutic effects of injecting transplantable differentiated HCEPs with Y-27632 in vivo.
Short- and long-lasting behavioral and neurochemical adaptations: relationship with patterns of cocaine administration and expectation of drug effects in rats

Cocaine dependence is a significant public health problem, characterized by periods of abstinence. Chronic exposure to drugs of abuse induces important modifications in neuronal systems, including the dopaminergic system. The pattern of administration is an important factor that should be taken into consideration when studying these neuroadaptations. We compared the effects of intermittent (once daily) and binge (three times a day) cocaine treatments 1 (WD1) and 14 (WD14) days after the last cocaine injection on spontaneous locomotor activity and dopamine (DA) levels in the nucleus accumbens (Nac). The intermittent treatment led to a spontaneous increase in DA (WD1/WD14) and in locomotor activity (WD1) at the exact hour at which rats were habituated to receive a cocaine injection. These results underline that taking into consideration the hours of the day at which the experiments are performed is crucial. We also investigated these behavioral and neurochemical adaptations in response to an acute cocaine challenge on WD1 and WD14. We observed that only the binge treatment led to sensitization of the locomotor effects of cocaine, associated with a sensitization of DA release in the Nac, whereas the intermittent treatment did not. We demonstrate that two different patterns of administration induced distinct behavioral and neurochemical consequences. We unambiguously demonstrated that the intermittent treatment induced drug expectation, associated with higher basal DA levels in the Nac when measured at the time of the chronic cocaine injection, and that the binge treatment led to behavioral and neurochemical sensitization to cocaine.

Introduction

Cocaine is a widely abused drug that poses a significant health concern with major social and economic ramifications. Cocaine addiction is a process that generally starts with recreational use and deteriorates over time into a compulsive and chronic-relapsing drug-taking disorder. 1 Despite longstanding efforts to identify compounds capable of selectively inhibiting the addictive effects of cocaine, there are currently no approved medications for the treatment of cocaine abuse or toxicity. Nevertheless, addiction is a complex, multifactorial pathology, which may explain why the clinical trials performed to evaluate the therapeutic potential of pharmacotherapies have been unsatisfactory. Several parameters can and should be taken into consideration, including the pattern of administration, and the transient or long-lasting neurochemical and behavioral abnormalities following drug administration. To achieve a good understanding of the neuroadaptations induced by cocaine, the use of animal models provides enormous potential, through a coordinated analysis of brain function and behavior. As previously mentioned, the pattern of administration and the kinetics of the neuroadaptations are important factors that should be taken into consideration. Several studies reinforce this hypothesis. 2-5 In addition, our laboratory has already studied the effects of manipulating the frequency of available injections by using a discrete trials procedure, showing that two different patterns of morphine treatment induced distinct behavioral and neurochemical consequences with different time courses. 6 Cocaine exerts its pharmacological action through the monoamine systems.
In particular, DA projections to the striatum have been greatly implicated in the reinforcement and motor behaviors produced by cocaine. 7 Thus, the first aim of our study was to investigate and compare the spontaneous behavioral and neurochemical consequences of two 14-day chronic cocaine pretreatment regimens (intermittent versus binge), both of which are used extensively in laboratories to mimic patterns of cocaine abuse in humans. Extracellular DA levels were evaluated 1 (WD1) and 14 (WD14) days after the last cocaine injection. In parallel, the locomotor activity of the animals was measured during 24 h on WD1 and WD14. These experiments were performed either exactly at the hours at which rats were habituated to receive a cocaine injection or at another hour of the day. We showed that intermittent cocaine (IC) and binge cocaine (BC) treatments induced distinct behavioral and neurochemical consequences with different time courses. The second aim of this study was to investigate the behavioral and neurochemical responses to an acute cocaine challenge following chronic cocaine pretreatment. Here again, we showed that the patterns of chronic cocaine administration induced distinct consequences, with behavioral and neurochemical sensitization only observed following the binge treatment.

Materials and methods

Animals. Male Sprague-Dawley rats (Janvier, Le Genest-Saint-Isle, France) weighing 275-300 g at the beginning of the treatments were housed on a 12-h light/dark cycle in a temperature- (22 ± 1 °C) and humidity-controlled (50 ± 5%) environment and had access to food and water ad libitum. Animals were treated in accordance with the NIH Guidelines for the Care and Use of Laboratory Animals 8 (1996) and in agreement with the local ethical committee. The number of animals used and their suffering were minimized in all the experiments designed.

Cocaine treatments. Animals were treated with an IC or a BC administration profile, consisting of intraperitoneal (i.p.) injections of 20 mg kg⁻¹ cocaine hydrochloride (Francopia, Anthony, France), dissolved in saline (0.9% (w/v) NaCl), during 14 days, once daily at 1000 hours for the intermittent profile and three times a day at 1000, 1300 and 1600 hours for the binge profile. All animals received 1 ml kg⁻¹ of their body weight. Control groups were treated with saline under the same conditions (intermittent saline (IS) or binge saline (BS)). Immediately after injection, rats were returned to their home cages.

Behavioral study. Locomotor activity was evaluated on WD1 and WD14 after the end of the treatments (see Supplementary Figure 1), in an actimeter (Immetronic, Pessac, France) composed of eight cages (34 × 21 × 19 cm) under low illumination (<5 lux). One rat was placed in each box to record its movements. Displacements were measured by photocell beams located across the long axis and above the floor. Vertical and horizontal activity was recorded and expressed in scores (mean ± s.e.m.) as the total number of interruptions of the photocell beams. The 12/12 h light/dark cycle was respected. For the spontaneous locomotor activity on WD1, animals were placed for 24 h in the actimeter 2 h after the last injection of the chronic treatment; on WD14, the same protocol was used, and animals were placed for 24 h in the actimeter on day 27 plus 2 h. In other groups of animals, we evaluated the locomotor activity response to a cocaine (20 mg kg⁻¹, i.p.)
or saline challenge; animals were challenged outside the usual hours of the injection treatment (on WD1 and WD14) and immediately placed in the actimeter for 1 h.

Neurochemical study

Surgery: Rats were anesthetized by an i.p. injection of a mixture of ketamine/xylazine (80/10 mg kg⁻¹) and placed in a stereotaxic apparatus (Unimécanique, Eaubonne, France). A guide cannula (CMA 12, Phymep, Paris, France) was stereotaxically implanted in the nucleus accumbens core (Nacc). The coordinates, taken from the atlas of Paxinos and Watson (1998), 9 were +1.6 mm anterior to the interaural line, +1.4 mm lateral to the midline and −6.0 mm under the skull surface. Animals were used for experiments after a recovery period of 1 week. Judgments about cannula placements in the Nacc were made by an observer who was blind to the results obtained with individual rats. Only the rats implanted into the Nacc were analyzed in the study (the failure rate for the implantation was 5/100, Supplementary Figure 2). Microdialysis procedures were done on WD1 and WD14 after the end of the treatments (Supplementary Figure 1b). Rats were gently restrained, the stylus was removed from the guide cannula, and the probe (CMA 12 Elite, 2 mm, Phymep) was implanted and perfused at 2 μl min⁻¹. The perfusate consisted of artificial cerebrospinal fluid containing (in mM) NaCl 140, KCl 4, MgCl2 1, NaH2PO4 0.1, Na2HPO4 1.9 and CaCl2 1.2 (pH = 7.4). After 2 h of equilibration of the microdialysis membrane, samples were collected every 30 min in tubes containing 5 μl of 0.4 M HClO4 to prevent DA oxidation, and stored at −80 °C until quantification. For the measurement of the DA level at injection hours and the evaluation of the neurochemical response to a cocaine challenge, sample collection occurred on WD1 and WD14. For the basal DA level, two samples were collected outside the usual hours of the chronic injection treatment; then two other samples were collected at the exact hour at which rats were habituated to receive cocaine, without any injection. We evaluated DA release in response to a cocaine challenge a few hours later, outside the usual hours of the chronic injection treatment: two samples were collected following an i.p. challenge injection (cocaine or saline). To exclude any effect of drug-spatial cue association, drug treatment was performed in a different room from the microdialysis room.

DA analysis: DA content was determined as previously described 10 using an HPLC apparatus coupled to an electrochemical detector (Coulochem III, ESA, Chelmsford, MA, USA).

Statistical analysis. The neurochemical studies in drug-free conditions were analyzed by a two-tailed t-test. A one-way analysis of variance (one-way ANOVA) was used to analyze locomotor activity in response to a cocaine challenge. A two-way ANOVA was used for the analysis of the locomotor activity measured during 24 h in drug-free conditions (treatment × time), and for the analysis of DA levels in the Nacc following a cocaine challenge injection (treatment × microdialysate). The Bonferroni test was used for post-hoc comparisons.

Results

Spontaneous neurochemical and behavioral consequences of chronic cocaine treatment

Spontaneous locomotor activity. The measurement of locomotor activity during 24 h, starting 2 h after the last injection of treatment, showed differences in behavioral regulation (Figure 1).
A two-way ANOVA showed a significant treatment × time interaction for the intermittent profile on WD1 (F(1,20)). Unlike intermittent-treated animals, the locomotion of binge animals on WD1 or on WD14 was not changed during the 24 h of measurement in comparison with the saline group (Figures 1c and d). However, significant treatment × time interactions were observed, which were mainly due to a higher locomotor activity measured during the first hour in the cocaine-treated animals in comparison with the saline group (WD1). As shown in Figure 1d, we observed an increase in locomotor activity on day 27 after the beginning of the treatment in the BC group as compared with the saline group, between time points +4 and +10 h. However, this increase was not statistically significant.

Basal extracellular level of DA. At a time that did not match the hours of injection, the levels of extracellular DA were evaluated in the Nacc using microdialysis in awake and freely moving rats. As shown in Figure 2a, a two-tailed t-test revealed a decrease in basal DA levels, 1 day after the end of the treatment, for the IC and BC groups in comparison with their respective saline control groups (Figure 2a).

Injection-hour extracellular level of DA. As shown in Figure 3, a two-tailed t-test revealed a significant increase in extracellular DA levels in the Nacc core, 1 day after the end of the treatments, at the time corresponding to the injection hours as compared to the basal levels, for both treatment profiles, including IC (n = 20, t(18)).

Behavioral and neurochemical responses to a cocaine challenge (outside the usual hours of the injection treatment)

Locomotor activity. As shown in Figure 4, the locomotor activity responses observed for 1 h immediately following saline or cocaine administration on WD1 or WD14 were not modified in the IC group in comparison with the control group. A one-way ANOVA showed a significant effect of the cocaine challenge, with an increase in the locomotor activity in both cocaine- and saline-treated animals (WD1, Figure 4a). For the binge treatment, Figure 4c clearly shows a significant sensitization to the hyperlocomotor effect induced by cocaine on WD1 (F(3,41) = 40.89, P<0.0001; Bonferroni, BS challenge cocaine vs BC challenge cocaine, P<0.01). On WD14, the locomotor activity was not modified in comparison with the control group. A one-way ANOVA showed a significant effect of the cocaine challenge, with an increase in the locomotor activity in both cocaine- and saline-treated animals (F(3,40) = 20.21, P<0.0001). No differences in the locomotor activity responses after a cocaine challenge were observed between the cocaine- and saline-treated animals.

As shown in Figure 5a, a cocaine challenge on WD1 induced an increase in the extracellular DA levels in IC- and IS-treated rats as compared with animals that received a saline challenge (interaction: F(3,21)). In the BC profile on WD1, the two-way ANOVA showed a significant interaction (F(3,24) = 2.37, P<0.05) and a treatment effect (F(3,24)).

Discussion

In the present study, the short- and long-term behavioral and neurochemical consequences of two distinct cocaine pretreatment regimens ('intermittent' versus 'binge') were compared. The main finding of this study is that IC and BC treatments induce different behavioral and neurochemical adaptations, which may be long-lasting. The results indicated that chronic cocaine administration led to a significant lowering of the basal DA levels in the Nacc for both the intermittent and binge treatments.
Interestingly, this effect was long-lasting, as 14 days after the last injection a lower DA level was still observed in the IC group as compared with the IS group. However, it is interesting to note that the basal level for the BC treatment on WD14 seems to be the same as for the other cocaine groups. The decrease in DA levels in the BS group observed on WD14 is complex to explain, although it is highly reproducible. BS rats received saline injections three times daily for 14 days, which may induce stress, well known to induce an inhibition of DA release in the Nacc. 11,12 The decrease in DA levels observed in the cocaine-treated groups as compared with the saline groups is consistent with other studies showing a decrease in basal DA levels in the Nacc following a chronic 'binge' cocaine injection paradigm, 13 in rats injected with cocaine twice daily for 9 or 14 days, 14,15 in rats that received a single daily injection of cocaine for 10 16 or 18 days, 17 and also in two mouse strains that received cocaine injections three times a day for 14 days. 18 However, it is also important to note that some studies have shown no change, 19,20 whereas others have shown an increase in the Nacc DA levels in rats that received a single daily injection of cocaine for 10 days, on WD1, WD3 and WD7, 2 or on WD2 but not on WD12 and WD22. 21 In the majority of these studies, cocaine was administered at doses ranging from 10 to 30 mg kg⁻¹, and in most cases, the extracellular DA level was measured 24 h after the last injection of cocaine, but with few indications regarding the specific hour of the day at which the microdialysis experiments were performed. However, our results demonstrate for the first time that this parameter is crucial and may explain the differences between the results of the previous studies. Thus, when microdialysis experiments were performed at the exact hours at which rats were habituated to receive a cocaine injection, an increase in DA levels was observed. This effect was long-lasting in the intermittent group, as this regulation was still observed 14 days after the last injection of cocaine. Therefore, although a low extracellular DA level was observed in the Nacc following the BC and IC treatments, an increase in the phasic release was observed at the usual hours of cocaine injection. Several hypotheses may be found in the literature regarding these regulations. For instance, the lower DA levels are correlated with an increase in the density of the DAT binding sites and with a supersensitivity of D2 autoreceptors in the DA terminals. 22

Figure 2. Basal extracellular level of dopamine (DA) in the nucleus accumbens core (Nacc). Two dialysis samples were collected every 30 min on WD1 (a) and on WD14 (b) to determine the basal level of DA in the Nacc outside the usual hours of the injection treatment. Each column represents the extracellular dopamine levels in ng ml⁻¹ (mean ± s.e.m.). *P<0.05, two-tailed t-test (n = 6-10).

Figure 3. Injection-hour extracellular level of dopamine (DA) in the nucleus accumbens core (Nacc). Two dialysis samples were collected every 30 min on WD1 (a) and on WD14 (b) to determine the basal level of DA in the Nacc; then two dialysis samples were collected exactly at the time of the usual hours of injection for each treatment, without a new injection. Each column represents the extracellular dopamine levels in ng ml⁻¹ (mean ± s.e.m.). *P<0.05, **P<0.01, two-tailed t-test (n = 6-14).
On the other hand, several lines of evidence concern the regulation of the phasic release of DA. 10,23-26 Strikingly, the behavioral analysis indicated an increase in locomotor activity at the hours of cocaine injection in the IC group. This behavioral response may be in good agreement with the increase in DA release observed at the same hours, as it is well established that DA has a key role in motor function. Cocaine may induce a memory trace, which can be evaluated at the behavioral and neurochemical levels during the anticipation phase of drug administration. Interestingly, the 'neurochemical memory' was longer lasting (WD1 and WD14) than the 'behavioral memory' (only WD1). This is the first demonstration of a neurochemical memory without cues associated with cocaine administration. This memory, observed by choosing the right time point for behavioral and neurochemical testing, can be due to a disturbance of the circadian rhythm. Indeed, it has been demonstrated that chronic exposure to drugs of abuse affects the circadian rhythm of physiological functions and behaviors. 27-29 Moreover, psychostimulant-induced effects, such as behavioral sensitization and conditioned place preference, have been shown to follow circadian variations in their intensities. 30-32

The second aim of our study was to examine the effects of an acute cocaine challenge on locomotor activity and DA release. In animal models, a hallmark feature associated with chronic exposure to drugs of abuse, including cocaine, is locomotor sensitization. 22,33,34 In some cases, this sensitized behavioral response is correlated with enhanced drug-induced DA responses in the Nac. 20,33-35 In this study, we demonstrated that the expression of a specific cocaine-induced behavioral sensitization was related to the profile of administration. The binge treatment led to sensitization of the locomotor effects of cocaine, whereas the intermittent treatment did not. These results are in good agreement with the literature, showing that patterns of administration are of particular importance, as they could determine the duration and intensity of sensitization. 6,22,36 Interestingly, the locomotor sensitization observed in the binge group was associated with a sensitization of DA release in the Nacc after a cocaine challenge on WD1. These results are supported by the reported increase in DA release in the Nacc in response to a subsequent challenge drug injection in animals previously exposed to cocaine. 22,33-35

Regarding the neurochemical and behavioral consequences of the two patterns of cocaine administration, important differences were observed. Interestingly, it clearly appeared from the experiments performed to evaluate the spontaneous behavior and the basal extracellular levels of DA in the Nacc that the ability of the DA system to adapt to an event was maintained in the intermittent group, whereas it was impaired in the binge group. Indeed, exactly at the hours at which rats were habituated to receive a cocaine injection, a spontaneous increase in DA on WD1 until WD14, with an increase in locomotor activity on WD1, was observed in the former group, whereas only a spontaneous DA release on WD1 was measured in the latter group. Moreover, following an acute cocaine challenge, behavioral and neurochemical sensitizations were only observed in rats previously exposed to the binge treatment. It is well known that chronic cocaine exposure induces enduring neuroadaptations that collectively result in the loss of control over drug-seeking behavior.
In particular, locomotor sensitization, the augmented response to cocaine following repeated exposure, has been shown to have predictive validity for other indicators of addiction. Thus, in our experimental conditions, the binge pattern may be associated with the development of important behavioral and neurochemical modifications. In contrast, the intermittent pattern allowed a rapid locomotor and DA increase when animals anticipated the drug administration, and did not lead to locomotor sensitization, suggesting that the addicted state was not completely established and that the system was still able to adapt to a specific event. In conclusion, the results obtained clearly illustrate that the pattern of administration is a crucial parameter that should be taken into consideration when exploring behavioral and neurochemical alterations in animals. Another very interesting result was that scheduling cocaine injections on a daily basis led to expectation and anticipation in our experimental model. This circadian process could explain learned drug-taking patterns in which individuals seek drugs of abuse at specific hours of the day, generally in a specific environment.
Metrics to Characterize Airport Operational Performance Using Surface Surveillance Data

Detailed surface surveillance datasets from sources such as the Airport Surface Detection Equipment, Model-X (ASDE-X) have the potential to be used for the analysis of airport operations, in addition to their primary purpose of enhancing safety. This paper describes how surface surveillance data can be used to measure airport performance characteristics in three different ways: (1) characterization of surface flows, including identification of congestion hotspots, queue dynamics and departure throughput; (2) development of metrics to evaluate daily operational performance; and (3) development of metrics to gauge long-term performance across different runway configurations and operating conditions. The proposed metrics have been developed with active feedback from operations personnel at Boston Logan International Airport, and are therefore evaluated and discussed using this airport as an example. These metrics can provide useful feedback on operational performance to airport operators, and therefore have the potential to improve the efficiency of surface operations at airports.

I. INTRODUCTION

I.A Motivation

Airports form the critical nodes of the air transportation network, and their performance is a key driver of the capacity of the system as a whole [1], [2], [3]. With several major airports operating close to their capacity for large parts of the day [4], the smooth and efficient operation of airports has become essential for the efficient functioning of the air transportation system. Operational efficiency of airports is influenced to a great extent by the characteristics of the surface movement of aircraft taxiing out for departure, or taxiing to their gates after landing.
Largely driven by the availability of data, studies of airport operations have traditionally focused on airline operations and on the aggregate estimation of airport capacity envelopes [5], [6], [7], [8]. Most of this research is based on data from a combination of the Aviation System Performance Metrics (ASPM) [9] and the Airline Service Quality Performance (ASQP) databases. These databases provide the times at which flights push back from their gates, their takeoff and landing times, and the gate-in times, as reported by the airlines. ASPM also provides airport-level aggregate data, including records of runway configuration use and counts of the total number of arrivals and departures in 15 min increments. Such data can be used to develop queuing models of airport operations [10], [11], [12] or to empirically estimate airport capacity envelopes [7], [8]. However, the level of detail in these datasets has been insufficient to investigate other factors that affect surface operations, such as interactions between taxiing aircraft, runway occupancy times, queue sizes on the airport surface, etc. The recent deployment of Airport Surface Detection Equipment, Model-X (ASDE-X) at major airports has made it possible to continuously track each aircraft on the surface, and has thereby enabled the analysis of surface operations in greater detail than ever before. This paper proposes, for the first time, techniques by which ASDE-X data can be leveraged to characterize airport surface operations, and to develop metrics for formally measuring day-to-day as well as long-term airport operational performance. These metrics have been developed with the active support of the air traffic controllers and the operations manager at Boston Logan International Airport, and therefore address those areas of airport performance that are most relevant to operational personnel.

The purpose of this work is twofold. Firstly, it emphasizes the hitherto unexplored value of using surface surveillance data for post-hoc performance analysis, in addition to its primary intent as a real-time safety tool. If the vast repository of ASDE-X data is fully exploited, important insights can be obtained regarding the characteristics of an airport. Some indications of these characteristics already exist in the form of anecdotal rules of thumb. It is now possible to corroborate or refute these ideas based on empirical evidence.

The second purpose of this work is to propose a set of standardized and generic performance measures for airports. Since ASDE-X systems are now installed at several major airports in the United States, these metrics can be used to make comparisons across airports. The algorithms described in this paper are fully automated: once the raw data is obtained, a shell script executes all tasks including data extraction, parsing, analysis, and result storage, without supervision. The proposed analysis methods and operational metrics are illustrated for the example of Boston Logan International Airport (BOS), based on data from August 2010 to May 2012. Daily performance plots of the type presented in this paper have been compiled by the authors, and shared with the Operations Manager at the Boston Airport Traffic Control Tower (ATCT) on a regular basis since November 2010.
I.B Overview of ASDE-X data

ASDE-X is primarily a safety tool designed to improve situational awareness and to mitigate the risk of runway collisions [13]. It incorporates real-time tracking of aircraft on the surface to detect potential conflicts. There is potential, however, to use the data generated by it for the analysis of surface operations and airport performance, and for the prediction of quantities such as taxi times. ASDE-X data is generated by sensor fusion from three primary sources: (i) surface movement radar, (ii) multilateration using onboard transponders, and (iii) GPS-enabled Automatic Dependent Surveillance-Broadcast (ADS-B). Reported parameters include each aircraft's position, velocity, altitude and heading. The update rate is about 1 Hz for each individual track. This study uses ASDE-X data from Boston Logan International Airport (BOS), as collected by the MIT Lincoln Laboratory's Runway Status Lights (RWSL) system. This system was first installed at Boston in late 2009, and a regular data feed has been available to MIT starting from mid-2010.

Departing flights are tracked starting from the time when the onboard transponder is turned on: while the timing of this event varies from airport to airport, analysis shows that at BOS, transponder capture typically occurs while the flight is still in the ramp area, on average about 5 min after the aircraft is cleared for push-back [14]. The push-back clearance at Boston Logan is given by the air traffic controller for all aircraft apart from those in Terminal A. Aircraft start their engines after push-back is completed, and typically start their transponders when calling for taxi clearance. At Terminal A, the ramp tower controls the aircraft up to the spot, which is when control is passed to the ATC Tower. However, transponders are still started as soon as the aircraft starts to taxi within the ramp. The ASDE-X coverage extends approximately 18 nmi out from the airport, which is where the flight track for departures ends. All the results presented in this work are based on ASDE-X data, both because of its superior detail and also to ensure a valid comparison between the different performance metrics.

I.C Data analysis methods

In order to parse ASDE-X data into measurable quantities, several analysis algorithms were developed. The high level of detail in these data results in large file sizes, with each day's data from Boston Logan occupying approximately 2 gigabytes of space. This requires the development of highly efficient methods for analysis. Moreover, raw ASDE-X tracks contain a substantial amount of noise, as well as exogenous hits such as ground vehicles and inactive aircraft being towed on the surface. Additional complications are added by the existence of multiple flights with the same call-sign and aircraft tail number during the course of a day's operations. Therefore, the data needs to be preprocessed before it can be used for analysis.
Mitigation of the effects of noise is done by a multimodal unscented Kalman filter that produces smoother estimates of aircraft position, velocity and heading [15]. The filtering algorithm uses models of aircraft dynamics in order to correct for intermittent errors in the flight tracks. Such errors can leak into the raw data from the fusion algorithm, which handles three separate data sources, or from radar detection issues with aircraft at very low speeds. Additional error handling is carried out by a second algorithm, which includes (1) separation of continuing flights with the same call-sign, (2) detection and tagging of off-nominal operations such as cancelled takeoffs, go-arounds and aircraft absorbing Traffic Flow Management (TFM) related delay in active movement areas, and (3) removal of irrelevant tracks such as ground vehicles and helicopters using dedicated and separate routing. The combined filtering technique significantly improves the reliability of the detected flight tracks, which are then transferred to various analysis algorithms. Each flight track is tagged with a departure/arrival time and runway. By tracking each aircraft from push-back to wheels-off, various airport states such as the active runway configuration, location and size of the departure queue, and departure/arrival counts are measured. In addition, airport-level performance metrics such as average taxi-out times, runway usage and departure spacing statistics are also tracked. The definitions of these metrics and the algorithms proposed for measuring them are described in subsequent sections of this paper.

II. CHARACTERIZATION OF SURFACE OPERATIONS

II.A Departure queue characteristics

Visualization of the filtered ASDE-X data yields insights into the dynamics of airport operations. It can help identify locations on the surface where aircraft typically queue up for departure, for different runway configurations and operational procedures. For example, Figure 1 shows the layout of BOS with the different runways. Figure 2 shows the departure queue (aircraft icons) formed on September 09, 2010 in the 22L, 27 | 22R configuration, i.e., when Runways 22L and 27 (on the east side) were being used for arrivals and 22R (on the west side) was being used for departures. In Figure 2, the departure queue can be seen forming at the threshold of Runway 22R. In this figure, some aircraft are seen in a separate queue on the west side of the runway, waiting for crossing clearance. This is due to construction on the taxiway to the west of the runway during 2010. Such subtleties could be observed by using the ASDE-X data to generate animations in Google Earth®. Observation of operations over several days of data was used to identify typical queue formation areas for each configuration at Boston. These areas were then designated in the analysis codes for automatic tracking of departure queues and calculation of statistics such as the time spent by individual aircraft in the queue and the variation of queue length over the course of each day. Note that each departing aircraft was tagged as being in queue if it was within the designated box for its departure runway and below a certain threshold velocity (a minimal sketch of this tagging rule is given below). Some runways required the definition of more than one queuing area.
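To make the tagging rule concrete, the following is a minimal Python sketch under assumptions that are not drawn from the paper: queue areas are represented as axis-aligned boxes in an airport-centred coordinate frame, and the threshold speed of 4 m/s is an illustrative placeholder rather than the value used at BOS. The names (QueueBox, in_departure_queue, queue_length) are hypothetical.

from dataclasses import dataclass

@dataclass
class QueueBox:
    # Designated queuing area for one departure runway, in airport-frame metres.
    x_min: float
    x_max: float
    y_min: float
    y_max: float

SPEED_THRESHOLD = 4.0  # m/s; illustrative placeholder, not the paper's value

def in_departure_queue(x: float, y: float, speed: float, box: QueueBox) -> bool:
    # An aircraft is tagged as queued when it is inside the designated box
    # for its departure runway and moving below the threshold speed.
    inside = box.x_min <= x <= box.x_max and box.y_min <= y <= box.y_max
    return inside and speed < SPEED_THRESHOLD

def queue_length(tracks, box: QueueBox) -> int:
    # `tracks` is an iterable of (x, y, speed) tuples for the active
    # departures at a single time instant.
    return sum(in_departure_queue(x, y, s, box) for (x, y, s) in tracks)

Running this rule over every second of filtered track data yields the per-second queue-length statistics discussed next.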
It was found that this separation of multiple queues was instructive. For example, aircraft that are absorbing delay on the surface due to TFM initiatives usually occupy the same queuing area, while flights without delay occupy another. Therefore, high occupancy of the former queue is typically indicative of bad weather in the NAS. Figure 3 shows the mean time spent in the departure queue as a function of the queue length, based on data from all runway configurations. The queue length is defined as the number of aircraft in the departure queue as seen by a new aircraft just joining it. It can be seen that on average, an additional aircraft in queue entails a penalty of 83 s for each of the aircraft behind it. This value seems reasonable when one considers standard departure separations, as discussed in Section III. The standard deviation of the time in queue increases from approximately 20 s for the lower queue sizes to approximately 60 s for the larger queue sizes. It is likely that this value is somewhat inflated by the possibility of swaps within the queue, which could disproportionately increase or decrease the time spent in queue for some of the aircraft.

II.B Departure throughput characteristics

Departure queue characterization offers an insight into surface operations from an aircraft's perspective. In order to analyze airport-level operational performance, the variation of departure throughput (defined as the number of takeoffs from the airport in a 15 min interval) with the number of active departing aircraft on the surface is considered. An aircraft is defined to be active from the time of first transponder capture (first detection with ASDE-X) until its wheels-off time. Previous studies have shown using ASPM data that the departure throughput increases with the addition of aircraft to the surface, until the maximum sustainable throughput is reached and the airport surface saturates [11], [4]. This observation is further corroborated by the results produced using ASDE-X data. Figure 4 shows the departure throughput curve for a specific configuration at BOS. Only aircraft with jet engines are counted in this analysis, because propeller-driven aircraft are fanned out via separate departure fixes at BOS, and do not affect the jet departure process [14]. The throughput curves for other configurations are similar in nature to the one in Figure 4, differing only in the point of saturation and the corresponding maximum throughput; these differences can be attributed to configuration-specific procedures, such as closely spaced departures on intersecting runways.
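A minimal sketch of how such a saturation curve can be computed from interval data follows; the function name and the input format, in which each 15 min interval has been summarized beforehand as a pair of counts derived from the ASDE-X tracks, are assumptions for illustration.

from collections import Counter

def throughput_curve(intervals):
    # `intervals` is a list of (n_active_departures, n_takeoffs) pairs,
    # one per 15 min interval. Returns the mean departure throughput
    # observed at each surface traffic level, as plotted in Figure 4.
    totals, counts = Counter(), Counter()
    for n_active, n_takeoffs in intervals:
        totals[n_active] += n_takeoffs
        counts[n_active] += 1
    return {n: totals[n] / counts[n] for n in sorted(totals)}

# Example with made-up interval summaries: throughput flattens once the
# surface saturates.
curve = throughput_curve([(2, 3), (5, 7), (9, 11), (14, 12), (18, 12)])
print(curve)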
III. METRICS TO CHARACTERIZE DAILY OPERATIONAL PERFORMANCE

Building upon the results described in the previous section, metrics to measure the daily operational performance of an airport are presented here. The objective is not to evaluate individual air traffic controller performance, but to look for systemic inefficiencies and to identify opportunities for improvement. While each airport has particular rules, regulations and procedures that must be considered in order to be consistent across different configurations and time periods, the basic concepts used to define these metrics can be generalized to any airport. For example, there are subtle variations in the strategies used by different airports to accommodate both arrivals and departures on the same runway or on intersecting runways. At BOS, arrivals have to cross the departure runway in the 22L, 27 | 22R configuration, while in the 27 | 33L configuration, it is the departures that have to cross the arrival runway. Arrivals are controlled by the Boston TRACON or the ZBW Air Route Traffic Control Center, and not the Air Traffic Control Tower (ATCT). Inter-arrival separations and arrival sequences are not under the control of the tower, and are not indicative of ATCT performance. This paper proposes three metrics of day-to-day operational performance that account for practical complexities while keeping the computational effort at a reasonable level. These metrics are important for identifying the effects of off-nominal operations, which can be lost by looking at only aggregate-level data.

III.A Average taxi-out times

The most natural performance measure from the point of view of passengers is the average taxi-out time for departures. The taxi-out time is defined in this paper to be the difference between the time of first transponder capture and the wheels-off time. At Boston Logan, aircraft transponders are usually turned on just before taxi clearance is given by ATC. Therefore, while the transponder capture time might not correspond exactly to the push-back time, it is a good measure of the time taken for actual taxi.

The taxi-out time is an important quantity that affects not only flight delays but also taxi-out fuel consumption [16]. In general, taxi-out times are highest during the peak congestion periods. At BOS, these are the morning departure push between 0600 h and 0800 h local time and the evening push between 1900 h and 2000 h local time. Figure 5 shows the variation of average taxi-out times on a sample day. The averages are calculated over 15 min intervals for the entire day. Each bar in the upper plot represents the average taxi-out time experienced by the aircraft pushing back in that 15 min interval. The number of pushbacks in the corresponding interval is shown in the lower plot. The peaks in both pushbacks and taxi-out times around 0700 h and 1900 h can be seen clearly.

Note that in the calculation of this metric, long delays absorbed by flights on the surface have been removed (a minimal sketch of the calculation is given below). These aircraft usually have specified departure times known as Expected Departure Clearance Times (EDCTs), decided by constraints elsewhere in the National Airspace System (NAS). While it is desirable to have aircraft absorb these delays at the gate, it is not always possible because of conflicts with arrivals that are scheduled to park at the same gate. In such cases, aircraft at Boston Logan absorb the delay in a separate designated area (called the Juliet pad), or in a separate runway queue as explained in Section II.A.
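The following is a minimal sketch of this binning, under the assumption (not from the paper) that flights are supplied as dictionaries of epoch-second timestamps with a flag marking EDCT-constrained flights; the field names are hypothetical.

from collections import defaultdict

def average_taxi_out(flights, bin_s=900):
    # Taxi-out time = wheels-off minus first transponder capture, averaged
    # over the flights that push back in each 15 min (900 s) bin; flights
    # absorbing EDCT-related delay on the surface are excluded, as in Figure 5.
    bins = defaultdict(list)
    for f in flights:
        if f.get("has_edct"):
            continue  # exclude delay-absorbing flights
        bins[f["first_capture"] // bin_s].append(f["wheels_off"] - f["first_capture"])
    return {b: sum(v) / len(v) for b, v in bins.items()}

Including the flagged flights instead of skipping them reproduces the contrast between Figure 5 and Figure 6 discussed next.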
Figure 6 shows a plot similar to the one in Figure 5, but with flights with EDCTs included in the data. A comparison of the two plots illustrates the effect of excluding these flights. Section IV.B will discuss how the average taxi-out time metric can be extended to assess the long-term performance of an airport.

III.B Runway utilization

The most capacity-constrained element in departure operations is the runway, as shown in [2]. Therefore, it is important to ensure that an airport's runway system is used as efficiently as possible. To look at the current usage characteristics of runways at BOS, a metric called Runway Utilization was defined. The utilization is expressed as a percentage, calculated for every 15 min interval. It is given by the fraction of time in the 15 min interval for which a particular runway is being used for active operations. The types of active operations that are accounted for in calculating the runway utilization are as follows (a minimal computational sketch follows the list):

1. Departure: An aircraft on the runway, between the start of its takeoff roll and wheels-off.
2. Hold: A departing aircraft holding stationary on the runway, waiting for takeoff clearance.
3. Approach: Counted from the time an aircraft is on short final (within 2.5 nmi of the runway threshold) to the time of touchdown.
4. Arrival: Counted from the moment of touchdown to the time when the aircraft leaves the runway.
5. Crossings/Taxi: Counted when aircraft are either crossing an active runway, or taxiing on an inactive runway.

Analysis algorithms detected each of these operating modes by using the filtered states from ASDE-X tracks for each aircraft. The 'approach' phase was included in the runway utilization because no other operations can be carried out on a runway when an aircraft is on short final. Even though, technically, the aircraft is not on the runway, ignoring this operational constraint would give an erroneously low utilization figure for an arrival runway. The approach areas corresponding to runways 15R/33L and 9/27 at Boston are shown in Figure 7. Since the ASDE-X system is tuned for surface surveillance, altitude information far away from the airport is highly unreliable. Therefore, the approach areas shown in the figure are two-dimensional and do not include an altitude restriction. This is not a severe limitation, because high-altitude overflights are not captured by ASDE-X. False alarms (aircraft that are within the designated approach areas but are not landing at the airport) are thus largely non-existent.
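The sketch below computes the metric from pre-extracted events; the representation of operations as (start, end, mode) tuples per runway is an assumption for illustration, and simultaneous events are simply clamped rather than merged.

ACTIVE_MODES = {"departure", "hold", "approach", "arrival", "crossing_taxi"}

def runway_utilization(events, interval_start, interval_len=900.0):
    # Fraction of the 15 min interval for which the runway is occupied by
    # one of the five active operation modes listed above, as a percentage.
    interval_end = interval_start + interval_len
    busy = 0.0
    for start, end, mode in events:
        if mode not in ACTIVE_MODES:
            continue
        overlap = min(end, interval_end) - max(start, interval_start)
        busy += max(0.0, overlap)
    # Clamp in case overlapping events (e.g. a hold during a crossing)
    # would otherwise push the total above the interval length.
    return 100.0 * min(busy, interval_len) / interval_len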
Figure 8 and Figure 9 show the utilization plots for a sample good-weather day, for three different runways. The topmost plot in each figure shows the breakup of utilization for the runway over the course of the day. The plot in the middle shows operational counts in each 15 min interval, for both ends of the runway. Finally, the lowermost plot shows the variation of queue length over the day. It should be noted that the queue length is calculated for every second, while the top two plots are aggregate figures over the 15 min interval. Details such as configuration changes can be seen immediately from the utilization plots. For example, it can be seen from Figure 9 that departure operations shifted from Runway 22R to Runway 33L at 1000 h. The departure runway utilization (Figure 9 [Top]) was nearly 100% from 1800 h to 2000 h (apart from two notable instances, which are discussed below), the period of peak evening demand. The split in utilization between the approach mode and the arrival mode in Figure 8 shows that aircraft spend roughly equal amounts of time on final approach and in the actual touchdown and roll-out. The formation of a runway crossing queue can also be seen during the peak evening period. This is a common feature at Boston Logan when a combination of Runways 33L and 27 is in use.

The value of acknowledging the coupling between operation counts, utilization and queue dynamics can be seen in Figure 9 [Top]. The presence of two heavy arrivals on Runway 33L can be noted in the evening period between 1800 and 2000 h local. This is a common request at Boston Logan because of the greater length available on Runway 33L. The disruption caused by these events is clearly seen in the utilization plots, where the figure drops to approximately 60% in both cases. The arrival at 1830 h also causes a large drop in the number of departures in that time interval. It also has an effect on the average inter-departure spacing achieved, as discussed in the next section. By contrast, Figure 10 and Figure 11 show the runway utilization on October 27, 2012, when the 4L, 4R | 9, 4R configuration was active. This was one day prior to the arrival of Hurricane Sandy at Boston, and the evening was affected by bad weather. It can be seen that while operational counts and runway utilization appear normal in the morning (a total departure rate across Runways 9 and 4R of 10 per 15 minutes), both values are significantly lower in the evening. Ideally, it is desirable for the utilization value to be 100% for all active runways in times of peak demand. The sample figures show that while this value is achieved for much of the peak period, it is difficult to sustain. Disruptions may be caused by off-nominal events such as runway closures due to foreign objects, arrivals requesting a departure runway for landing, or gaps in the arrival sequence. It should be noted that the utilization for a departure runway is always higher than that for an arrival runway. This is because departures can be packed close together, with the next aircraft in queue holding on the runway while the previous aircraft starts its climb-out. On the other hand, tightly packed arrivals would increase the risk of frequent go-arounds caused by aircraft not being clear of the runway quickly enough to allow the next arrival to land. Therefore, the arrival stream has a buffer in addition to the minimum spacing that is dictated by FAA regulations.
III.C Departure spacing efficiency

As noted earlier, departures can be spaced with a smaller safety buffer as compared to arrivals. However, the target departure spacing is still governed by a set of standards, customized to each airport depending on the runway and airspace layout. It is generally recommended to maintain a minimum spacing of 120 s for a departure following a heavy aircraft [17]. At BOS, the target separations, based on a combination of regulatory requirements such as these, rules of thumb followed by the controllers, and average performance as measured using ASDE-X data, are as shown in Table 1. To compare the actual inter-departure separation with these target values, a metric called the Departure Spacing Efficiency was defined. As with runway utilization, this metric is calculated for each 15 min interval. However, the Departure Spacing Efficiency is not runway-specific, but addresses departure operations at the airport as a whole. To calculate it, the difference between the wheels-up times of each pair of consecutive departures is compared to the target level of separation for that pair, based on the aircraft classes of the leading and trailing aircraft. Each additional second more than the target level is counted as a second lost. Time is counted as 'lost' only if there are other aircraft in queue, waiting for departure. This ensures that the efficiency figure does not fall simply because of low demand. Note that Miles-in-Trail (MIT) restrictions may also result in additional inter-departure separations, which will be captured by this metric. Inopportune separations between arriving aircraft on a crossing runway can also cause a drop in departure efficiency. On the other hand, controllers can also sometimes manage to depart aircraft with a separation less than the target level, depending on factors such as the availability of multiple runways for departure. In this case, each second less than the target separation level is counted as a second gained. Then, the Departure Spacing Efficiency, denoted η, in each 15 min interval is given by

η = 1 − (t_lost − t_gained) / T

where t_lost and t_gained are the total seconds lost and gained in the interval and T is the duration of the interval (900 s), so that η exceeds 1.0 when more time is gained than lost. It should be noted that the use of multiple runways does not always allow departures to take place with less than the target level of spacing. For example, at BOS, departure operations on Runways 22R and 22L have to take place as on a single runway, because both sets of departures have to use the same departure fixes. However, when Runways 4R and 9 are used for departures, aircraft can be spaced more closely, thus boosting the airport's efficiency [14].

Accounting for the effect of arrivals

There is, however, a caveat associated with the calculation of the total time lost. As described previously, arrival spacing is not under the discretion of Boston Tower. In configurations where arrivals take place on a runway that is the same as or that intersects the departure runway, this can cause a dip in the efficiency. This effect is accounted for by discounting the idle time of a departure runway when an arrival is on short final (within 2.5 nmi of the threshold) to an intersecting runway or to the same runway.
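A minimal sketch of this computation follows, using the formula as reconstructed above; the target-separation values and the per-departure record format are illustrative assumptions, not Table 1 itself.

TARGET = {("H", "L"): 120, ("757", "L"): 120, ("L", "L"): 60}  # seconds; placeholders

def spacing_efficiency(pairs, interval_len=900.0):
    # `pairs` holds one record per consecutive-departure pair in the interval:
    # (gap_s, lead_class, trail_class, queue_nonempty, arrival_on_final).
    # Idle time is charged only when aircraft are waiting in queue and no
    # arrival is on short final to the same or a crossing runway.
    lost = gained = 0.0
    for gap_s, lead, trail, queue_nonempty, arrival_on_final in pairs:
        target = TARGET.get((lead, trail), 60)
        if gap_s > target and queue_nonempty and not arrival_on_final:
            lost += gap_s - target
        elif gap_s < target:
            gained += target - gap_s
    return 1.0 - (lost - gained) / interval_len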
Figure 13 and Figure 14 show the variation of the Departure Spacing Efficiency with the time of day for December 09, 2010, with and without accounting for the arrival effect, respectively. In each figure, the plot on the top shows the efficiency in each 15 min interval, while the plot on the bottom shows the departure count in the corresponding intervals. The colored bars in the middle indicate the departure congestion level at the airport, calculated using a combination of the departure counts and queue lengths. Note that there are several time periods (for example, between 1630 and 2015 hours) when not including the impact of arrivals would lead to the erroneous conclusion that the efficiency was lower than it actually was. A comparison of Figure 13 and Figure 14 suggests that the efficiency during this time was above 85% when accounting for arrivals, whereas it was as low as 75% when their effect was ignored.

Note the large dip in efficiency just prior to the configuration change at 1000 hours. On the other hand, a few intervals with a net efficiency of more than 1.0 can also be seen. These intervals correspond to spikes in the departure count, since consistent separation values less than the target level result in a large number of departures. The most notable high-efficiency interval is the one from 1945 to 2000 h, which is in the middle of a period with high demand. The bottom plot shows that the controllers managed to serve ten departures in this interval (nine on Runway 33L and one on Runway 27), while a comparison with Figure 8 shows that seven arrivals were also achieved. In this way, a combination of different performance metrics offers insights into the intricacies of surface operations that result in the net operational counts, which are the traditional measure of airport performance.

IV. METRICS TO CHARACTERIZE LONG-TERM PERFORMANCE

IV.A Long-term average departure spacing efficiency

The metrics defined above, when consistently tracked over several months, can be used to measure the average operational performance of the airport. For example, Figure 15 shows the average departure spacing efficiency at BOS, sorted by configuration and demand level. It is evident from the figure that for most configurations, the efficiency drops as demand increases. This conclusion is intuitive, since high demand usually means more operational complexity, more runway crossings, etc. It is also noted that the airport is most efficient when departures are taking place from Runways 4R and 9. As mentioned before, this configuration allows closely spaced departures on the two crossing runways, which enhances the efficiency. An interesting observation is that the 33L | 27 configuration, with departures on Runway 27, is more efficient than the 27 | 33L configuration, even though the two are operationally very similar. One possible explanation for the increased departure spacing seen on Runway 33L is that it is longer than Runway 27. As a result, a departure from 33L takes longer to be completely clear of the runway, which is typically the cue for the air traffic controller to release the next aircraft. Either of these configurations can be used when winds are from the northwest. The operational implication of this result is that it is more desirable to use the 33L | 27 configuration when the departure demand is high, particularly in the mornings when the arrival demand is relatively low.
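A sketch of the long-term roll-up behind a figure like Figure 15 is given below; the use of pandas, the column names, and the numbers are all assumptions for illustration.

import pandas as pd

# One row per 15 min interval, accumulated over several months of operations.
df = pd.DataFrame({
    "configuration": ["22L,27|22R", "27|33L", "27|33L", "33L|27", "33L|27"],
    "congestion":    ["low", "high", "low", "high", "low"],
    "efficiency":    [0.92, 0.78, 0.88, 0.84, 0.90],
})

# Average spacing efficiency per (configuration, congestion level): the
# grouping used to compare operationally similar configurations.
long_term = df.groupby(["configuration", "congestion"])["efficiency"].mean()
print(long_term)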
IV.B Taxi-out time comparisons with historical data

Beyond providing feedback on the daily variation of 15 min averaged taxi-out times, it is also possible to compare these times with the historical average. The historical average taxi-out time over the previous three months of operations is parameterized by the current configuration of the airport and the 'congestion level' as described in Section III.C. Figure 16 shows a sample plot from December 09, 2010, the same day as in Figure 5. It is seen that average taxi-out times followed the historical variation through most of the day, but that there were some intervals of high taxi-out times during the evening peak period. The 27 | 33L configuration that was active at this time (Figure 8) is one of the lower-efficiency configurations, as was seen in Figure 15. In addition, as seen in Figure 13, the spike in taxi-out times in the interval from 1600-1615 h corresponds to a 'moderate traffic' interval in the middle of generally low-traffic intervals. No historical value is shown for intervals in which there were too few operations to define a configuration, or in which the airport was operating in a non-standard configuration.

V. CONCLUSIONS

This paper presented several novel ways in which surface surveillance data can be used for the detailed analysis of airport surface operations. In addition to the qualitative assessment of surface operations through visualization, the paper showed how to directly estimate quantities such as departure queue statistics and airport departure throughput characteristics. It also proposed three metrics to quantify day-to-day airport performance, namely, average taxi-out times, runway utilization and departure spacing efficiency.

Metrics of long-term airport performance based on taxi-out times and departure spacing efficiency were also presented. The proposed metrics were discussed in detail for the case of Boston Logan International Airport, and it was shown that they could be used to gain insights into the performance of the airport. These insights can help to identify opportunities for the improvement of operational efficiency. For example, the results presented in Section IV.A showed that one of two symmetric configurations resulted in higher inter-departure spacing efficiency than the other. This fact could be leveraged to utilize the more efficient configuration when the choice was available.

Other examples of salient feedback include an analysis of the different methods used by controllers for active runway crossings. If done correctly, arriving aircraft can cross the departure runway while the next departing aircraft is lining up for takeoff, significantly improving runway utilization figures. Different methods are effective to different degrees depending on the airport and runway configuration, and only a detailed analysis of empirical data can reveal the extent of their success.
The assessment techniques proposed in this paper can be easily extended to the analysis of other airports, as has been demonstrated for the case of New York's LaGuardia (LGA) and Philadelphia (PHL) airports in recent work [18]. In the future, monitoring algorithms can be developed to automatically flag off-nominal events in real time, and display notifications to air traffic controllers. This research effort would need to include human factors studies as well as data mining algorithms for event identification. The fuel burn and emissions impact of any congestion control algorithms implemented at the airport can also be usefully analysed using the methods proposed in this paper, and this aspect is addressed in [14], [16].

Figure 1: Layout of Boston Logan International Airport.
Figure 2: Visualization of queuing behavior at BOS. Aircraft icons represent departing aircraft queuing for departure from Runway 22R. The current queue size is 12 aircraft, with 7 aircraft on the east side and 5 aircraft on the west side of the runway.
Figure 3: Time spent in departure queue as a function of queue length when the aircraft enters it.
Figure 4: Departure throughput as a function of the number of active departing aircraft on the surface.
Figure 5: Average taxi-out times by time of day at BOS on Dec. 09 2010, with excessive hold times removed.
Figure 6: Average taxi-out times by time of day at BOS on Dec. 09 2010, with all hold times included.
Figure 7: Approach areas for two runways at BOS. Aircraft within the white cones are assumed to be utilizing the runway.
Figure 8: Utilization of Runway 9/27 on December 09, 2010. Runway 27 was used for arrivals for the entire day. An occasional departure can be seen in the second plot, accompanied by a dip in the number of arrivals and the utilization for the time period. In the bottom plot, queue formation can be seen, composed of aircraft waiting to cross the runway for departure on 33L.
Figure 9: Utilization of [Top] Runway 15R/33L and [Bottom] Runway 4L/22R on December 09, 2010. Runway 33L was inactive until 1000 h, after which it was used for departures. Occasional arrivals seen in the top figure are Heavy aircraft that request this runway due to its greater length. Runway 22R (bottom figure) was used for departures during the peak morning period. Longer departure queue lengths are seen here than for Runway 33L, due to two closely spaced departure banks at 0600 h and 0745 h.
Figure 10: Utilization of [Top] Runway 9/27 and [Bottom] 4R/22L on October 27, 2012. On this day, the runway configuration is in the opposite direction to that of December 09, 2010. Runway 9 was used for departures throughout the day. Mixed departure and arrival operations are seen on Runway 4R, which is requested by Heavy aircraft because of its greater length.
Figure 11: Utilization of Runway 4L/22R on October 27, 2012. Mixed departure and arrival operations are seen through the day, but only props are allowed to use Runway 4L for departures.
Table 1: Target departure separations. The columns correspond to the weight class of the trailing aircraft, while the rows correspond to the weight class of the leading aircraft (P = Props, S = Small jets, L = Large jets, 757 = Boeing 757, H = Heavy jets). All figures are in seconds.

Figure 12 demonstrates the calculation procedure for counting the time lost in a 15 min interval. The local time is shown on the x-axis. Each spike denotes the wheels-off time for a departure, with the height of the spike corresponding to the weight class of the aircraft. The spike then tapers off, reaching the 'clear to release' line when the target separation interval elapses. The gap from this point to the next departure spike counts towards the total number of seconds lost in the current 15 min interval.

Figure 13: Departure spacing efficiency on December 09, 2010, accounting for the effect of arrivals on the same/crossing runway.
Figure 14: Departure spacing efficiency on December 09, 2010, not accounting for the effect of arrivals on the same/crossing runway.
Figure 15: Average departure spacing efficiency for frequently used configurations at BOS.
Figure 16: Variation of average taxi-out times by time of day on December 09, 2010.
Internal Universes in Models of Homotopy Type Theory

We begin by recalling the essentially global character of universes in various models of homotopy type theory, which prevents a straightforward axiomatization of their properties using the internal language of the presheaf toposes from which these models are constructed. We get around this problem by extending the internal language with a modal operator for expressing properties of global elements. In this setting we show how to construct a universe that classifies the Cohen-Coquand-Huber-Mörtberg (CCHM) notion of fibration from their cubical sets model, starting from the assumption that the interval is tiny, a property that the interval in cubical sets does indeed have. This leads to an elementary axiomatization of that and related models of homotopy type theory within what we call crisp type theory.

1 Introduction

Voevodsky's univalence axiom in Homotopy Type Theory (HoTT) [39] is motivated by the fact that constructions on structured types should be invariant under isomorphism. From a programming point of view, such constructions can be seen as type-generic programs. For example, if G and H are isomorphic groups, then for any construction C on groups, an instance C(G) can be transported to C(H) by lifting this isomorphism using a type-generic program corresponding to C. As things stand, there is no single definition of the semantics of such generic programs; instead there are several variations on the theme of giving a computational interpretation to the new primitives of HoTT (univalence and higher inductive types) via different constructive models [9,13,6,5], the pros and cons of which are still being explored. As we show in this paper, that exploration benefits from being carried out in a type-theoretic language. This is different from developing the consequences of HoTT itself using a type-theoretic language, such as intensional Martin-Löf type theory with axioms for univalence and higher inductive types, as used in [39]. There all types have higher-dimensional structure, or "are fibrant" as one says, via the structure of the iterated identity types associated with them. Contrastingly, when using type theory to describe models of HoTT, being fibrant is an explicit structure external to a type; and that structure can itself be classified by a type, so that users of the type theory can prove that a type is fibrant by inhabiting a certain other type. As an example, consider the cubical sets model of type theory introduced by Cohen, Coquand, Huber and Mörtberg (CCHM) [13]. This model uses a presheaf topos on a particular category of cubes that we denote by □, generated by an interval object I, maps out of which represent paths. The corresponding presheaf topos □̂ has an associated category with families (CwF) [15] structure that gives a model of Extensional Martin-Löf Type Theory [27] in a standard way [19]. While not all types in this presheaf topos have a fibration structure in the CCHM sense, working within constructive set theory, CCHM show how to make a new CwF of fibrant types out of this presheaf CwF, one which is a model of Intensional Martin-Löf Type Theory with univalent universes and (some) higher inductive types [39]. Their model construction is rather subtle and complicated. Coquand noticed that the CCHM version of Kan fibration could be more simply described in terms of partial elements in the internal language of the topos. Some of us took up and expanded upon that suggestion in [30] and [10, Section 4].
Using Extensional Martin-Löf Type Theory with an impredicative universe of propositions (one candidate for the internal language of toposes), those works identify some relatively simple axioms for an interval and a collection of Kan-filling shapes (cofibrant propositions) that are sufficient to define a CwF of CCHM fibrations and prove most of its properties as a model of univalent foundations, for example, that Π, Σ, path and other types are fibrant. These internal language constructions can be used as an intermediate point in constructing a concrete model in cubical sets: the type theory of HoTT [39] can be translated into the internal language of the topos, which has a semantics in the topos itself in a standard way. The advantages of this indirection are two-fold. First, the definition and properties of the notion of fibration (both the CCHM notion [13] and other related ones [5,34]) are simpler when expressed in the internal language; and secondly, so long as the axioms are not too constraining, it opens up the possibility of finding new models of HoTT. Indeed, since our axioms do not rely on the infinitary aspects of Grothendieck toposes (such as having infinite colimits), it is possible to consider models of them in elementary toposes, such as Hyland's effective topos [16,38]. From another point of view, the internal language of the presheaf topos can itself be viewed as a two-level type theory [4,40] with fibrant and non-fibrant types, where being fibrant is classified by a type, and the constructions are a library of fibrancy instances for all of the usual types of type theory. Directed type theory [34] has a very similar story: it adds a directed interval type and a logic of partial elements to homotopy type theory, and using them defines some new notions of higher-dimensional structure, including co- and contravariant fibrations. However, the existing work describing models using an internal language [30,10,5] does not encompass universes of fibrant types. The lack of universes is a glaring omission for making models of HoTT, due to both their importance and the difficulty of defining them correctly. Moreover, it is an impediment to using internal language presentations of cubical type theory as a two-level type theory. For example, most constructions on higher inductive types, like calculating their homotopy groups, require a fibrant universe of fibrant types; and adding universes to directed type theory would have analogous applications. Finally, packaging the fibrant types together into a universe restores much of the convenience of working in a language where all types are fibrant: instead of passing around separate fibrancy proofs, one knows that a type is fibrant by virtue of the universe to which it belongs. In this paper, we address this issue by studying universes of fibrant types expressed in internal languages for models of cubical type theories. CCHM [13] define a universe concretely using a version of the Hofmann-Streicher universe construction in presheaf toposes [20]. This gives a classifier for their notion of fibration: the universe is equipped with a CCHM fibration that gives rise to every fibration (with small fibres) by re-indexing along a function into the universe. In this way one gets a model of a Tarski-style universe closed under whatever type-forming operations are supported by CCHM fibrations.
Thus, there is an appropriate semantic target for a universe of fibrant types, but neither [30] nor [10] gave a version of such a universe expressed in the internal language. This is for a good reason: [32, Remark 7.5] points out that there can be no internal universe of types equipped with a CCHM fibration that weakly classifies fibrations. We recall in detail why this is the case in Section 3, but the essence is that naïve axioms for a weak classifier for fibrations imply that a family of types, each member of which is fibrant, has to form a fibrant family; but this is not true for many notions of fibration, such as the CCHM one. To fix this issue, in Section 4 we enrich the internal language to a modal type theory with two context zones [33,14,36], inspired in particular by the fact that cubical sets are a model of Shulman's spatial type theory. In a judgement ∆ | Γ ⊢ a : A of this modal type theory, the context Γ represents the usual local elements of types in the topos, while the new context ∆ represents global ones. The dual context structure is that of an S4 necessity modality in modal logic, because a global element determines a local one, but global elements cannot refer to local elements. We use Shulman's term "crisp" for variables from ∆, and call the type theory crisp type theory, because we do not in fact use any of the modal type operators of his spatial type theory, but just Π-types whose domains are crisp. Using these crisp Π-types, we give axioms that specify a universe that classifies global fibrations; the modal structure forbids the internal substitutions that led to inconsistency. One approach to validating these universe axioms would be to check them directly in a cubical set model; but we can in fact do more work using crisp type theory as the internal language and reduce the universe axioms to a structure that is simpler to check in models. Specifically, in Theorem 5.2, we construct such a universe from the assumption that the interval I is tiny, which by definition means that its exponential functor (I → _) has a right adjoint (a global one, not an internal one; this is another example where crisp type theory is needed to express the distinction). The ubiquity of right adjoints to exponential functors was first pointed out by Lawvere [23] in the context of synthetic differential geometry. Awodey pointed out their occurrence in interval-based models of type theory in his work on various cube categories [7]. As far as we know, it was Sattler who first suggested their relevance to constructing universes in such models (see [35, Remark 8.3]). It is indeed the case that the interval object in the topos of cubical sets is tiny. Some ingenuity is needed to use the right adjoint to (I → _) to construct a universe with a fibration that gives rise to every other one up to equality, rather than just up to isomorphism; we employ a technique of Voevodsky [41] to do so. Finally, we describe briefly some applications in Section 6. First, our universe construction based on a tiny interval is the missing piece that allows a completely internal development of a model of univalent foundations based upon the CCHM notion of fibration, albeit internal to crisp type theory rather than ordinary type theory. Secondly, we describe a preliminary result showing that our axioms for universes are suitable for building type theories with hierarchies of universes, each with a different notion of fibration.
The constructions and proofs in this paper have been formalized in Agda-flat [2], an appealingly simple extension of Agda [3] that implements crisp type theory; see https://doi.org/10.17863/CAM.22369. Agda-flat was provided to us by Vezzosi as a by-product of his work on modal type theory and parametricity [29].

2 Internal description of fibrations

We begin by recalling from [32,10] the internal description of fibrations in presheaf models, using CCHM fibrations [13, Definition 13] as an example. Rather than using Extensional Martin-Löf Type Theory with an impredicative universe of propositions as in [32,10], here we use an intensional and predicative version, therefore keeping within a type theory with decidable judgements. 4 Our type theory of choice is the one implemented by Agda [3], whose assistance we have found invaluable for developing and checking the definitions. Adopting Agda-style syntax, dependent function types are written (x : A) → B x, or {x : A} → B x if the argument to the function is implicit; non-dependent function types are written (_ : A) → B, or just A → B. There is a non-cumulative hierarchy of Russell-style [25] universe types Set = Set 0 : Set 1 : Set 2 : Set 3 : ... Among Agda's inductive types we need identity types _ ≡ _ : {A : Set n} → A → A → Set n, which form the inductively defined family of types with a single constructor refl : {A : Set n}{x : A} → x ≡ x; and we need the empty inductive type ⊥ : Set, which has no constructors. Among Agda's record types (inductive types with a single constructor for which η-expansion holds definitionally) we need the unit type ⊤ : Set with constructor tt : ⊤; and dependent products (Σ-types), that we write as Σ x : A , B x, with pairing _,_ and projections fst and snd. This type theory can be interpreted in (the category with families of) any presheaf topos, such as the one defined below, so long as we assume that the ambient set theory has a countable hierarchy of Grothendieck universes; in particular, one could use a constructive ambient set theory such as IZF [1] with universes. We will use the fact that the interpretation of the type theory in presheaf toposes satisfies function extensionality and uniqueness of identity proofs:

funext : {A : Set n}{B : A → Set n}{f g : (x : A) → B x} → ((x : A) → f x ≡ g x) → f ≡ g    (1)

uip : {A : Set n}{x y : A}{p q : x ≡ y} → p ≡ q    (2)

Definition 2.1 (Presheaf topos of de Morgan cubical sets). Let □ denote the small category with finite products which is the Lawvere theory of De Morgan algebra (see [8, Chap. XI] and [37, Section 2]). Concretely, □ op consists of the free De Morgan algebras on n generators, for each n ∈ N, and the homomorphisms between them. Thus □ contains an object I that generates the others by taking finite products, namely the free De Morgan algebra on one generator. This object is the generic De Morgan algebra and in particular it has two distinct global elements, corresponding to the constants for the greatest and least elements. The topos of cubical sets [13], which we denote by □̂, is the category of Set-valued functors on □ op and natural transformations between them. The Yoneda embedding, written y : □ → □̂, sends I ∈ □ with its two distinct global elements to a representable presheaf I = yI with two distinct global elements. This interval I is used to model path types: a path in A from a 0 to a 1 is any morphism I → A that when composed with the distinct global elements gives a 0 and a 1 . The toposes used in other cubical models [9,6,5] vary the choice of algebra from the De Morgan case used above; see [11].
To describe all these cubical models using type theory as an internal language, we postulate the existence of an interval type I with two distinct elements, which we write as O and I:

I : Set    O : I    I : I    O≢I : O ≡ I → ⊥    (3)

Apart from an interval, the other data needed to define a cubical sets model of homotopy type theory is a notion of cofibration, which specifies the shapes of filling problems that can be solved in a dependent type. For this, CCHM [13] use a particular subobject of Ω ∈ □̂ (the subobject classifier in the topos □̂), called the face lattice; but other choices are possible [32]. Here, we avoid the use of the impredicative universe of propositions Ω and just assume the existence of a collection of "cofibrant" types in the first universe Set, including at least the empty type ⊥ (in Section 6, we will introduce more cofibrations, needed to model various type constructs):

cof : Set → Set    cof⊥ : cof ⊥

We call ϕ : Set cofibrant if cof ϕ holds, that is, if we can supply a term of that type. To define the fibrations as a type in the internal language we use two pieces of notation. First, the path functor associated with the interval I is

℘ : Set n → Set n
℘ A = I → A

Secondly, we define the following extension relation

_↗_ : {ϕ : Set}{A : Set n} → (ϕ → A) → A → Set n
t ↗ x = (u : ϕ) → t u ≡ x

Thus t ↗ x is the type of proofs that the partial element t : ϕ → A extends to the (total) element x : A. We will use this when t denotes a partial element of A of cofibrant extent, that is when we have a proof of cof ϕ.

Definition 2.2 (fibrations). The type isFib A of fibration structures for a family of types A : Γ → Set n over some type Γ : Set m consists of functions taking any path p : ℘ Γ in the base type to a composition structure in C(A • p):

isFib A = (p : ℘ Γ) → C (A • p)

Here C is some given function ℘ Set n → Set (1+n) (polymorphic in the universe level n) which parameterizes the notion of fibration. Then for each type Γ, the type Fib n Γ of fibrations over it with fibers in Set n consists of families equipped with a fibration structure

Fib n Γ = Σ A : (Γ → Set n) , isFib A

and there are re-indexing functions, given by composition of dependent functions (_ • _):

_[_] : Fib n Γ' → (Γ → Γ') → Fib n Γ

A CCHM fibration is the above notion of fibration for the composition structure CCHM : ℘ Set n → Set (1+n) from [13]:

CCHM P = (ϕ : Set)(u : cof ϕ)(p : (i : I) → ϕ → P i) → (Σ a 0 : P O , p O ↗ a 0) → (Σ a 1 : P I , p I ↗ a 1)

Thus the type CCHM P of CCHM composition structures for a path of types P : ℘ Set n consists of functions taking any dependently-typed path of partial elements p : (i : I) → ϕ → P i of cofibrant extent to a function mapping extensions of the path at one end, p O ↗ a 0, to extensions of it at the other end, p I ↗ a 1. When the cofibration is ⊥, isFib Γ A expands to the statement that for all paths p : I → Γ there is a function A(p O) → A(p I), so that this internal language type says that A is equipped with a transport function along paths in Γ. The use of cofibrant partial elements generalizes transport with a notion of path composition, which is used to show that path types are fibrant. Other notions of fibration follow the above definitions but vary the definition of C : ℘ Set n → Set (1+n); for example, generalized diagonal Kan composition [5]. Co/contravariant fibrations in directed type theory [34] also have the form of isFib for some C, but with ℘ being directed paths. Definition 2.2 illustrates the advantages of internal-language presentations; in particular, uniformity [13] is automatic. If Γ denotes an object of the cubical sets topos □̂, then Fib 0 Γ denotes an object whose global sections correspond to the elements of the set FTy(Γ) of families over Γ equipped with a composition structure as defined in [13, Definition 13].
Our goal now is to first recall that there can be no universe that weakly classifies these CCHM fibrations in an internal sense, and then move to a modal type theory where such a universe can be expressed.

3 The "no-go" theorem for internal universes

In this section we recall from [32, Remark 7.5] why there can be no universe that weakly classifies CCHM fibrations in an internal sense. Such a weak classifier would be given by the following data, where for simplicity we restrict attention to fibrations whose fibers are in the lowest universe, Set = Set 0:

U : Set 1
El : Fib 0 U
code : {Γ : Set} → Fib 0 Γ → Γ → U
Elcode : {Γ : Set}(Φ : Fib 0 Γ) → El [ code Φ ] ≡ Φ    (11)

Here U is the universe 5 and El is a CCHM fibration over it which is a weak classifier in the sense that any fibration Φ : Fib 0 Γ can be obtained from it (up to equality) by re-indexing along some function code Φ : Γ → U. (The word "weak" refers to the fact that we do not require there to be a unique function γ : Γ → U with El [ γ ] ≡ Φ.) We will show that the data in (11) implies that the interval must be trivial (O ≡ I), contradicting the assumption in (3). This is because (11) allows one to deduce that if a family of types A : Γ → Set has the property that each A x has a fibration structure when regarded as a family over the unit type ⊤, then there is a fibration structure for the whole family A; and yet there are families where this cannot be the case. For example, consider the family P : I → Set with P i = (O ≡ i). For each i : I, the type P i has a fibration structure π i : isFib (λ _ → P i), because of uniqueness of identity proofs (2). But the family as a whole satisfies isFib I P → ⊥, because if we had a fibration structure α : isFib I P, then we could apply it to

ϕ = ⊥    u = cof⊥    p = λ i → ⊥elim    z = (refl , λ v → ⊥elim v)

(where ⊥elim : {A : Set} → ⊥ → A is the elimination function for the empty type) to get α id ϕ u p z : (Σ a 1 : P I , p I ↗ a 1) and hence O≢I (fst (α id ϕ u p z)) : ⊥. From this we deduce the following "no-go" 6 theorem for internal universes of CCHM fibrations.

Theorem 3.1. The existence of the data (11) for CCHM fibrations is contradictory. More precisely, if IntUniv : Set 3 is the dependent record type with fields U, El, code and Elcode as in (11), then there is a term of type IntUniv → ⊥.

Proof. 7 Suppose we have an element of IntUniv and hence functions as in (11). Then taking P to be λ i → (O ≡ i) and using the family π i of fibration structures on each type P i mentioned above, we get:

Φ = El [ (λ i → code ((λ _ → P i) , π i) tt) ] : Fib 0 I    (12)

Using Elcode and function extensionality (1), it follows that there is a proof u : fst Φ ≡ P, namely u = funext (λ i → cong (λ x → fst x tt) (Elcode ((λ _ → P i) , π i))), where cong is the usual congruence property of equality. From that and snd Φ we get an element of isFib I P. But we saw above how to transform such an element into a proof of ⊥. So altogether we have a proof of IntUniv → ⊥.

Remark 3.2. This counterexample generalizes to other notions of fibration: it is not usually the case that any type family A : Γ → Set for which A x is fibrant over ⊤ for all x : Γ, is fibrant over Γ. The above proof should be compared with the proof that there is no "fibrant replacement" type-former in Homotopy Type System (HTS); see https://ncatlab.org/homotopytypetheory/show/Homotopy+Type+System#fibrant_replacement. Theorem 5.1 below provides a further example of a global construct that does not internalize.

4 Crisp type theory

The proof of Theorem 3.1 depends upon the fact that in the internal language the code function can be applied to elements with free variables. In this case it is the variable i : I in code ((λ _ → P i) , π i) tt; by abstracting over it we get a function I → U and re-indexing El along this function gives the offending fibration (12).
Nevertheless, the cubical sets presheaf topos does contain a (univalent) universe which is a CCHM fibration classifier, but only in an external sense. Thus there is an object U in the topos and a global section El : 1 → Fib₀ U with the property that for any object Γ and morphism Φ : 1 → Fib₀ Γ, there is a morphism code Φ : Γ → U so that Φ is equal to the composition Fib₀(code Φ) ∘ El : 1 → Fib₀ Γ; see [13, Definition 18] for a concrete description of U. The internalization of this property replaces the use of global elements 1 → Γ of an object by local elements, that is, morphisms X → Γ where X ranges over a suitable collection of generating objects (for example, the representable objects in a presheaf topos); and we have seen that such an internalized version cannot exist. Nevertheless, we would like to explain the construction of universes like U using some kind of type-theoretic language that builds on Section 2. So we seek a way of manipulating global elements of an object Γ within the internal language. One cannot do so simply by quantifying over elements of the type Γ, because internal quantification ranges over arbitrary local elements rather than global ones. Instead, we pass to a modal type theory that can speak about global elements, which we call crisp type theory. Its judgements, such as ∆ | Γ ⊢ a : A, have two context zones, where ∆ represents global elements and Γ the usual, local ones. The context structure is that used for an S4 necessitation modality [33,14,36], because a global element from ∆ can be used locally, but global elements cannot depend on local variables from Γ. Following [36], we say that the left-hand context ∆ contains crisp hypotheses about the types of variables, written x :: A. The interpretation of crisp type theory in cubical sets makes use of the comonad ♭ that sends a presheaf A to the constant presheaf on the set of global sections of A; thus (♭A)(X) ≅ A(1) for all objects X of the cube category (where 1 is its terminal object). Then a judgement ∆ | Γ ⊢ a : A describes the situation where ∆ is a presheaf, Γ is a family of presheaves over ♭∆, A is a family over Σ(♭∆) Γ, and a is an element of that family. The rules of crisp type theory are designed to be sound for this interpretation. Compared with ordinary type theory, the key constraint is that types in the crisp context and terms substituted for crisp variables depend only on crisp variables. The crisp variable rule allows a crisp hypothesis to be used locally, ∆, x :: A, ∆′ | Γ ⊢ x : A; the (admissible) substitution rule allows a term a with ∆ | ⋄ ⊢ a : A (where ⋄ stands for the empty list, so a and A may only depend upon the crisp variables from ∆) to be substituted for such a hypothesis, yielding ∆, ∆′[a/x] | Γ[a/x] ⊢ b[a/x] : B[a/x]. The semantics of the variable rule, which says that global elements can be used locally, uses the counit ε_A : ♭A → A of the comonad mentioned above. The other rules of crisp type theory (those for Π types, Σ types, etc.) carry the crisp context along. For our application we do not need a type-former for ♭, but instead make use of crisp Π types (see, e.g., [14,28]), that is, Π types (x :: A) → B whose domain is crisp, with βη judgemental equalities. In these rules, because the argument variable x is crisp, its type A, and the term a to which the function f is applied, must also be crisp. We also use crisp induction for identity types [36], that is, identity elimination with a family (y :: A)(p :: x ≡ y) → C(y, p) whose parameters are crisp variables, which is given by a term of type

{A :: Setₙ}{x :: A}(C : (y :: A)(p :: x ≡ y) → Setₙ)(z : C x refl)(y :: A)(p :: x ≡ y) → C y p    (15)

together with a β judgemental equality.
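The rules just described can be rendered as inference rules as follows; this is our own LaTeX transcription in the style of [36], so precise side conditions should be checked against that reference:

\[
\frac{}{\;\Delta, x :: A, \Delta' \mid \Gamma \vdash x : A\;}
\qquad
\frac{\Delta \mid \diamond \vdash a : A \qquad \Delta, x :: A, \Delta' \mid \Gamma \vdash b : B}
     {\Delta, \Delta'[a/x] \mid \Gamma[a/x] \vdash b[a/x] : B[a/x]}
\]

\[
\frac{\Delta, x :: A \mid \Gamma \vdash b : B}
     {\Delta \mid \Gamma \vdash \lambda x.\, b : (x :: A) \to B}
\qquad
\frac{\Delta \mid \Gamma \vdash f : (x :: A) \to B \qquad \Delta \mid \diamond \vdash a : A}
     {\Delta \mid \Gamma \vdash f\, a : B[a/x]}
\]

The second row gives the introduction and application rules for crisp Π types; the requirement that the argument a be derivable in an empty local context is exactly what blocks the problematic application in the no-go proof.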
Remark 4.1 (Presheaf models of crisp type theory). Crisp type theory is motivated by the specific presheaf topos of cubical sets from Definition 2.1. However, it seems that very little is required of a category C for the presheaf topos Ĉ to soundly interpret it using the comonad ♭ = p* ∘ p₊, where p₊ takes the global sections of a presheaf and its left adjoint p* sends sets to constant presheaves. This comonad preserves finite limits (because it is the composition of functors that each have a left adjoint: p* is isomorphic to the functor given by precomposition with C → 1 and hence has a left adjoint given by left Kan extension along C → 1). Although the details remain to be worked out, it appears that to model crisp type theory with crisp Π types and crisp identification induction (and moreover a ♭ modality with crisp induction, which we do not use here), the only additional condition needed is that this comonad is idempotent (meaning that the comultiplication δ : ♭ → ♭ ∘ ♭ is an isomorphism). This idempotence holds iff Ĉ is a connected topos, which is the case iff C is a connected category, for example, when C has a terminal object. If it does have a terminal object, then Ĉ is a local topos [21, Sect. C3.6] and ♭ has a right adjoint; in which case, conjecturally [36, Remark 7.5], one gets a model of the whole of Shulman's spatial type theory, of which crisp type theory is a part. In fact the cube category does not just have a terminal object, it has all finite products (as does any Lawvere theory), and from this it follows that the cubical sets topos is not just local, but also cohesive [24].

Remark 4.2 (Agda-flat). Vezzosi has created a fork of Agda, called Agda-flat [2], which allows us to explore crisp type theory. It adds the ability to use crisp variables x :: A in places where ordinary variables x : A may occur in Agda, and checks the modal restrictions in the above rules. For example, Agda-flat quite correctly rejects the following attempted application of a crisp-Π function to an ordinary argument, wrong : (A :: Setₙ)(B : Setₘ)(f : (_ :: A) → B)(x : A) → B with wrong A B f x = f x, while the variant with x :: A succeeds. This is a simple example of keeping to the modal discipline that crisp type theory imposes; for more complicated cases, such as occur in the proof of Theorem 5.2 below, we have found Agda-flat indispensable for avoiding errors. However, Agda-flat implements a superset of crisp type theory and more work is needed to understand their precise relationship. For example, Agda's ability to define inductive types leads to new types in Agda-flat, such as the ♭ modality itself; and its pattern-matching facilities allow one to prove properties of ♭ that go beyond crisp type theory. Agda allows one to switch off pattern-matching in a module; to be safe we do that as far as possible in our development. Installation instructions for Agda-flat can be found at https://doi.org/10.17863/CAM.22369.

5 Universes from tiny intervals

In crisp type theory, to avoid the inconsistency in the "no-go" Theorem 3.1, we can weaken the definition of a universe in (11) by taking code and Elcode to be crisp functions of fibrations Φ (and implicitly, of the base type Γ of the fibration). For if code has type {Γ :: Set}(Φ :: Fib₀ Γ)(x : Γ) → U, then the proof of a contradiction is blocked when in (12) we try to apply code to Φ = ((λ _ → P i) , π i), which depends upon the local variable i : I. Indeed we show in this section that, given an extra assumption about the interval type I that holds for cubical sets, it is possible to define a universe with such crisp coding functions which moreover are unique, so that one gets a classifying fibration, rather than just a weakly classifying one.
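For contrast with (11), the weakened classifier data expressible in crisp type theory can be displayed as follows (our own LaTeX transcription of the types stated in prose above; Elcode is the evident crisp analogue of the equation in (11)):

\[
\mathsf{code} : \{\Gamma :: \mathsf{Set}\}(\Phi :: \mathsf{Fib}_0\,\Gamma) \to \Gamma \to U,
\qquad
\mathsf{Elcode} : \{\Gamma :: \mathsf{Set}\}(\Phi :: \mathsf{Fib}_0\,\Gamma) \to \mathsf{El}[\mathsf{code}\,\Phi] \equiv \Phi .
\]

Because Φ is now a crisp hypothesis, code can no longer be applied to the i-dependent fibration used in the proof of Theorem 3.1.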
Recall from Definition 2.1 that in the cubical sets model, the type I denotes the representable presheaf yI on the interval object of the cube category. Since the cube category has finite products, there is a functor _ × I on it. Pre-composition with this functor induces an endofunctor (_ × I)* on presheaves which has left and right adjoints, given by left and right Kan extension [26, Chap. X] along _ × I. Hence by the Yoneda Lemma, for any presheaf F and object X,

((_ × I)* F)(X) = F(X × I) ≅ (℘ F)(X)

naturally in both X and F. It follows that the exponential functor ℘ = I → _ is naturally isomorphic to (_ × I)* and hence not only has a left adjoint (corresponding to product with I) but also a right adjoint. The significance of objects in a category with finite products that are not only exponentiable (product with them has a right adjoint), but also whose exponential functor has a right adjoint, was first pointed out by Lawvere in the context of synthetic differential geometry [23]. He called such objects "atomic", but we will follow later usage [42] and call them tiny. Thus the interval in cubical sets is tiny, and we have a right adjoint to the path functor ℘ that we denote by √. So for each presheaf F there is a bijection, natural in both arguments, between morphisms ℘ A → F and morphisms A → √F.

Given Γ and A : Γ → Set, from Definition 2.2 we have that fibration structures 1 → isFib Γ A correspond to sections of fst : (Σ p : ℘ Γ , C(A ∘ p)) → ℘ Γ and hence, transposing across the adjunction ℘ ⊣ √, to morphisms Γ → √(Σ p : ℘ Γ , C(A ∘ p)) making the outer square commute in the pullback diagram displayed below. We therefore have that fibration structures for A correspond to sections of the pullback π₁ : R_Γ A → Γ of √fst along the unit η_Γ : Γ → √(℘ Γ) of the adjunction at Γ (which is the adjoint transpose of id : ℘ Γ → ℘ Γ). This characterization of fibration structure does not depend on the particular definition of C, so should apply to many notions of fibration. We will show how it leads to the construction of a universe U = R_Set id and family π₁ : R_Set id → Set which is a classifier for fibrations. However, there are two problems that have to be solved in order to carry out the construction within type theory. First, for Elcode in (11) to be an equality (rather than just an isomorphism), one needs the choice of R_Γ A to be strictly functorial with respect to re-indexing along Γ (and hence to be a dependent right adjoint in the sense of [12]). Secondly, one cannot use ordinary type theory as the internal language to formulate the construction, because the right adjoint to ℘ does not internalize, as the following theorem shows.

Theorem 5.1. There is no function √ : Set → Set together with a family of isomorphisms (A → √B) ≅ (℘ A → B) natural in A.

Proof. It is an elementary fact about adjoint functors that such a family of natural isomorphisms is also natural in B. Note that ℘ 𝟙 ≅ 𝟙. So if we had such a family, then we would also have isomorphisms √B ≅ (𝟙 → √B) ≅ (℘ 𝟙 → B) ≅ (𝟙 → B) ≅ B, natural in B. Therefore √ would be isomorphic to the identity functor and hence so would be its left adjoint ℘. Hence I → _ and _ would be isomorphic functors, which implies (by the internal Yoneda Lemma) that I is isomorphic to the terminal object, contradicting the fact that I has two distinct global elements.

We will solve the first of the two problems mentioned above in the same way that Voevodsky [41] solves a similar strictness problem (see also [12, Section 6]): apply √ once and for all to the displayed universe and then re-index, rather than vice versa (as done above). The second problem is solved by using the crisp type theory of the previous section to make the right adjoint √ suitably global. The axioms we use are given in Fig. 1.
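In LaTeX notation, the two facts used repeatedly below are the transposition bijection of the adjunction ℘ ⊣ √ and the resulting pullback characterization of fibration structures (our own transcription of the construction described in prose above):

\[
\frac{\wp\,A \longrightarrow B}{A \longrightarrow \sqrt{B}}
\qquad\qquad
\begin{array}{ccc}
R_\Gamma A & \longrightarrow & \sqrt{\Sigma\,p : \wp\,\Gamma .\; C(A \circ p)} \\[2pt]
{\scriptstyle \pi_1}\big\downarrow & & \big\downarrow {\scriptstyle \sqrt{\mathsf{fst}}} \\[2pt]
\Gamma & \xrightarrow{\;\;\eta_\Gamma\;\;} & \sqrt{\wp\,\Gamma}
\end{array}
\]

with fibration structures for A corresponding to sections of π₁.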
The function R gives the operation for transposing (global) morphisms across the adjunction ℘ √ , with inverse L (the bijection being given by RL and LR); and R℘ is the naturality of this operation. The other properties of an adjunction follow from these, in particular its functorial action √ ' : {A :: Set n }{B :: Set m }(f :: Fig. 1 assumes that the right adjoint to I (_) preserves universe levels. The soundness of this for relies on the fact that this adjoint is given by right Kan extension [26, Chap. X] along _ × I : → and hence sends a presheaf valued in the nth Grothendieck universe to another such. Theorem 5.2 (Universe construction 10 ). For fibrations as in Definition 2.2 with any definition of composition structure C (e.g. the CCHM one in (10)), assuming axioms (1)-(4) and a tiny (Fig. 1) interval, there is a universe U equipped with a fibration El which is Proof. Consider the display function associated with the first universe: We have C : ℘ Set 0 Set 1 and hence using the transpose operation from Fig. 1, R C : Set 0 √ Set 1 . We define U : Set 2 by taking a pullback: Transposing this square across the adjunction ℘ √ gives pr 1 • L π 2 = C • ℘'π 1 : ℘ U Set 1 . Considering the first and second components of L π 2 , we have L π 2 ≡ C • ℘'π 1 , υ for some υ : (p : ℘ U) C(℘'π 1 p); hence υ is an element of isFib U π 1 and so we can define So it just remains to construct the functions in (16). Given Γ :: Set and Φ = (A, α) :: Fib 0 Γ, we have α :: isFib Γ A = (p : ℘ Γ) C(A • p). So the outer square in the left-hand diagram below commutes: Elt 1 Transposing across the adjunction ℘ √ , this means that the outer square in the right-hand diagram also commutes and therefore induces a function code Φ : Γ U to the pullback. So there are proofs of π 1 • code Φ ≡ A and π 2 • code Φ ≡ R C • ℘'A , α . Transposing the latter back across the adjunction gives a proof of L π 2 • ℘'(code Φ) ≡ C • ℘'A , α ; and since L π 2 ≡ C • ℘'π 1 , υ , this in turn gives a proof of υ • ℘'(code Φ) ≡ α. Combining this with the proof of The above theorem can be generalized by replacing the particular universe id : Set Set by an arbitrary one E 0 : U 0 Set. So long as the composition structure C lands in U 0 , one can use the above method to construct a universe of fibrant types from among the U 0 types. 11 The application of this generalization we have in mind is to directed type theory; for example one can first construct the universe of fibrant types in the CCHM sense and then make a universe of covariant discrete fibrations in the Riehl-Shulman [34] sense from the fibrant types (repeating the construction with a different interval object). Remark 5.4. The results in this section only make use of the fact that the functor √ : → is right adjoint to the exponential I (_) and we saw at the beginning of this section why such a right adjoint exists. It is possible to give an explicit description of presheaves of the form √ Γ, but so far we have not found such a description to be useful. Applications Models. Theorem 5.2 is the missing piece that allows a completely internal development of a model of univalent foundations based upon the CCHM notion of fibration, albeit internal to crisp type theory rather than ordinary type theory. One can define a CwF in crisp type theory whose objects are crisp types Γ :: Set 2 , whose morphisms are crisp functions γ :: Γ Γ, whose families are crisp CCHM fibrations Φ = (A, α) :: Fib 0 Γ and whose elements are crisp dependent functions f :: (x : Γ) A x. 
To see that this gives a model of univalent foundations one needs to prove: (a) The CwF is a model of intensional type theory with Π-types and inductive types (Σ-types, identity types, booleans, W -types, . . . ). (b) The type U :: Set 2 constructed in Theorem 5.2 is fibrant (as a family over the unit type). (c) The classifying fibration Φ :: Fib 0 U satisfies the univalence axiom in this CwF. Although we have yet to complete the formal development in Agda-flat, these should be provable from axioms (1)-(4) and Fig. 1, together with some further assumptions about the interval object and cofibrant types listed in Fig. 2. Part (a) was carried out in prior work, albeit in the setting with an impredicative universe of propositions [32]. In the predicative version considered here, we replace the impredicative universe of propositions with axioms asserting that being cofibrant is a mere proposition (isPropcof), that cofibrant types are mere propositions (cofisProp) and satisfy propositional extensionality (cofExt). These axioms are satisfied by provided we interpret cof : Set Set as cof A = ∃ϕ : Ω , ϕ ∈ Cof ∧ A ≡ {_ : | ϕ}, using the subobject Cof Ω corresponding to the face lattice in [13] (see [32,Definition 8.6]). Axioms cofO, cofI, cofOr, cofAnd, cof∀I and strax correspond to the axioms ax 5 -ax 9 from [32]; in strax, ∼ = is the usual internal statement of isomorphism. cofAnd is the dominance axiom that guarantees that cofibrations compose. Note that axiom cofOr uses an operation sending mere propositions ϕ and ψ to the mere proposition ϕ ∨ ψ that is the propositional truncation of their disjoint union; the existence of this operation either has to be postulated, or one can add axioms for quotient types [18, Section 3.2.6.1] to crisp type theory, (of which propositional truncation is an instance), in which case function extensionality (1) is no longer needed as an axiom, since it is provable using quotient types [39, Section 6.3]. Since in this paper we have taken a CCHM fibration to just give a composition operation for cofibrant partial paths from O to I and not vice versa, in Fig. 2 we have postulated a path-reversal operation rev; this and the other axioms for I in that figure suffice to give a "connection algebra" structure on I [32, axioms ax 3 and ax 4 ]. Part (b) can be proved using a version of the glueing operation from [13], which is definable within crisp type theory as in [32,Section 6] and [10,Section 4.3.2]. The strictness axiom strax in Fig. 2 is needed to define this; and the assumption that cofibrant types are closed under I-indexed ∀ (cof∀I) is used to define the appropriate fibration structure for glueing. Part (c) can be proved as in [31, Section 6] using a characterization of univalence somewhat simpler than the original definition of Voevodsky [39, Section 2.10]. The axiom strax gets used to turn isomorphisms into paths; and the axiom cof∀I is used to "realign" fibration structures that agree on their underlying types (see [31,Lemma 6.2]). Remark 6.1 (The interval is connected). Fig. 2 does not include an axiom asserting that the interval is connected, because that is implied by its tinyness (Fig. 1). Connectedness was postulated as ax 1 in [32] and used to prove that CCHM fibrations are closed under inductive type formers (and in particular that the natural number object is fibrant). The proof [32,Thm 8.2] that the interval in cubical sets is connected essentially uses the fact that is a cohesive topos (Remark 4.1). 
However, it also follows directly from the tinyness property: connectedness holds iff (I → B) ≅ B, where B = 𝟙 + 𝟙 is the type of Booleans. Since we postulate that I → _ has a right adjoint, it preserves this coproduct and hence (I → B) ≅ (I → 𝟙) + (I → 𝟙) ≅ 𝟙 + 𝟙 = B.

Remark 6.2 (Alternative models). We have focussed on axioms satisfied by the cubical sets topos and the CCHM notion of fibration in that presheaf topos. However, the universe construction in Theorem 5.2 also applies to the cartesian cubical set models [5], and we expect it is possible to give proofs in crisp type theory of its fibrancy and univalence as well. In this paper we only consider "cartesian" path-based models of type theory, in which a path is an arbitrary function out of an interval object, or in other words, the path functor is given by an exponential. The models in [22] and [9] are not cartesian in that sense: the path functors they use are right adjoint to certain functorial cylinders [17] not given by cartesian product. However, those path functors do have right adjoints (given by right Kan extension along suitable "shift" functors on the domain category of the presheaf toposes involved), and universes in these models can be constructed using the method of Theorem 5.2. (Our Agda proof of that theorem does not depend upon the path functor being an actual exponential.) A proof in crisp type theory that those universes are fibrant and univalent may require a modification of our axiomatic treatment of cofibrancy; we leave this for future work.

Universe hierarchies. Given that there are many notions of fibration that one may be interested in, it is natural to ask how relationships between them induce relationships between universes of fibrant types. As motivating examples, we might want a cubical type theory with a universe of fibrations with regularity, an extra strictness corresponding to the computation rule for identity types in intensional type theory; or a three-level directed type theory with non-fibrant, fibrant, and co/contravariant universes. Towards building such hierarchies, in the companion code we have shown in crisp type theory that universes are functorial in the notion of fibration they encapsulate: when one notion of fibrancy implies another, the first universe includes the second.

Proposition 6.3. Let C₁, C₂ : ℘ Setₙ → Set_{1⊔n} be two notions of composition, isFib₁ and isFib₂ the corresponding fibration structures, and U₁ and U₂ the corresponding classifying universes. A morphism of fibration structures is a function f_{Γ,A} : isFib₁ Γ A → isFib₂ Γ A for all Γ and A, such that f is stable under re-indexing along any h : ∆ → Γ. Then a morphism of fibration structures f induces a function U₁ → U₂, and this preserves identity and composition.

Conclusion

Since the appearance of the CCHM [13] constructive model of univalence, there has been a lot of work aimed at analysing what makes this model tick, with a view to simplifying and generalizing it. Some of that work, for example by Gambino and Sattler [17,35], uses category theory directly, and in particular techniques associated with the notion of Quillen model structure. Here we have continued to pursue the approach that uses a form of type theory as an internal language in which to describe the constructions associated with this model of univalent foundations [32,10]. For those familiar with the language of type theory, we believe this provides an appealingly simple and accessible description of the notion of fibration and its properties in the CCHM model and in related models.
We recalled why there can be no internal description of the univalent universe itself if one uses ordinary type theory as the internal language. Instead we extended ordinary type theory with a suitable modality and then gave a universe construction that hinges upon the tinyness property enjoyed by the interval in cubical sets. We call this language crisp type theory and our work inside it has been carried out and checked using an experimental version of Agda provided by Vezzosi [2].
Six-meson amplitude in QCD-like theories

We calculate the relativistic six-meson scattering amplitude at low energy within the framework of QCD-like theories with n degenerate quark flavors at next-to-leading order in the chiral counting. We discuss the cases of complex, real and pseudo-real representations, i.e. with global symmetry and breaking patterns SU(n)×SU(n)/SU(n) (extending the QCD case), SU(2n)/SO(2n), and SU(2n)/Sp(2n). In the case of the one-particle-irreducible part, we obtain analytical expressions in terms of 10 six-meson subamplitudes based on the flavor and group structures. We extend our previous results obtained within the framework of the O(N+1)/O(N) non-linear sigma model, with N being the number of meson flavors. This work allows for studying a number of properties of six-particle amplitudes at one-loop level.

I. INTRODUCTION

Quantum chromodynamics (QCD), the fundamental theory of the strong interaction, becomes non-perturbative at low energy and is therefore impractical for phenomenology in that regime. From the large-distance perspective, the fundamental quark and gluon degrees of freedom are effectively replaced by composite colorless states, the lightest of which are the mesons. These can be approximately interpreted as the Nambu-Goldstone bosons of the associated spontaneous breaking of the chiral symmetry of massless QCD. With appropriate explicit symmetry breaking added to account for quark masses and non-strong interactions, the resulting effective field theory (EFT) is known as chiral perturbation theory (ChPT) [1-3] and is commonly used with great success for low-energy hadron phenomenology. See Refs. [4,5] for modern introductions to ChPT.

There has been recent interest in the 3 → 3 meson scattering amplitude driven by advances in lattice QCD [6-18]. While many ChPT observables are known to high loop level, the six-meson amplitude was only recently calculated to one-loop level [19], and then only for two quark flavors, i.e. a meson spectrum of only pions. The case of three or more flavors is largely unexplored; the tree-level part is known up to next-to-next-to-next-to-leading order (N³LO) in the massless case [20]. The leading-order (LO) massless pion case was initially done with current-algebra methods and predates ChPT [21,22].

While QCD is the canonical example, strongly coupled gauge theories can have different patterns of spontaneous symmetry breaking. These were first discussed in the context of technicolor theories [23-25]. When the gauge group is vector-like and all fermions have the same mass, only three patterns show up, as discussed in Ref. [26]; earlier work can be traced from there. If all n fermions are in a complex representation, the global symmetry-breaking pattern is SU(n)×SU(n)/SU(n); a real representation leads instead to SU(2n)/SO(2n), and a pseudo-real one to SU(2n)/Sp(2n).

For the numerical illustrations we choose a symmetric kinematic configuration in which the total three-momentum sums identically to zero. In this particular kinematic setting, we plot the flavor-stripped amplitudes with respect to p and show the results in Sec. IV. Our conclusions are shortly discussed in Sec. V, followed by several technical appendices that fix the notation and explain further subtleties and broader context. Explicit expressions for our main result, the NLO six-meson amplitude, in terms of deorbited group-universal subamplitudes can be found in Appendix D. The analytical work in this manuscript was done both using Wolfram Mathematica with the FeynCalc package [40-42] and a FORM [43] implementation. The numerical results use LoopTools [44,45].
II-A. Lagrangian

We consider a theory of n fermions with some symmetry group G, which is spontaneously broken to a subgroup H. This gives rise to an EFT whose degrees of freedom are pseudo-Nambu-Goldstone bosons transforming under the quotient group G/H. In analogy with the QCD case, we will refer to these as 'mesons'. We choose G/H from the patterns of symmetry breaking present in the QCD-like theories described in the introduction. The mesons are parametrized through a flavor-space matrix field u, also called the Nambu-Goldstone boson matrix. In addition, the Lagrangian can be extended in terms of vector, axial-vector, scalar and pseudoscalar external fields [2,3]. These correspond to vector and scalar sources for conserved and broken generators in general. The symmetry may be explicitly broken by introducing quark masses in the scalar external field. Except for the definition of the decay constant and the introduction of quark masses, we do not need external fields in this work. The Lagrangian for meson-meson scattering at NLO relevant for all the discussed theories can be written as

L = L⁽²⁾ + L⁽⁴⁾ ,    (1)

separating the LO and NLO terms in the chiral counting. (The NNLO terms L⁽⁶⁾ [46] and N³LO terms L⁽⁸⁾ [47] are also known but not used here. Recall that the chiral counting order of an ℓ-loop diagram with n_k vertices from L⁽ᵏ⁾ is m = 2 + 2ℓ + Σ_k n_k(k − 2); thus, NLO (m = 4) diagrams have either one loop or one vertex from L⁽⁴⁾.) These take the form

L⁽²⁾ = (F²/4) ⟨u_μ u^μ + χ₊⟩ ,    (2)

L⁽⁴⁾ = L₀ ⟨u^μ u^ν u_μ u_ν⟩ + L₁ ⟨u^μ u_μ⟩⟨u^ν u_ν⟩ + L₂ ⟨u^μ u^ν⟩⟨u_μ u_ν⟩ + L₃ ⟨u^μ u_μ u^ν u_ν⟩ + L₄ ⟨u^μ u_μ⟩⟨χ₊⟩ + L₅ ⟨u^μ u_μ χ₊⟩ + L₆ ⟨χ₊⟩² + L₇ ⟨χ₋⟩² + (L₈/2) ⟨χ₊χ₊ + χ₋χ₋⟩ .    (3)

Above, ⟨···⟩ denotes a flavor-space trace over n × n matrices for the SU and 2n × 2n matrices for the SO and Sp cases. Moreover,

u_μ = i(u† ∂_μ u − u ∂_μ u†) ,    (4)
χ± = u† χ u† ± u χ† u .    (5)

Under G, both u_μ and χ± transform as X → hXh†, where h ∈ H. Above, as usual, χ ≡ 2B₀ M, with M = s − ip, where s (p) are the (pseudo)scalar external fields and B₀ is a parameter related to the scalar singlet quark condensate ⟨0|q̄q|0⟩ (not to be confused with the integral B₀ in Appendix A). For our application, and in the case with all the mesons having the same (lowest-order) mass M, we can simply put χ = M² 𝟙. The Nambu-Goldstone boson matrix u can be parametrized as

u = exp(i φᵃ tᵃ / (√2 F)) ,    (6)

where φᵃ denote the pseudoscalar meson fields and tᵃ are Hermitian generators of G/H normalized to ⟨tᵃ tᵇ⟩ = δᵃᵇ. Besides the 'exponential' parametrization (6), there are other options available in the literature. For practical calculations, it is useful to employ several different parametrizations in parallel. This serves as a neat cross-check since, as anticipated, the final amplitude should be parametrization-independent. We discuss the most general reparametrization in Appendix B. In the case of the six-meson amplitude at NLO, 18 free parameters appear in the expansion of u in terms of φᵃ tᵃ. We have checked that all our physical results are independent of these parameters.

II-B. Flavor structures

Each meson φᵃ carries a flavor index a, which appears in the amplitude carried by a G/H generator residing in a flavor-space trace. When a pair of fields is Wick contracted, the corresponding flavor indices are summed over; under SU, the resulting expressions are evaluated using the Fierz identities

⟨tᵃ A⟩⟨tᵃ B⟩ = ⟨AB⟩ − (1/n) ⟨A⟩⟨B⟩ ,    (7a)
⟨tᵃ A tᵃ B⟩ = ⟨A⟩⟨B⟩ − (1/n) ⟨AB⟩ ,    (7b)

where A and B are arbitrary flavor-space matrices.
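The SU identities (7) are easy to verify numerically. The following minimal Python sketch builds a Gell-Mann-type generator basis with ⟨tᵃ tᵇ⟩ = δᵃᵇ (our own helper construction, used only for this check, not code from the paper) and tests both identities on random matrices:

import numpy as np

def su_generators(n):
    """Hermitian traceless n x n basis normalized to tr(ta tb) = delta_ab."""
    gens = []
    for i in range(n):
        for j in range(i + 1, n):
            t = np.zeros((n, n), dtype=complex)   # symmetric off-diagonal
            t[i, j] = t[j, i] = 1 / np.sqrt(2)
            gens.append(t)
            t = np.zeros((n, n), dtype=complex)   # antisymmetric off-diagonal
            t[i, j], t[j, i] = -1j / np.sqrt(2), 1j / np.sqrt(2)
            gens.append(t)
    for k in range(1, n):                          # Cartan (diagonal) generators
        d = np.zeros(n)
        d[:k], d[k] = 1.0, -float(k)
        gens.append(np.diag(d).astype(complex) / np.sqrt(k * (k + 1)))
    return gens

n, rng = 4, np.random.default_rng(1)
ts = su_generators(n)                              # n^2 - 1 generators
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
tr = np.trace
print(abs(sum(tr(t @ A) * tr(t @ B) for t in ts) - (tr(A @ B) - tr(A) * tr(B) / n)))
print(abs(sum(tr(t @ A @ t @ B) for t in ts) - (tr(A) * tr(B) - tr(A @ B) / n)))

Both differences print as numbers of order 1e-13, confirming Eq. (7) and, in particular, the origin of the 1/n singlet-subtraction terms.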
The analogous identities for SO and Sp are quite similar, so we will use the abbreviation SOp in correlation with ±: SO is paired with +, and Sp with −. Thus, for SOp one uses

⟨tᵃ A⟩⟨tᵃ B⟩ = ½ [⟨AB⟩ ± ⟨AB†⟩] − (1/(2n)) ⟨A⟩⟨B⟩ ,    (8a)
⟨tᵃ A tᵃ B⟩ = ½ [⟨A⟩⟨B⟩ ± ⟨AB†⟩] − (1/(2n)) ⟨AB⟩ .    (8b)

Here, B must be a string of generators tᵃ or the unit matrix, so † effectively denotes reversal. Note that the implicitly summed index a has different dimensions in Eqs. (7) and (8), corresponding to the number of mesons: n² − 1 under SU and 2n² ± n − 1 under SOp. Note also that, due to the formally identical Lagrangians, Eqs. (7) and (8) are the only source of formal dissimilarity between the amplitudes for the different cases.

II-C. Low-energy constants and renormalization

At LO, we have two low-energy parameters: the mass M (related to the aforementioned B₀) and the decay constant F. At NLO, 9 more constants (LECs) Lᵢ accompanying additional allowed chirally symmetric structures (operators) relevant for our application appear, as shown in Eq. (3). These constants contain UV-divergent parts represented by coefficients Γᵢ, which are uniquely fixed from the requirement that physical NLO amplitudes should be finite, and UV-finite parts Lᵢʳ ≡ Lᵢʳ(μ), renormalized at a scale μ, that are free parameters in the theory:

Lᵢ = Lᵢʳ(μ) + Γᵢ λ ,    λ = (cμ)^{d−4} / (16π² (d − 4)) .

Above, d is the space-time dimension in the vicinity of 4 and c is such that

log c = −½ [log 4π + Γ′(1) + 1] .    (11)

Consequently, in terms of ε = 2 − d/2, one writes (to NLO) λ = −κ/(2ε) + O(ε⁰), with κ ≡ 1/(16π²). The extra '+1' term in Eq. (11) with respect to the standard MS renormalization scheme is customary in ChPT. Studying and renormalizing the four-meson amplitude at NLO (i.e., considering one-loop diagrams with vertices from L⁽²⁾ and tree-level counterterms from L⁽⁴⁾) determines all the Γᵢ except for one: the divergent part of L₇ remains unset. It can, however, be fixed from the six-meson amplitude. Using the heat-kernel technique, all NLO divergences were derived in Ref. [30]. For the reader's convenience, the Γᵢ take a group-universal form (13) in terms of the parameters ξ and ζ ≡ 1 + ξ², which parametrize the groups as

(ξ, ζ) = (0, 1) for SU ,    (ξ, ζ) = (±1, 2) for SOp .    (14)

Another check on our calculation is that, with the expressions in Eq. (13), all our results are finite.

II-D. Mass and decay constant

The Z factor used for the wave-function renormalization is related to the meson self-energy Σ as

Z⁻¹ = 1 − ∂Σ(p²)/∂p² ,

with −iΣ being represented by a tadpole graph with two external legs plus counterterms stemming from the Lagrangian (3). Note that in our application the physical mass of all mesons is equal and denoted as M_π. At NLO, the LO vertex and propagator are extended in terms of the replacements M → M_π and F → F_π at the given order, equivalent to the standard one-loop expressions for M_π² and F_π, which we again present in group-universal form (17). Above and later on, we use L ≡ κ log(M²/μ²). Needless to say, in the final result one only retains the terms relevant at order O(p⁴). Thus, in the rest of the NLO expressions one simply takes M → M_π and F → F_π. Note that we recalculated the results of Eq. (17) and that they agree with Refs. [27,29,30].

III. THE AMPLITUDES

In terms of Feynman diagrams, loop integrals, etc., the present calculation proceeds along the same lines as the one performed in Ref. [19]. However, the result is considerably more cumbersome, largely because the flavor indices are carried by more structures beyond Kronecker δ's. There is also the matter of treating the SU, SO and Sp variants in parallel without tripling the amount of material to present. We will therefore devote much of this section to simplifying the amplitude expressions. These are discussed in a more formal and general way in the next subsection. In order to formalize the structure seen in Eqs. (19) and (20), we follow the notation of Ref. [20] and define a k-particle flavor structure as

F_R(b₁, . . . , b_k) ≜ ⟨t^{b₁} ··· t^{b_{r₁}}⟩ ⟨t^{b_{r₁+1}} ··· t^{b_{r₁+r₂}}⟩ ··· ,    (21)

where R = {r₁, . . .
, r_{|R|}} with Σᵢ rᵢ = k is a flavor split: the flavors are split across |R| traces, each containing rᵢ indices. Without loss of generality, we may impose r₁ ≤ r₂ ≤ ··· ≤ r_{|R|}. For a permutation σ that maps i → σᵢ, we write F_R^σ(b₁, . . . , b_k) ≜ F_R(b_{σ₁}, . . . , b_{σ_k}) and denote by Z_R the group of permutations that preserve F_R:

Z_R ≜ {σ ∈ S_k : F_R^σ = F_R} .    (22)

The group Z_R is, of course, related to the symmetries in Eq. (21). (Z_R is the cyclic group Z_k when R = {k}, hence the notation. In general, it combines cyclic symmetry of each trace with exchanging the contents of same-size traces. It is Abelian as long as all rᵢ are different.) In general, an amplitude can be decomposed as

M(b₁, . . . , b_k; p₁, . . . , p_k) = Σ_R Σ_σ F_R^σ(b₁, . . . , b_k) A_R^σ(p₁, . . . , p_k) ,    (23)

where σ is summed over all permutations that do not preserve F_R, i.e. S_k/Z_R. It follows from Bose symmetry that A_R^σ(p₁, . . . , p_k) = A_R^{id}(p_{σ₁}, . . . , p_{σ_k}), where id is the identity permutation. It is therefore sufficient to work with A_R ≡ A_R^{id}, the stripped amplitude, for all R; the full amplitude follows from Eq. (23). The stripped amplitude is easily obtained from the full amplitude by taking the coefficient of F_R. In SU, it is guaranteed to be unique, as was proven in Ref. [20]. This carries over to SO and Sp; the ambiguity created by ⟨X⟩ = ⟨X†⟩ is easily resolved by averaging each trace with its reversal, X → ½(X + X†). In a four-meson amplitude, the stripped amplitudes A_{4} and A_{2,2} are the functions called B(s, t, u) and C(s, t, u), respectively, in Eq. (19). For six mesons, one has A_{6}, A_{2,4}, A_{3,3} and A_{2,2,2}, which correspond to D, E, F and G in Eq. (20), respectively. In the SU(n = 2) case (equivalent to the O(4)/O(3) case treated in Ref. [19]), the Cayley-Hamilton theorem allows all the trace structures to be reduced to R = {2, 2, 2}. When n = 2, 3, 4, 5 for SU and n = 1, 2 for SOp, respectively, the F_R satisfy a number of linear relations (see Ref. [48] for explicit expressions), which in turn relate the A_R to each other. Otherwise, the F_R are linearly independent for different R. (They are in fact orthogonal in a certain sense, as shown in Ref. [20].)

As follows from its definition, A_R inherits Z_R symmetry (acting on {p₁, . . . , p_k}) from F_R. We must also consider another permutation of the external particles, which we dub trace-reversal (TR): the permutation which reverses the product of generators in each trace. Under SU, this is not a symmetry of F_R, but CP invariance nevertheless requires it to be a symmetry of A_R. Charge conjugation maps tᵃ → (tᵃ)ᵀ, and thus ⟨tᵃ tᵇ ··· tᶜ⟩ → ⟨tᵃᵀ tᵇᵀ ··· tᶜᵀ⟩ = ⟨tᶜ ··· tᵇ tᵃ⟩. This is why Eqs. (19) and (20) pair each trace with its reverse (except for the reversal-symmetric ⟨tᵃ tᵇ⟩). We will denote the general symmetry group of A_R, i.e. Z_R plus TR, by Z_R^{+tr}. Under SOp, Z_R^{+tr} is a symmetry also of F_R; in fact, ⟨tᵃ tᵇ ··· tᶜ⟩ = ⟨tᶜ ··· tᵇ tᵃ⟩ makes F_R symmetric under the reversal of any single trace (CP only requires symmetry under the simultaneous reversal of all traces). This enhanced symmetry is inherited by A_R, and is very important for the relation between the amplitudes of the different QCD-like theories (see Appendix E). The size of the amplitude expressions can be further reduced by writing them in terms of a quantity Ã_R such that

A_R = Σ_{σ ∈ Z_R^{+tr}} Ã_R^σ .    (24)

This clearly exists (consider e.g. Ã_R = A_R/|Z_R^{+tr}|) but is not unique. A method for obtaining a minimal-length Ã_R, the deorbited stripped amplitude, is described in Appendix C.
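The group Z_R is small enough for k = 6 that it can be enumerated by brute force. The following Python sketch (an illustration of the definition, not the FORM code used for the actual computation) canonicalizes F_R under cyclic rotations and exchanges of equal-size traces, and recovers the orders |Z_{2,2,2}| = 48, |Z_{3,3}| = 18, |Z_{2,4}| = 8 and |Z_{6}| = 6:

import itertools

def canonical(R, perm):
    """Canonical form of F_R with indices permuted: rotate each trace to its
    lexicographically least rotation, then sort the traces."""
    blocks, start = [], 0
    for r in R:
        blk = tuple(perm[start:start + r])
        start += r
        blocks.append(min(blk[i:] + blk[:i] for i in range(r)))
    return tuple(sorted(blocks))

def Z_R(R):
    """All sigma in S_k with F_R^sigma = F_R (trace-reversal not included)."""
    k = sum(R)
    ident = canonical(R, tuple(range(k)))
    return [s for s in itertools.permutations(range(k)) if canonical(R, s) == ident]

for R in [(2, 2, 2), (3, 3), (2, 4), (6,)]:
    print(R, len(Z_R(R)))

Two permutations give the same F_R exactly when they produce the same multiset of traces regarded as cyclic words, which is what the canonical form computes; traces of different sizes can never collide.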
III-C. Group-universal formulation

One can expect the amplitudes of the SU, SO and Sp theories to have many similarities, since the only differences relevant to the amplitude are the variations of the Fierz identity, Eqs. (7) and (8), and the substitution n → ζn. In fact, comparison of the amplitudes suggests that one might introduce four subamplitudes A^(i), i ∈ {1, ξ, ξ², ζ}, such that each theory's amplitude is a fixed linear combination of them, Eq. (25), where (ξ, ζ) = (0, 1) for SU and (±1, 2) for SOp, as defined in Eq. (14). This decomposition is clearly redundant: three amplitudes are expressed as a combination of four subamplitudes. However, we find it natural and choose it for its simplicity and clarity; very few terms appear in more than one subamplitude, and A^(ξ²) is a relatively short expression. The decomposition (25) can be combined with stripping and deorbiting, allowing the amplitude to be formulated using the very concise quantities Ã_R^(i). Furthermore, many of these are actually zero. The patterns for which (R, i) combinations are allowed, what LECs, loop integrals and powers of n may appear where, etc., are studied in Appendix D and explained in Appendix E.

III-D. The four-meson amplitude

The notation of the previous sections allows the four-meson amplitude to be written very compactly. We will use the ordinary Mandelstam variables (18). At LO, there is a single nonzero subamplitude, stemming from the single tree diagram in Fig. 1; the corresponding expression is given in Eq. (26). [Figure 1 caption: The single LO four-meson diagram, with the vertex stemming from L⁽²⁾. In formulae we refer to it as iM_LO⁽²⁾ or, after the NLO mass and decay-constant redefinitions (17) are applied, iM_LO.] At NLO, one has one-loop diagrams (two topologies, four one-loop diagrams in total) combined with counterterms, as shown in Fig. 2. Moreover, one needs to take into account NLO wave-function renormalization (Z^{1/2} − 1) applied for every external leg, and mass and decay-constant redefinitions [at the given order, based on Eq. (17)] applied to the LO graph M_LO; the sum of these contributions constitutes Eq. (27). Note that while the above combination is parametrization-independent and UV finite, the separate terms are not. Altogether, the nonzero stripped and deorbited group-universal NLO subamplitudes are given in Eq. (28) (recall κ and L from Sec. II; J̄ is defined in Appendix A); this is identical to the results given in Ref. [31].

III-E. Poles and factorization

The six-meson amplitude has a simple pole whenever an internal propagator goes on-shell, i.e. p²_ijk = M_π² with p_ijk = p_i + p_j + p_k for any indices i, j and k. As in Ref. [19], the amplitude can therefore be separated into a part containing the pole and a nonpole part,

M_6π = M_6π^(pole) + M_6π^(nonpole) ,    (29)

where the pole part can be factorized in terms of four-meson amplitudes:

M_6π^(pole) = Σ_{P₁₀} Σ_{b_o} M_4π(b_i, b_j, b_k, b_o; p_i, p_j, p_k, −p_ijk) (p²_ijk − M_π²)⁻¹ M_4π(b_o, b_ℓ, b_m, b_n; p_ijk, p_ℓ, p_m, p_n) .    (30)

Above, P₁₀ represents the 10 distinct ways of distributing the indices 1, . . . , 6 into two triples i, j, k and ℓ, m, n, and b_o is the flavor of the off-shell leg, i.e. the propagator. This factorization can also be done at the stripped-amplitude level, with Eq. (30) schematically summarized in terms of A_R^(pole), each summed over Z_R instead of P₁₀, causing some symmetry factors. In Eq. (30), the four-pion subamplitude is defined as usual, although s + t + u = 3M_π² + p²_ijk since one leg is off-shell. The residue at the pole is unique (since the on-shell four-meson amplitude is), but the extrapolation away from p²_ijk = M_π² is not. Correspondingly, the distribution of terms between the parts in Eq. (29) is not unique.
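The off-shell relation just quoted is a one-line consequence of momentum conservation and can be checked numerically. The following Python sketch (random momenta, all-incoming conventions as in Appendix A; the momenta themselves are illustrative, not from the paper) verifies s + t + u = 3M_π² + p²_ijk when one leg of the four-meson subamplitude is the internal propagator:

import numpy as np

Mpi = 0.139570                                   # GeV
rng = np.random.default_rng(0)

def onshell(p3):                                 # (E, px, py, pz) with p^2 = Mpi^2
    return np.array([np.hypot(np.linalg.norm(p3), Mpi), *p3])

p1, p2, p3 = (onshell(rng.normal(size=3)) for _ in range(3))
po = -(p1 + p2 + p3)                             # off-shell internal leg

def dot(a, b):                                   # Minkowski product, signature (+,-,-,-)
    return a[0] * b[0] - a[1:] @ b[1:]

s, t, u = dot(p1 + p2, p1 + p2), dot(p1 + p3, p1 + p3), dot(p2 + p3, p2 + p3)
print(s + t + u, 3 * Mpi**2 + dot(po, po))       # the two numbers agree

Here dot(po, po) = p²_ijk plays the role of the virtuality of the internal line.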
We choose to express A_4π in exactly the form (28), which in turn fixes A_R^(pole). However, the distribution of contributions from individual one-particle-reducible (1PR) diagrams remains parametrization-dependent, while one-particle-irreducible (1PI) diagrams only contribute to A_6π^(nonpole). By suitably deforming A_4π, it is in fact possible to make the tree-level nonpole part vanish. This is the principle underlying Britto-Cachazo-Feng-Witten recursion [49,50] and similar techniques, wherein many-particle amplitudes are recursively built up from smaller ones. This technique was used for the first published calculation of the NLO tree-level six-meson amplitude [33], but at least its standard configuration suffers from convergence problems at NNLO [20]. Significant work has been done on the topic of loop-level recursion techniques, but it is typically limited to loop integrands rather than complete amplitudes; see Refs. [38,39] and references therein. We make no use of such techniques here.

III-F. The six-meson amplitude

The nonpole LO six-meson amplitude contains only two nonzero subamplitudes, stemming from the 1 + 10 tree diagrams in Fig. 3; the explicit expressions are given in Eq. (32). The pole part is given by Eq. (30). Note that Eq. (32) is group-invariant, i.e. equal for SU and SOp up to n → ζn. This is true for all analogous LO k-meson amplitudes, and indeed all tree-level contributions, as is proven in Appendix E. The NLO amplitude, which is the main result of this work, stems from the diagrams in Fig. 4. Even when maximally simplified, it is rather lengthy, so we leave its explicit expressions to Appendix D.

Let us now briefly describe the renormalization procedure for the NLO six-meson amplitude, analogously to Eq. (27). Regarding the 1PI diagrams [Figs. 4(a), 4(d), 4(g) and 4(i)], which only contribute to the nonpole part, we can again write, schematically, a parametrization-independent and UV-finite combination of loops, counterterms and wave-function renormalization, Eq. (33). The discussion of the 1PR part is a bit more involved. The double-pole part M_2-pole, stemming from the contributions represented by the diagrams depicted in Figs. 4(c) and 4(f), cancels with the piece due to the NLO propagator mass renormalization in the LO pole contribution M_prop. Consequently, the remaining 1PR contribution, together with the LO contribution itself, is the equivalent of two up-to-NLO ππ scatterings [analogous to Eqs. (26) and (27)] connected with the propagator, i.e. precisely the structure of Eq. (30). Choosing the particular form of A_6π^(pole) as discussed earlier, the remainder with respect to A_6π^(pole) is assigned to the nonpole part. What we call the nonpole part of the six-meson amplitude is thus the combination of such a remainder and the contributions of the 1PI diagrams from Eq. (33).

III-G. Zero-momentum limit

In what follows, we choose a symmetric 3 → 3 scattering configuration given by the four-momenta of Eq. (36), with E_p = √(p² + M_π²). These only depend on a single parameter p, the modulus of the three-momenta of all the mesons. In this kinematic setting, the zero-momentum limit of the stripped nonpole amplitudes up to and including NLO takes a simple group-universal form, Eq. (37). Due to the Adler zero, lim_{p_i→0} A(p₁, . . . , p_i, . . .) = 0, which holds for any i in the massless case [51,52], the zero-momentum limit is proportional to M_π² also in the general case. It seems that Eq. (37) is valid also for general momentum configurations rather than just Eq. (36); this is explained in the next section. However, it is specifically the zero-momentum limit of A_R(p₁, . . . , p₆) where particles 1, 2, 3 are in the initial state and 4, 5, 6 in the final state. Different assignments of initial- and final-state particles will yield different zero-momentum limits. After accounting for Z_R^{+tr} symmetry, time-reversal symmetry, and the freedom to exchange particles within the initial and final states (which changes the stripped amplitude but not its zero-momentum limit), there are 10 distinct limits, produced by the assignments listed in Eq. (38). Note that the first of these reproduces Eq. (37); in the interest of space, we do not reproduce the other cases. Also note that this is for 3 → 3 scattering, and that different limits will be obtained for 2 → 4 scattering.
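Since the explicit form of Eq. (36) is characterized here only by the single parameter p, we illustrate it with one concrete symmetric choice: three incoming momenta at relative angles of 120° in a plane, with the outgoing triple rotated within the same plane. The particular directions below are our own illustrative assumption, not necessarily those of Eq. (36):

import numpy as np

Mpi = 0.139570                                       # GeV

def config(p, phi=np.pi / 3):
    """Symmetric 3 -> 3 configuration: all |three-momenta| equal to p,
    total three-momentum zero; all-incoming sign conventions."""
    E = np.hypot(p, Mpi)
    def triple(offset):
        return [np.array([E, p * np.cos(a + offset), p * np.sin(a + offset), 0.0])
                for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
    return triple(0.0) + [-k for k in triple(phi)]   # p4..p6 carry outgoing momenta

ps = config(0.1)
print(np.round(sum(ps), 12))                              # total four-momentum ~ 0
print([round(k[0]**2 - k[1:] @ k[1:], 12) for k in ps])   # each equals Mpi^2

The zero-momentum limit of Sec. III-G corresponds to p → 0, where all four-momenta degenerate to (±M_π, 0, 0, 0).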
IV. NUMERICAL RESULTS

We only present a few numerical results here, since the full analysis of the finite volume and the subtraction of the two-body rescatterings is very nontrivial; see Refs. [53,54] and references therein. The numerical inputs we use are

M_π = 0.139570 GeV ,   μ = 0.77 GeV ,   F_π = 0.0927 GeV ,   n = 3 .    (39)

For the LECs, we use the p⁴ fit from Table 1 of Ref. [55], quoted in Eq. (40); since that fit is for n = 3, we use L₀ʳ = 0. Throughout this section, we use the kinematic setting of Eq. (36). The resulting plots are shown in Fig. 5. Interestingly, one of the subamplitudes vanishes identically in this kinematic setting and hence does not contribute to the top right panel of Fig. 5. We use the shorthand notation of Eq. (41) for the value at p = 0.1 GeV, made dimensionless by multiplying with a suitable power of F_π. Using this notation, values for general n are given in Table I. As Fig. 5 shows, the relative sizes of these values are representative across a broader energy range. The pole part is clearly the dominant contribution, but in a sense the nonpole part is the interesting one, since it is not directly related to the previously known four-meson amplitude. The NLO part is smaller than the LO part, but not by much; perturbative convergence is understandably poor, with breakdown expected at a scale of 4πF_π/√n [28], i.e. ≈ 5M_π at n = 3. [Table I caption: values of the quantities defined in Eq. (41) at p = 0.1 GeV, using the momentum configuration (36); only the real part is quoted, multiplied by a suitable power of F_π to make the result dimensionless; subamplitudes that are identically zero are omitted.] Since the zero-momentum limit (37) is group-universal, the stripped amplitudes of the three theories will become equal in this limit.

Besides the kinematic configuration (36), we have numerically evaluated the amplitude for a random sample of 3 → 3 scattering events generated with the RAMBO algorithm [56]. These samples confirm that the subamplitude vanishing in the symmetric setting is not generally zero. We obtained zero-momentum limits by uniformly scaling the random three-momenta by a common factor taken to zero while keeping the particles on-shell. This consistently resulted in the same numerical values as Eq. (37), allowing us to conclude that Eq. (37) is the general uniform zero-momentum limit of A_R(p₁, . . . , p₆) in 3 → 3 scattering, rather than a special case for the configuration (36). The same is true for the other limits described in Eq. (38).

V. CONCLUSIONS

Our main result is the six-meson amplitude, which can be written in terms of four independent flavor-stripped amplitudes [for the detailed structure, see Eq. (20)], as compared to a single amplitude in the O(N+1)/O(N) case studied in Ref. [19]. We split the whole amplitude into pole and nonpole parts; see Eq. (29). The pole part is given in Eq. (30), where we chose to employ the off-shell four-meson amplitude in the form of Eqs. (26) and (28), generalizing (beyond n = 3) the amplitude given in Refs. [57,58] and exactly matching that in Ref. [31]. The expression for the nonpole part is rather lengthy.
We thus further divide the four flavor-stripped amplitudes into group-universal subamplitudes in order to account for all three QCD-like theories in a concise way. By employing symmetries through the deorbiting procedure described in Appendix C, we obtain the resulting 10 non-vanishing subamplitudes presented in Appendix D. The nontrivial choice of a redundant but highly symmetric basis of tensor triangle loop integrals (for details, see Appendix A) and of kinematic invariants (Appendix C) allows for a fairly compact expression. While the result is still too lengthy and complicated to be grasped fully, the division into subamplitudes, along with further analysis in Appendix E, allows many of its features to be understood. In the kinematic setting of Eq. (36), we present the analytical results for the zero-momentum limit in Eq. (37). Some numerical results for this particular momentum configuration are presented in Sec. IV.

In the process of our calculations, we devised a systematic procedure (deorbiting) for simplifying amplitudes beyond what is possible with stripping alone. Previous work, e.g. Refs. [19,31,35], manually structured their results in similar ways, but this quickly becomes difficult with larger numbers of kinematic variables and more complicated symmetries. These issues are, at least partly, resolved by our simplification scheme, which should be applicable also beyond the present scope.

We see limited interest in computing the NNLO counterpart of this result. Several LECs (terms 49-63 in L⁽⁶⁾ [46]) that do not appear in lower-multiplicity amplitudes enter here and are so far undetermined. All relevant two-loop integrals are known (see e.g. Ref. [31]) except for the five-propagator sunset topology, which we expect to be very difficult. There is also the matter of expressing the two-loop integrals in a symmetry-compliant way as in Appendix A. We believe that our techniques would make the NLO eight-meson amplitude accessible, but such a calculation is currently not motivated by lattice developments. Besides the larger number of diagrams and longer expressions, the main technical hurdles would be extending Appendix A to a similar treatment of box integrals, and extending Appendix C to Z_{8}^{+tr}, Z_{2,6}^{+tr}, Z_{2,3,3}^{+tr}, etc. Work is in progress to combine our results with the methods for extracting three-body scattering from finite volume in lattice QCD. We expect that our results may also be of interest for the amplitude community.

ACKNOWLEDGMENTS

The authors thank R. Frederix and A. Lifson for suggestions about random momentum sampling. This work is supported in part by the Swedish Research Council grants contracts no. 2016-05996 and no. 2019-03779.

Appendix A: Conventions for the loop integrals

Throughout the paper, we treat the momenta (p₁, . . . , p₆) as incoming, and we introduce the following independent combinations of momenta:

q₁ = p₁ + p₂ ,   q₂ = p₃ + p₄ ,   q₃ = p₅ + p₆ ,   r₁ = p₁ − p₂ ,   r₂ = p₃ − p₄ ,   r₃ = p₅ − p₆ .    (A1)

Note that we use the same notation and conventions as in Ref. [19]. In particular, the integrals are defined in Appendix A therein; here, we restate them along with some clarifications. The functions we use to represent our results are very closely related to the standard Passarino-Veltman one-loop integrals A₀, B₀ and C₀. To fix our notation, let us present explicitly the simpler integrals with one and two propagators. In what follows, we use a compact notation for the Feynman denominators with loop momentum l and, as in Sec. II, work near d = 4 dimensions with all meson masses equal to M. The one- and two-propagator integrals read

A(M²) = (1/i) ∫ dᵈl/(2π)ᵈ 1/(l² − M²) ,   B(q²) = (1/i) ∫ dᵈl/(2π)ᵈ 1/[(l² − M²)((l − q)² − M²)] .

We employ the standard definition for J̄(q²):

J̄(q²) = κ [2 + β log((β − 1)/(β + 1))] ,   with β ≡ β(q²) = √(1 − 4M²/q²) .

The terms L and J̄(q²) we use to express our results thus absorb the factors of 1/(16π²).
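The function J̄ is simple enough to evaluate directly. The following Python sketch implements the definition above with an s + iε prescription (the tiny imaginary part is an implementation choice, not part of the paper's conventions) and reproduces two standard checks, J̄(0) = 0 and J̄(4M²) = 2κ:

import numpy as np

kappa = 1 / (16 * np.pi**2)

def Jbar(s, M2, ieps=1e-12):
    """Jbar(s) = kappa*(2 + beta*log((beta-1)/(beta+1))), beta = sqrt(1-4*M2/s)."""
    s = s + 1j * ieps                       # continue just above the real axis
    beta = np.sqrt(1 - 4 * M2 / s)
    return kappa * (2 + beta * np.log((beta - 1) / (beta + 1)))

M2 = 0.139570**2
print(Jbar(-1e-6 * M2, M2))                 # ~ 0: Jbar is subtracted at s = 0
print(Jbar(4 * M2, M2), 2 * kappa)          # threshold value 2*kappa
print(Jbar(10 * M2, M2).imag, np.sqrt(1 - 4 / 10) / (16 * np.pi))

Above threshold, the imaginary part matches the two-body phase-space factor β/(16π), a useful unitarity cross-check when assembling the full amplitude.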
Let us emphasize that it is the tensor triangle one-loop integrals of higher ranks which generate lengthy expressions upon reduction to the scalar ones. It therefore turned out to be more convenient to use a specific basis for the tensor integrals with particular symmetry properties. Regarding the rank-3 integrals, we use the combination C₃(p₁, p₂, . . . , p₆), defined in Eq. (A8) as one third of a symmetrized sum of rank-3 triangle integrals, which has more symmetries than its first term alone and is UV finite. It is antisymmetric under the interchange of the momenta inside each pair [pairs being here (p₁, p₂), (p₃, p₄) and (p₅, p₆)] and antisymmetric under the interchange of two pairs. The rank-2 integral C₂₁ is defined in Eq. (A9); it is antisymmetric under the interchange p₁ ↔ p₂ and symmetric under (p₁, p₂) ↔ (p₃, p₄) and p₅ ↔ p₆. The rank-1 integral C₁₁, with one product l · rᵢ in the numerator, is defined in Eq. (A10); it is antisymmetric under the interchange p₅ ↔ p₆ and under (p₁, p₂) ↔ (p₃, p₄), and is symmetric under p₁ ↔ p₂ and p₃ ↔ p₄. Owing to the symmetries, other integrals of ranks 2 and 1 can be expressed in terms of C₂₁ and C₁₁ and integrals with lower ranks, respectively, so we only need those to write out our final result. Finally, we define the scalar combination C(p₁, p₂, . . . , p₆) (A11), which is symmetric under p₁ ↔ p₂ and under all pair interchanges. It is related to C₀ as

C(p₁, p₂, . . . , p₆) = κ C₀(q₁², q₂², q₃², M², M², M²) ,    (A12)

in which case the mentioned symmetries are seen trivially due to the equal masses. We express the amplitude in terms of C₃, C₂₁, C₁₁ and C. As already mentioned, the former three can be expressed in terms of C, but the expressions are cumbersome and lead to a very long expression for the amplitude. We have therefore kept all four, among which only C₂₁ contains a UV-divergent part, proportional to κ r₁ · r₂ / 4, plus a UV-finite remainder C̄₂₁(p₁, p₂, . . . , p₆).

Appendix B: General parametrization

In this appendix, we show how to parametrize a special unitary matrix u in full generality. Special cases of what follows give parametrizations such as the four used in Ref. [62]. A special unitary matrix Û can always be written as an exponential of a Hermitian traceless matrix φ:

Û = exp(iφ) .    (B1)

The obvious way to write a general parametrization is

Û = Σ_m b_m (iφ)^m ,    (B2)

with unitarity conditions relating the b_m. As proven in Ref. [63], the only generally valid solution where the b_m are c-numbers is Û = exp(iφ), although, truncated at sufficiently low order, other such parametrizations are valid and useful; see e.g. Refs. [35,64]. Generally, however, the b_m are functions of traces of powers of φ, as seen in Ref. [62]; this complicates the unitarity conditions. Here, we take the alternative approach of redefining φ to φ′(φ) with φ′† = φ′ and ⟨φ′⟩ = 0, and keeping Û = exp(iφ′). Under the unbroken (vector) part of the chiral transformation, φ → g_V φ g_V†, and we want φ′ to transform in the same way, φ′ → g_V φ′ g_V†. The redefined φ′ is thus a series in φ and traces of powers of φ:

φ′ = Σ a_{i₀ i₁ ··· i_j} φ^{i₀} ⟨φ^{i₁}⟩ ··· ⟨φ^{i_j}⟩ .    (B3)

Further restrictions follow from using intrinsic parity, i.e. employing φ′ → −φ′ if φ → −φ, thus allowing only for odd values of the total power m of φ in each term, and applying φ′† = φ′, which requires the a_{i₀ ··· i_j} to be real. The condition ⟨φ′⟩ = 0 determines all the a_{0 i₁ ··· i_j} (those with i₀ = 0) to be a_{0 i₁ ··· i_j} = −a_{i₁ ··· i_j}/(ζn).
Hence, all terms relevant for the six-meson amplitude discussed in this work, introducing 18 extra unconstrained parameters, 11 are in groupuniversal form The presence of traces in all terms except the first confirms and generalizes the conclusions of Ref. [63]. Taking this general form to define u via Eq. (6) with φ ≡ φ a t a / √ 2F and plugging it into the Lagrangian (1) 11 In general, the number of free parameters at order φ m is equal to the number of ways to partition m − 1 into positive integers, i.e. Online Encyclopedia of Integer Sequences (OEIS) sequence A058696 starting with 2, 5, 11, 22, 43, 77, 135. Let us briefly sketch the proof of this. Let i 0 ≥ 1 since a 0i 1 ...i j = −a i 1 ...i j /ζn. Then, rewrite each term in Eq. (B3) as In total, there are m factors of φ, of which all but the first can be arbitrarily partitioned like m − 1 → i 1 , . . . , i j , 1, . . . , 1. Since φ = 0 and i 1 ≥ i 2 ≥ · · · i j ≥ 2, we can unambiguously associate the 1's with φ's outside traces and the other elements of the partition with i 1 , . . . i j . This demonstrates the one-toone correspondence between independent parameters a i 0 i 1 ,...i j and partitions of m − 1. adds an extra cross-check of one's calculations, since the physical amplitude cannot depend on a i0...ij . Appendix C: Deorbiting and closed bases of kinematic invariants In this appendix, we briefly describe the method used for the final simplification step of reducing A with the property (24). This is a development of an ad hoc technique used in Ref. [20]. Recall thatà (i) R is not unique and that our aim is to make its expression as short as possible. Consider some class of objects x and a group G (in our case, x are products of one-loop integral functions and kinematic invariants, and G = Z +tr R ). In standard nomenclature, the set of objects obtained by acting with G on x is called the orbit of x, denoted G · x; formally, (C1) Consider then an expression X composed of a sum of objects x. Reducing it toX such that X = g∈G g ·X (where g ·X indicates acting with g on each term inX) is done using the following algorithm: 1. Start withX = 0. 2. Select the first term x in X, under some arbitrary but consistent ordering of the terms. 12 3. Compute the orbit G · x and the symmetry factor S = |G|/|G · x| (this is always an integer). 4. Add x toX, and subtract 1 S g∈G g · x from X. (Now, no element of G · x appears in X.) 5. Repeat from step 2 until X = 0. The symmetry factor compensates for how each element of G · x appears S times in the sum g∈G g · x. Optimally, each object x that appears in any orbit should be a single term, not a sum of other objects. We will call this property being closed under G. Without this property, the algorithm may yield poor results or not terminate at all. However, if the class of objects is closed under G, it is easy to see that no orbits overlap and that the algorithm results in anX that is the shortest possible subexpression of X, granted that there are no additional symmetries that are not taken into account. In the context of our amplitudes, we therefore need to carefully choose our basis of kinematic invariants. We define them in terms of generalized Mandelstam variablesŝ 12 In practice, we use the internal ordering of FORM, with some modifications. 13 This basis is valid for 5 or more space-time dimensions; with 4, the correct number of kinematic degrees of freedom is 8, not 9. 
However, the 9th variable is related to the other 8 through the nonlinear Gram determinant relation, so for the sake of simplicity we ignore this and use 9-element bases. For a k-particle process in d dimensions, a similar basis of generalized Mandelstam variables will have k(k−3)/2 elements, as is easily found by counting products p i · p j and accounting for p 2 i = M 2 π and i p µ i = 0. This is redundant when k < d−1; then, with only d − 1 independent components in each p i , the number of kinematic degrees of freedom after accounting for No basis is needed for R = {2, 2, 2} here due to the simplicity of A {2,2,2} . References [19,20] provide two different R = {2, 2, 2} bases. There is no need to apply similar considerations to the loop integralsJ and C X , since the inherent symmetries of these functions are much simpler than those imposed onŝ by the kinematics. C (C 3 ) is (anti)symmetric under Z {2,2,2} acting on its arguments, while C 11 and C 21 are symmetric or antisymmetric under various subgroups thereof. The symmetries of C X and those of the stripped amplitudes interplay nontrivially, giving rise to several orbits. Denoting by i · · · j(p) the orbit that has p distinct elements including C X (p i , . . . , p j ), the orbits of C and C 3 under various Z R are Only those marked in bold actually appear in the amplitude. This can be understood from the limited arrangements of legs around the diagram Fig. 4(i) that produce F R (b 1 , . . . , b 6 ) as a flavor structure, as is clarified by the technology of Appendix E. 14 This is a new basis; the one in Ref. [20] is not closed under trace-reversal. It was obtained using similar methods to the {3, 3} and {2, 2, 2} bases derived in that paper (note that Z +tr {2,4} , unlike Z {2,4} , is non-Abelian). 15 This is quite different from the one used in Ref. [20] and is much simpler -it is just one of the nonets formed under Z {3,3} . The reason for it being overlooked can be traced back to Ref. [37], where an effort was made to include the elementŝ 123 in the basis. • In this appendix, we derive the features described in the previous section using the technique we here dub diagrammatic flavor-ordering, wherein modified Feynman diagrams allow direct calculation of stripped amplitudes without going through the full amplitude. Simpler cases of the technique have been used for a long time [21,22,35], but the extension beyond LO and R = {k} is more recent [20,37]. A somewhat similar approach can be found in Ref. [33]. In the preparation of this paper, we refined the technique and performed the first loop calculations using it, but it turned out that the proliferation of diagrams caused by the inclusion of loops and nonzero masses -nearly 200 distinct topologies compared to 9 without flavor-ordering -outweighed any efficiency advantages the technique had over standard Feynman diagrams, rendering it impractical for our purposes. Nevertheless, the manifest relation between kinematics and flavor structure in flavor-ordered diagrams can be used to illuminate some features that are obscured with the standard approach. E-1. Diagrammatic flavor-ordering Here, we give a brief summary of this technique; see Ref. [20] for a detailed version, and Ref. [37] for one including loops. By 'flavor-ordered', we mean a quantity whose flavor structure is F R for some R, i.e. whose flavor indices are in natural order (up to Z R ). Such a quantity is invariant under Z R acting simultaneously on its flavor indices and momenta. 
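Before turning to the diagrammatic rules, we note that the five-step reduction of Appendix C above is straightforward to prototype. The following Python sketch (a toy illustration with hashable terms and a group given as callables; the production implementation is in FORM, not Python) returns X̃ with X = Σ_{g∈G} g·X̃:

from fractions import Fraction

def deorbit(terms, group):
    """terms: {term: coefficient} for a G-invariant expression X.
    Each g in group must map a single term to a single term (closedness)."""
    X = {t: Fraction(c) for t, c in terms.items() if c != 0}
    Xt = {}
    while X:
        x = min(X)                             # step 2: arbitrary but consistent ordering
        c = X[x]
        orbit = {g(x) for g in group}          # step 3
        S = Fraction(len(group), len(orbit))   # symmetry factor (always an integer)
        Xt[x] = c / S                          # step 4: keep one representative ...
        for y in orbit:                        # ... and remove its whole orbit from X
            X[y] = X.get(y, Fraction(0)) - c
            if X[y] == 0:
                del X[y]
    return Xt

# toy check: Z_3 acting by cyclic relabeling a -> b -> c -> a
rot = {'a': 'b', 'b': 'c', 'c': 'a'}
group = [lambda t: t, lambda t: rot[t], lambda t: rot[rot[t]]]
print(deorbit({'a': 1, 'b': 1, 'c': 1}, group))    # {'a': Fraction(1, 1)}

Summing the single representative over the three group elements reproduces a + b + c, as required by the property (24).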
Appendix E

In this appendix, we derive the features described in the previous section using the technique we here dub diagrammatic flavor-ordering, wherein modified Feynman diagrams allow direct calculation of stripped amplitudes without going through the full amplitude. Simpler cases of the technique have been used for a long time [21,22,35], but the extension beyond LO and R = {k} is more recent [20,37]. A somewhat similar approach can be found in Ref. [33]. In the preparation of this paper, we refined the technique and performed the first loop calculations using it, but it turned out that the proliferation of diagrams caused by the inclusion of loops and nonzero masses (nearly 200 distinct topologies, compared to 9 without flavor-ordering) outweighed any efficiency advantages the technique had over standard Feynman diagrams, rendering it impractical for our purposes. Nevertheless, the manifest relation between kinematics and flavor structure in flavor-ordered diagrams can be used to illuminate some features that are obscured with the standard approach.

E-1. Diagrammatic flavor-ordering

Here, we give a brief summary of this technique; see Ref. [20] for a detailed version, and Ref. [37] for one including loops. By 'flavor-ordered', we mean a quantity whose flavor structure is F_R for some R, i.e., whose flavor indices are in natural order (up to Z_R). Such a quantity is invariant under Z_R acting simultaneously on its flavor indices and momenta. The stripped amplitude is obtained by keeping only the flavor-ordered parts of the amplitude and then dropping the flavor structure. Diagrammatic flavor-ordering is based on the observation that the Fierz identity, Eqs. (7) and (8), generally preserves flavor-ordering: if two sub-diagrams contain ⟨t^a A⟩ and ⟨t^a B⟩, joining them will result in ⟨AB⟩, ⟨A⟩⟨B⟩ or ⟨AB†⟩, all of which keep the (possibly reversed) order of flavor indices in A and B. Therefore, a diagram is flavor-ordered only if its sub-diagrams, all the way down to the vertices, are flavor-ordered. We now desire a set of modified diagram-drawing rules that make diagrams inherently flavor-ordered, with the flavor structures manifest from the graphical shape of the diagram. We think of each external leg as labeled by an index i, corresponding to momentum p_i and flavor index b_i. We will call two legs flavor-connected if their flavor indices reside in the same trace in the flavor structure. For single-vertex diagrams, we indicate the flavor structure by adding gaps in the vertex between the groups of legs that are not flavor-connected, as is done in Figs. 6(a) and 6(c). We treat the vertices in multi-vertex diagrams like Fig. 6(b) similarly. Since the two terms on the right-hand side of Eq. (7a) treat flavor structures differently, we represent them by different propagators, as if there were two species of particles: ordinary (solid line) and singlets (dashed line), the latter of which carry a factor of −1/(ζn).[19] When combined with Eq. (7b), these rules allow the flavor structure of any SU diagram to be read off, as illustrated in Fig. 7. Two legs are flavor-connected if and only if the following conditions hold:

• They are joined by an uninterrupted path through the diagram (vertex gaps and singlet propagators interrupt it).
• They can be joined by a line that does not intersect the diagram at any point (it can pass through vertex gaps and singlet propagators).

This is illustrated in Fig. 7(b). To read the indices, follow the outline of each flavor-connected set of legs, keeping the diagram to the right of the path (thus reading the indices of tree diagrams in clockwise order), as illustrated in Fig. 7(c).[20] The starting point is arbitrary due to Z_R symmetry. When a loop is 'empty', like those in Fig. 8, a factor of ⟨1⟩ = ζn is added. For the purposes of momentum flow, flavor-ordered diagrams are treated just like ordinary diagrams, and the two kinds of propagators are kinematically identical. However, flavor-ordered diagrams are typically sensitive to the order in which legs are arranged around a vertex (see e.g. Fig. 9). This, along with the combination of singlet and ordinary propagators, leads to the proliferation mentioned earlier. All diagrams must be summed over Z_R (with appropriate symmetry factors) and added up to obtain A_R. The above rules hold for SU, and to a large extent also for SO/Sp. In fact, the cases where they are equivalent (up to the substitution n → ζn) exactly correspond to A^(1) of Eq. (25).

[19] The name "singlet" stems from how adding a singlet field φ⁰, whose associated generator t⁰ = 1/√(ζn) commutes with all t^a, results in the removal of the 1/n terms from the Fierz identity, since e.g. ⟨t⁰A⟩⟨t⁰B⟩ = (1/ζn)⟨A⟩⟨B⟩. Thus, the 1/n terms can be interpreted as the subtraction of diagrams with internal singlet lines, allowing other contractions to be done using only the n-independent terms. The singlet decouples from the other fields in ⟨u_μ u^μ⟩, so LO singlet vertices stem from ⟨χ_+⟩ and therefore depend on the mass but not on the momenta [this is easiest to see in the exponential parametrization (6)]. This simplifies LO and NLO singlet diagrams and causes them to vanish in the massless limit.

[20] These rules become more complicated at two-loop level and above, where non-planar diagrams may appear. However, all diagrams can be drawn without self-intersections on a surface of sufficiently high topological genus (planar diagrams on a sphere, non-planar two-loop diagrams on a torus, etc.). One must then imagine the diagram drawn on such a surface (but not one of higher genus than necessary) when determining flavor-connectedness or assigning indices.

E-2. Differences between the groups

We will now discuss all contexts in which differences between SU and SO/Sp may arise. The following fully accounts for the patterns seen in the six-meson amplitude:

a. Tree diagrams. View an SO/Sp diagram as being built by adding vertices one by one.
With A belonging to the partially completed diagram and B to the vertex, ⟨t^a A⟩⟨t^a B⟩ → ½[⟨AB⟩ + ⟨AB†⟩] gives one flavor-ordered term and one that is discarded. Adding a structurally identical diagram, but with some indices permuted so that B is reversed, gives ⟨t^a A⟩⟨t^a B†⟩ → ½[⟨AB†⟩ + ⟨AB⟩]: again, one term is kept and one discarded. However, ⟨t^a B⟩ = ⟨t^a B†⟩ under SO/Sp, so the kinematic structure of the vertex must be invariant under that index permutation. Thus, the two flavor-ordered terms are identical and add up to the same ⟨AB⟩ given by SU. This proves, to all orders in the chiral counting, that Eqs. (7a) and (8a) fail to introduce any differences between SU and SO/Sp. In other words, SU and SO/Sp are equivalent at tree level (up to n → ζn), so all tree diagrams go into A^(1). The only caveat is if any differences are introduced at the Lagrangian level, but this happens first at NNLO (see below).

Figure 8: Flavor-ordered R = {3,3} diagrams involving singlets. The multiplicities refer to permutations in Z_{3,3}. All diagrams contain a single factor 1/n from singlets; in (c), the 1/n from the second singlet propagator is canceled by ⟨AB†⟩ = ⟨1⟩ = ζn (such a factor appears whenever a loop is not flavor-connected to any external leg). In their contributions to A^(ζ), all n-dependence is canceled by ⟨A⟩⟨B⟩ = ⟨A⟩⟨1⟩ for (a,b) and ⟨A⟩⟨B⟩ = ⟨1⟩⟨1⟩ for (c).

b. Loops. Viewing the loop as being formed by joining two legs of a tree diagram, we see that Eqs. (7b) and (8b) must be applied if those legs are part of the same trace. The term ⟨t^a A t^a B⟩ → ⟨A⟩⟨B⟩ is the same in SU and SO/Sp, up to a factor 1/ζ, giving rise to A^(ζ). The term ⟨t^a A t^a B⟩ → ⟨AB†⟩ is unique to SO/Sp and comes with a sign ±, giving rise to A^(ξ). (Since this term gives a single trace, it almost always results in R = {6}.) When B = 1 in the A^(ζ) case, we get a factor of ⟨1⟩ = ζn; this corresponds graphically to an 'empty' loop, as mentioned above. This is the only source of positive powers of n, and it explains why they only appear in A^(ζ). Those diagrams still contribute to A^(ξ), without a factor of n.

c. Singlets. The 1/(ζn) terms of the Fierz identity are the same for SU and SO/Sp, so singlet propagators behave the same in both cases. When a singlet is part of a loop, one can let the singlet propagator 'close' the loop, thereby avoiding all differences stemming from Eqs. (7b) and (8b).

Figure 9 [(a) 9×, (b) 9×, (c) 9×]: A few of the flavor-ordered diagrams that contribute to A^(ξ²)_{3,3}. The multiplicities are for SU; SO/Sp permits more permutations, which is exactly why they give A^(ξ²).
Therefore, such diagrams go into A^(1). This does not apply when a singlet is outside a loop, but for our amplitude this only happens with R = {3,3} diagrams like those in Fig. 8 and variations thereof. The 'empty' loop cancels the n-dependence in their contributions to A^(1) and A^(ζ). Therefore, negative powers of n, which only arise from singlets, only show up in A^(1) and (due to these diagrams) A^(ξ)_{3,3}.

d. Trace-reversal. The greatest differences come from the fact that ⟨t^a t^b ··· t^c⟩ = ⟨t^c t^b ··· t^a⟩ under SO/Sp but not SU. Therefore, A_R is invariant under reversal of individual traces under SO/Sp, but only under simultaneous reversal of all traces under SU. Among the cases considered here, these types of reversal are only inequivalent when R = {3,3}. For instance, in the diagrams in Fig. 9, the 'inside' of the loop can be read both clockwise and counterclockwise under SO/Sp, but only one way under SU, and the momentum dependence will be correspondingly different. This kind of disorganized difference is what gives rise to A^(ξ²)_{3,3}. In all R = {3,3} diagrams involving singlets, at least one trace can be reversed as a symmetry of the diagram (i.e., reversing a single-trace vertex), so they do not contribute to A^(ξ²)_{3,3}.

E-3. More particles, higher orders

The patterns discussed here are straightforward to generalize. At N^ℓLO (i.e., ℓ loops), Eq. (25) generalizes accordingly, since each loop can give another factor of 1/ζ. Most of the features discussed above remain, although some are softened: negative powers of n can appear in A^(ζ^j) and A^(ξ²), and positive powers of n in most subamplitudes except A^(1). A^(ξ²)_R exists for R = {2,3,3}, {4,4}, etc. In an N^ℓLO amplitude, A_R with |R| > ℓ + 1 requires singlets breaking loops, which severely restricts the structure of that subamplitude; specifically, if R = {2,...,2}, it will be similarly simple to our A^(1)_{2,2,2}. As mentioned above, the equivalence of SU and SO/Sp at tree level can only be broken by Lagrangian effects. The first such effect is in the NNLO Lagrangian L^(6) [46], where the 59th term ⟨u_μ u_ν u_ρ u^μ u^ν u^ρ⟩ and the 61st term ⟨u_μ u_ν u_ρ u^ρ u^ν u^μ⟩ are distinct under SU but equal under SO/Sp. (The N³LO Lagrangian L^(8) [47] contains several such cases.) This only results in additional relations between the LECs; the functional form of A is retained, and Eq. (E1) remains valid, albeit a bit more redundant.
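As a final concrete illustration of the trace-joining step underlying diagrammatic flavor-ordering (Appendix E-1), the following sketch applies the standard SU(n) completeness (Fierz) relation ⟨X t^a⟩⟨t^a Y⟩ = ½⟨XY⟩ − (1/2n)⟨X⟩⟨Y⟩ to traces encoded as index tuples. The normalization is the conventional SU(n) one and may differ from the paper's Eqs. (7)-(8) by ζ-dependent factors; the encoding is our own:

```python
from fractions import Fraction

def join(X, Y, n):
    """Join <X t^a> and <t^a Y> via the SU(n) completeness relation:
    <X t^a><t^a Y> = 1/2 <XY> - 1/(2n) <X><Y>.
    Traces are encoded as tuples of external flavor labels; the result
    is a list of (coefficient, tuple-of-traces) pairs."""
    return [
        (Fraction(1, 2), (X + Y,)),      # single-trace piece <XY>
        (Fraction(-1, 2 * n), (X, Y)),   # "singlet" piece <X><Y>
    ]

# Joining two flavor-ordered vertices keeps the indices in natural order:
for coeff, traces in join((1, 2), (3, 4), n=3):
    print(coeff, traces)
# 1/2  ((1, 2, 3, 4),)
# -1/6 ((1, 2), (3, 4))
```

Both output terms keep the indices of X and Y in their original order, which is the flavor-ordering preservation invoked in the text.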
2022-06-30T01:15:58.473Z
2022-06-28T00:00:00.000
{ "year": 2022, "sha1": "5b387d4592376cfa9ecc23fe70d0fbb1cd728a7b", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.106.054021", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "5b387d4592376cfa9ecc23fe70d0fbb1cd728a7b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119301935
pes2o/s2orc
v3-fos-license
Unified entropic measures of quantum correlations induced by local measurements

We introduce quantum correlation measures based on the minimal change in unified entropies induced by local rank-one projective measurements, divided by a factor that depends on the generalized purity of the system in the case of non-additive entropies. In this way, we overcome the issue of the artificial increase in the value of quantum correlation measures based on non-additive entropies when an uncorrelated ancilla is appended to the system, without changing the computability of our entropic correlation measures with respect to previous ones. Moreover, we recover as limiting cases the quantum correlation measures based on von Neumann and Rényi entropies (i.e., additive entropies), for which the adjustment factor becomes trivial. In addition, we distinguish between total and semiquantum correlations and obtain some relations between them. Finally, we obtain analytical expressions of the entropic correlation measures for typical quantum bipartite systems.

I. INTRODUCTION

Quantum correlations lie at the heart of the difference between the classical and quantum worlds. There are at least two paradigms to address this issue beyond the usual entangled-separable distinction [1]. For instance, steering correlations have recently been formulated in an operational way in [2], although their origins can be found in the seminal works by Einstein, Podolsky and Rosen [3] and Schrödinger [4]. These correlations are intermediate between entanglement and nonlocality [5] (i.e., a violation of Bell inequalities). On the other hand, it is possible to identify quantum correlations even in separable states. This was first observed by Ollivier and Zurek and by Henderson and Vedral, who derived the quantum discord as a signature of quantum correlations in bipartite systems [6,7]. The original definition of discord relies on the difference between two extensions of the classical mutual information to the quantum case. A generalization of discord using other entropic forms, by a direct replacement of the von Neumann entropy [8] with general entropies, like the Rényi [9] or Tsallis [10] ones, as proposed in [11], fails, as has been shown in [12,13].
Here, we aim to obtain quantum correlation measures by using general entropic forms, namely (q,s)-entropies (or unified entropies) [14,15]. To avoid the difficulty discussed in [12,13], we follow an alternative approach inspired by the work of Luo [16]. We propose as quantum correlation measures the minimal change in unified entropies induced by a local rank-one measurement, divided by a factor that depends on the generalized purity only in the case of nonadditive entropies (this adjusting factor becomes trivial for additive entropies). Several quantum correlation measures discussed in the literature, like [16-27], among others, are particular cases of (or close to) our proposal (see [28] for a recent review of quantum correlations). Indeed, the case of trace-form entropies [29], which are nonadditive entropies (except the von Neumann case), has been dealt with in [19,20,24] and deserves a particular mention. These entropic quantum correlation measures artificially increase when an uncorrelated ancilla is appended to the system (the geometric discord [22] has the same issue, as has been pointed out in [30]). The nonadditivity of trace-form entropies is the cause of this problem. We solve this in the case of (q,s)-entropies by introducing a generalized purity factor, similarly to what has been done with the geometric discord, that is, dividing it by the purity [25]. In this way, we obtain a family of (q,s)-entropic measures of quantum correlations that are invariant under the addition of an uncorrelated ancilla, both in the cases of additive and non-additive entropies. In addition, the computability of our entropic quantum correlation measures remains equal to that of the previous ones [19,20,24], since the adjustment factor is simply the trace of a power of the density operator.

The outline of this work is as follows. Our proposal and main results are given in Sec. II. In II A, we review the notion and some properties of (q,s)-entropies and majorization, and we introduce a family of entropic measures of disturbance due to a projective measurement. In II B, we introduce the general entropic quantum correlation measures by quantifying disturbances due to local projective measurements, distinguishing between total and semiquantum correlations. Besides, we provide basic properties that justify our proposal. In II C, we find a lower bound of the entropic quantum correlations in terms of a generalized entanglement entropy. In II D, we establish some interesting relationships between total and semiquantum measures. Then, in Sec. III we present some typical examples where we apply our correlation measures. Finally, some conclusions are drawn in Sec. IV.

II. ENTROPIC MEASURES OF QUANTUM CORRELATIONS

A. Unified entropies, majorization and (q,s)-disturbances

Let a quantum system be described by a density operator ρ, that is, a trace-one positive semidefinite operator acting on an N-dimensional Hilbert space H_N. The quantum unified (q,s)-entropies of the state are defined as [14,15]

    S_{(q,s)}(ρ) = [(Tr ρ^q)^s − 1] / [(1 − q)s],

for entropic indexes q > 0, q ≠ 1 and s ≠ 0.
Notice that the quantum Tsallis entropies [10] are obtained for s = 1, an interesting case being q = 2, S_{(2,1)}(ρ) ≡ S_2(ρ) = 1 − Tr ρ², which is directly related to the purity of the state. On the other hand, the von Neumann entropy [8], S(ρ) = −Tr ρ ln ρ, is recovered in the limiting case q → 1, whereas the Rényi entropies [9], S^R_q(ρ) = (ln Tr ρ^q)/(1 − q), are recovered in the limiting case s → 0. A feature of (q,s)-entropies is their nonadditive character [14], which is reflected in the sum rule for product states ρ_A ⊗ ρ_B acting on a Hilbert space H_{N_A} ⊗ H_{N_B},

    S_{(q,s)}(ρ_A ⊗ ρ_B) = S_{(q,s)}(ρ_A) + S_{(q,s)}(ρ_B) + (1 − q)s S_{(q,s)}(ρ_A) S_{(q,s)}(ρ_B).   (5)

Notice that in the cases q = 1 or s = 0, one recovers the additivity of the von Neumann and Rényi entropies.

A closely related concept to entropy is majorization (see e.g. [31]). Let us consider two density operators ρ and σ, and the corresponding probability vectors p and q formed by the eigenvalues of ρ and σ, respectively, sorted in decreasing order. Then, ρ is majorized by σ, denoted as ρ ≺ σ, means that Σ_{i=1}^n p_i ≤ Σ_{i=1}^n q_i for all n = 1, ..., N − 1, where N = max{rank ρ, rank σ} and rank denotes the rank of a density operator. Notice that if rank ρ ≤ rank σ, we complete the vector p with 0 entries to have the same length as q, and vice versa. This has no impact on the value of unified entropies, due to the expansibility property.

It can be shown that (q,s)-entropies preserve the majorization relation (see e.g. [15,32]), that is, ρ ≺ σ implies S_{(q,s)}(ρ) ≥ S_{(q,s)}(σ), with equality if and only if ρ and σ have the same eigenvalues. We observe that the reciprocal does not hold in general, which means that majorization is stronger (as an order relation) than a single choice of the entropic indexes. Now, using the Schur-concavity, it is straightforward to show that (q,s)-entropies are lower and upper bounded:

    0 ≤ S_{(q,s)}(ρ) ≤ S_{(q,s)}(I/N),

where the first inequality is attained for pure states, whereas the second one for the maximally mixed state ρ_* = I/N.
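As a minimal numerical sketch of the definitions above (our own code, with tolerance thresholds chosen for illustration), the unified (q,s)-entropy can be evaluated from a spectrum, with the von Neumann and Rényi limits handled explicitly:

```python
import numpy as np

def unified_entropy(eigs, q, s):
    """(q,s)-entropy from the eigenvalues of a density operator:
        S_{(q,s)} = ((sum_i eigs_i**q)**s - 1) / ((1-q)*s),
    with the von Neumann (q -> 1) and Renyi (s -> 0) limits treated
    explicitly. A sketch; tolerances and conventions are our choices."""
    eigs = np.asarray(eigs, dtype=float)
    eigs = eigs[eigs > 1e-12]                # 0 log 0 = 0 convention
    if abs(q - 1.0) < 1e-9:                  # von Neumann limit
        return float(-np.sum(eigs * np.log(eigs)))
    t = float(np.sum(eigs ** q))             # Tr rho^q
    if abs(s) < 1e-9:                        # Renyi limit
        return np.log(t) / (1.0 - q)
    return (t ** s - 1.0) / ((1.0 - q) * s)

# Tsallis-2 entropy of the maximally mixed qubit: 1 - Tr rho^2 = 1/2
print(unified_entropy([0.5, 0.5], q=2, s=1))  # 0.5
```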
On the other hand, the eigenvalues of a density operator ρ are invariant under arbitrary unitary transformations U; in other words, ρ and UρU† have the same eigenvalues. Hence, (q,s)-entropies are invariant under unitary transformations,

    S_{(q,s)}(UρU†) = S_{(q,s)}(ρ).

Moreover, we will see in the next subsection that the change in entropy due to local measurements plays a key role in quantifying quantum correlations. Before that, we recall the action of a bistochastic map on an arbitrary state. A bistochastic (or completely positive, trace-preserving, unital) map E can be written in the Kraus form as E(ρ) = Σ_k E_k ρ E_k†, with both sets of positive operators E_k† E_k (trace preservation) and E_k E_k† (unitality) summing to the identity (see e.g. [33]). Notice that this map leaves the maximally mixed state invariant (i.e., E(ρ_*) = ρ_*). It can be shown that E(ρ) ≺ ρ for all ρ if and only if E is a bistochastic map [34]; in other words, for bistochastic maps the final state E(ρ) is more disordered (in terms of majorization) than the initial state ρ. As a consequence of (10) and the Schur-concavity of the (q,s)-entropies, we have

    S_{(q,s)}(E(ρ)) ≥ S_{(q,s)}(ρ),   (11)

where the equality is attained if and only if E(ρ) = UρU†. Hereafter, we are only interested in rank-one projective measurements without postselection, that is, a set of orthogonal rank-one projectors Π = {P_i = |i⟩⟨i|} (i.e., P_i P_{i'} = δ_{ii'} P_i and Σ_{i=1}^N P_i = I), with {|i⟩} an orthonormal basis of H_N. The state after a rank-one projective measurement Π is equal to

    Π(ρ) = Σ_{i=1}^N P_i ρ P_i.

As projective measurements are particular cases of bistochastic maps, an inequality similar to (11) also holds for Π. Thus, we propose to use the difference of (q,s)-entropies between the final and initial states, rescaled by a factor depending on the generalized purity, as a signature of the disturbance of the state of a system due to the measurement, that is,

    D^Π_{(q,s)}(ρ) = [S_{(q,s)}(Π(ρ)) − S_{(q,s)}(ρ)] / (Tr ρ^q)^s.   (12)

For any choice of the entropic indexes this quantity is nonnegative, and it vanishes if and only if the measurement does not disturb the state (i.e., Π(ρ) = ρ), which happens when measuring in the basis that diagonalizes ρ. Notice that the rescaling factor plays no role for von Neumann and Rényi entropies (additive entropies); on the contrary, it does for nonadditive entropies. In the next subsection, we will clarify the importance of the rescaling by (Tr ρ^q)^s when dealing with quantum correlation measures based on nonadditive entropies. Finally, notice that two interesting cases arise from the definition (12). The first one consists in considering the von Neumann entropy, in which case the disturbance can be recast as the quantum relative entropy (or quantum Kullback-Leibler divergence) between ρ and Π(ρ), that is,

    D^Π_{(1,s)}(ρ) = S(ρ ‖ Π(ρ)),

where S(ρ‖σ) = Tr(ρ(ln ρ − ln σ)) is the quantum relative entropy. The second one comes from evaluating (12) at the Tsallis entropy with entropic index equal to 2, for which the disturbance is expressed in terms of the Hilbert-Schmidt distance between ρ and Π(ρ), divided by the purity of ρ,

    D^Π_{(2,1)}(ρ) = ‖ρ − Π(ρ)‖² / Tr ρ²,

where ‖A‖ = √(Tr A†A) is the Hilbert-Schmidt norm.
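Continuing the sketch, the disturbance (12) for a single system can be evaluated numerically as follows; `measure` implements the pinching Π(ρ), and the example reproduces the Tsallis-2 value for a pure |+⟩ state measured in the computational basis (this reuses the unified_entropy function from the previous sketch):

```python
import numpy as np

def measure(rho, basis):
    """Rank-one projective measurement without postselection:
    Pi(rho) = sum_i |i><i| rho |i><i|, |i> being the columns of `basis`."""
    P = [np.outer(basis[:, i], basis[:, i].conj())
         for i in range(basis.shape[1])]
    return sum(p @ rho @ p for p in P)

def disturbance(rho, basis, q, s):
    """D^Pi_(q,s) = [S_(q,s)(Pi(rho)) - S_(q,s)(rho)] / (Tr rho^q)^s."""
    e_in = np.linalg.eigvalsh(rho)
    e_out = np.linalg.eigvalsh(measure(rho, basis))
    purity_factor = float(np.sum(e_in[e_in > 1e-12] ** q)) ** s
    return (unified_entropy(e_out, q, s)
            - unified_entropy(e_in, q, s)) / purity_factor

# A qubit in |+> measured in the computational basis:
plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus)
print(disturbance(rho, np.eye(2), q=2, s=1))  # 0.5, the maximal value
```

The printed value 0.5 agrees with ‖ρ − Π(ρ)‖²/Tr ρ² for this state, as the Tsallis-2 special case requires.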
B. Quantum correlations from disturbance due to a local projective measurement

Let us consider a bipartite quantum system AB with density operator ρ_AB acting on a product finite-dimensional Hilbert space H_{N_A} ⊗ H_{N_B}. Following [16], we consider the local rank-one projective measurements (without postselection) Π_A = {P^A_i ⊗ I_B}, Π_B = {I_A ⊗ P^B_j} and Π_AB = {P^A_i ⊗ P^B_j}, where the P^A_i and P^B_j are sets of orthogonal rank-one projectors that sum to the identity, I_A and I_B, respectively. Then, the resulting states after these measurements are

    Π_A(ρ_AB) = Σ_i (P^A_i ⊗ I_B) ρ_AB (P^A_i ⊗ I_B),   (15)
    Π_B(ρ_AB) = Σ_j (I_A ⊗ P^B_j) ρ_AB (I_A ⊗ P^B_j),   (16)
    Π_AB(ρ_AB) = Σ_{i,j} (P^A_i ⊗ P^B_j) ρ_AB (P^A_i ⊗ P^B_j).   (17)

According to [16], these states are called classical-quantum (CQ), quantum-classical (QC) and classical-classical (CC) correlated states with respect to the local measurements Π_A, Π_B and Π_AB, respectively. A state is said to be CQ correlated if there is a local projective measurement over A that does not disturb it, i.e., Π_A(ρ_AB) = ρ_AB (analogously for QC and CC correlated states). All these states are separable (i.e., nonentangled), as they are convex combinations of product states [1], although not all separable states are of the forms (15)-(17). Moreover, the sets formed by all CQ, QC and CC correlated states, denoted as Ω_A, Ω_B and Ω_AB, respectively, are not convex, in contrast to the set of separable states. Notice that Ω_A and Ω_B are the sets of zero quantum discord states with respect to H_{N_A} and H_{N_B}, respectively [22,35]. In the sequel, for the sake of brevity, we will use L to denote either A or B, and K to denote A, B or AB. Now, we can use (12) to quantify the disturbance due to the local projective measurement Π_K,

    D^{Π_K}_{(q,s)}(ρ_AB) = [S_{(q,s)}(Π_K(ρ_AB)) − S_{(q,s)}(ρ_AB)] / (Tr ρ_AB^q)^s.   (18)

We denote the D^{Π_L}_{(q,s)} as unilocal disturbances, whereas the D^{Π_AB}_{(q,s)} are bilocal disturbances. In order to obtain a measurement-independent signature of quantum correlations, one takes the minimum of the disturbances (18) over the set of local measurements, that is,

    D^K_{(q,s)}(ρ_AB) = min_{Π_K} D^{Π_K}_{(q,s)}(ρ_AB).   (19)

The following properties justify our proposal (19) as measures of quantum correlations: (i) nonnegativity: D^K_{(q,s)}(ρ_AB) ≥ 0, with equality if and only if ρ_AB ∈ Ω_K. Accordingly, the D^L_{(q,s)} are semiquantum correlation measures (with respect to H_{N_L}), whereas the D^{AB}_{(q,s)} are total quantum correlation measures; (ii) invariance under local unitary operators: D^K_{(q,s)}((U ⊗ V) ρ_AB (U ⊗ V)†) = D^K_{(q,s)}(ρ_AB), where U and V are unitary operations over A and B, respectively; and (iii) invariance when an uncorrelated ancilla is appended to the system: D^K_{(q,s)}(ρ_AB ⊗ ρ_C) = D^K_{(q,s)}(ρ_AB) for bipartitions A|BC or B|AC (for the bipartition AB|C the quantum correlation measures naturally vanish).

The first property is a direct consequence of the majorization relation between the states after and before local projective measurements. The second one can be proved from the definition of our measure, Eq.
(19), noting that Π_K((U ⊗ V) ρ_AB (U ⊗ V)†) = (U ⊗ V) Π'_K(ρ_AB) (U ⊗ V)† for a correspondingly rotated local measurement Π'_K, and recalling the invariance of (q,s)-entropies under unitary transformations. The third property is more subtle, and it is related to the sum rule (5) of the (q,s)-entropies. Indeed, the generalized purity factor (Tr ρ_AB^q)^s plays a crucial role in fulfilling this property in the case of nonadditive entropies, without affecting the complexity of computing the measures. In general, this property has not been taken into account in the literature on nonadditive entropic measures of quantum correlations. For instance, entropic quantum correlation measures based on the difference of trace-form entropies, i.e., S_φ(ρ) = Tr φ(ρ) with φ concave and φ(0) = 0 [29], have been dealt with in Refs. [19,24]. However, these measures are not invariant when an uncorrelated ancilla is appended to the system, except in the von Neumann case. This is a direct consequence of the nonadditivity of trace-form entropies. For a more general discussion about necessary and reasonable conditions for quantum correlation measures, see [36]. Moreover, our semiquantum correlation measures can also be interpreted as a quantum deviation from the Bayes rule, in a way similar to that discussed in [24].

We remark that our quantum correlation measures include some important cases already discussed in the literature. The first one consists in evaluating (19) for the von Neumann entropy. In this case we reobtain the so-called information deficit [18], which can be rewritten in terms of the minimal relative entropy over the sets Ω_K [21],

    D^K_{(1,s)}(ρ_AB) = min_{σ ∈ Ω_K} S(ρ_AB ‖ σ).

The second one arises when evaluating (19) for the Tsallis entropy with entropic index equal to 2. This case is close to the geometric discord [22],

    D^K_G(ρ_AB) = min_{σ ∈ Ω_K} ‖ρ_AB − σ‖².

Indeed, using the expression of D^K_G in terms of local projective measurements given in [23], we obtain

    D^K_{(2,1)}(ρ_AB) = D^K_G(ρ_AB) / Tr ρ_AB².

Notice that D^K_G is not invariant when an uncorrelated ancilla is appended to the system [30]. The purity rescaling factor solves this issue [25], although it is not the unique way to do it (see e.g. [25,37]). Finally, notice that in the case of Rényi entropies, which have recently been introduced in this context in [27], our measure fulfills the desired invariance property when appending an uncorrelated ancilla to the system.
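A brute-force numerical sketch of the semiquantum measure D^A_{(q,s)} of Eq. (19) for two qubits is shown below (all function names and the crude multistart minimization are our own choices, and it reuses the unified_entropy sketch above); for a Bell state with q = 2, s = 1 it reproduces the value 0.5, consistent with the geometric-discord correspondence divided by the purity:

```python
import numpy as np
from scipy.optimize import minimize

def local_basis(theta, phi):
    """Orthonormal qubit basis parametrized by Bloch angles (theta, phi)."""
    v0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    v1 = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
    return np.column_stack([v0, v1])

def unilocal_disturbance(rho, angles, q, s):
    """D^{Pi_A}_(q,s)(rho) for a two-qubit rho, Eq. (18) with K = A."""
    U = local_basis(*angles)
    pinched = np.zeros_like(rho)
    for i in range(2):                       # P_i^A (x) I_B pinching
        P = np.kron(np.outer(U[:, i], U[:, i].conj()), np.eye(2))
        pinched = pinched + P @ rho @ P
    e_in = np.linalg.eigvalsh(rho)
    e_out = np.linalg.eigvalsh(pinched)
    factor = float(np.sum(e_in[e_in > 1e-12] ** q)) ** s
    return (unified_entropy(e_out, q, s)
            - unified_entropy(e_in, q, s)) / factor

def semiquantum_correlation(rho, q, s, tries=20, seed=0):
    """D^A_(q,s), Eq. (19): crude multistart minimization over angles."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(tries):
        res = minimize(lambda x: unilocal_disturbance(rho, x, q, s),
                       rng.uniform(0, np.pi, size=2))
        best = min(best, res.fun)
    return best

# Bell state: D^A_(2,1) = 0.5 (geometric discord divided by purity 1).
bell = np.zeros((4, 4))
bell[[0, 0, 3, 3], [0, 3, 0, 3]] = 0.5
print(semiquantum_correlation(bell, q=2, s=1))
```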
C. Lower bound and its relation with entanglement

First, let us note that since the QC, CQ and CC correlated states (15)-(17) are separable, they fulfill some general entropic separability inequalities (see e.g. [32]); in particular,

    S_{(q,s)}(Π_K(ρ_AB)) ≥ max{S_{(q,s)}(Tr_B Π_K(ρ_AB)), S_{(q,s)}(Tr_A Π_K(ρ_AB))}.   (23)

On the other hand, the corresponding final reduced states are either the original reduced states ρ_L or their pinched versions ρ_L^diag, where ρ_L^diag denotes the diagonal of ρ_L in the basis underlying {P^L_i}. Since ρ_L^diag ≺ ρ_L [33], and due to the Schur-concavity of the (q,s)-entropies, inequality (23) reduces to

    S_{(q,s)}(Π_K(ρ_AB)) ≥ max{S_{(q,s)}(ρ_A), S_{(q,s)}(ρ_B)}.   (26)

Thus, plugging (26) into (18) to lower-bound D^{Π_K}_{(q,s)}(ρ_AB) and taking the minimum, we obtain that the quantum correlation measures are lower bounded as follows:

    D^K_{(q,s)}(ρ_AB) ≥ [max{S_{(q,s)}(ρ_A), S_{(q,s)}(ρ_B)} − S_{(q,s)}(ρ_AB)] / (Tr ρ_AB^q)^s.   (27)

Notice that this lower bound can be nontrivial only for entangled states; indeed, the right-hand side of (27) is negative for separable states. A similar result has already been obtained in the case of trace-form entropies [19]. Now, let us consider a pure state ρ_AB = |Ψ_AB⟩⟨Ψ_AB|, and let

    |Ψ_AB⟩ = Σ_{k=1}^{n} √λ_k |k_A⟩ ⊗ |k_B⟩   (28)

be its Schmidt decomposition (n ≤ min{N_A, N_B}, and the {|k_L⟩} are orthonormal sets). It can be shown that the reduced states ρ_A = Tr_B |Ψ_AB⟩⟨Ψ_AB| and ρ_B = Tr_A |Ψ_AB⟩⟨Ψ_AB| have the same unified entropy and, as a consequence, the lower bound (27) reduces to S_{(q,s)}(ρ_A) = S_{(q,s)}(ρ_B) for pure states ρ_AB. Moreover, this bound is saturated when the local measurements are taken in the Schmidt basis. After these measurements, i.e., choosing the local projectors as P^L_k = |k_L⟩⟨k_L| (completed to obtain N_L projectors), the state is given by Π_K(ρ_AB) = Σ_k λ_k P^A_k ⊗ P^B_k, with unified entropies S_{(q,s)}(Π_K(ρ_AB)) = S_{(q,s)}(ρ_A) = S_{(q,s)}(ρ_B). Therefore, we obtain that for pure states the entropic quantum correlation measures become a generalization of the entanglement entropy, which for the von Neumann entropy reduces to the standard one [38].

D. Relationships between total and semiquantum correlations

It is possible to find some interesting relationships between total and semiquantum correlations when the bilocal disturbances D^{Π_AB}_{(q,s)}(ρ_AB) are rewritten in terms of unilocal disturbances,

    D^{Π_AB}_{(q,s)}(ρ_AB) = D^{Π_A}_{(q,s)}(ρ_AB) + π^{Π_A}_{(q,s)} D^{Π_B}_{(q,s)}(Π_A(ρ_AB)),   (30)
    D^{Π_AB}_{(q,s)}(ρ_AB) = D^{Π_B}_{(q,s)}(ρ_AB) + π^{Π_B}_{(q,s)} D^{Π_A}_{(q,s)}(Π_B(ρ_AB)),   (31)

where π^Π_{(q,s)} = [Tr(Π(ρ_AB))^q / Tr(ρ_AB)^q]^s (for the sake of brevity, we omit the dependence of this factor on the state). This quantity π^Π_{(q,s)} is nonnegative, but it can take values below or above 1, depending on the value of the entropic parameter q. As Π(ρ) ≺ ρ, we have Tr Π(ρ)^q ≤ Tr ρ^q if q ≥ 1, whereas Tr Π(ρ)^q ≥ Tr ρ^q holds if 0 < q < 1. Thus, π^Π_{(q,s)} ∈ (0, 1] if q ≥ 1, else π^Π_{(q,s)} ≥ 1. In particular, for Rényi entropies the factor is always equal to 1.

Now, let us consider two possible measurement scenarios: (i) Π^{AB}_0 is a bilocal measurement that minimizes the total quantum correlation measure, i.e., D^{AB}_{(q,s)}(ρ_AB) = D^{Π^{AB}_0}_{(q,s)}(ρ_AB); and (ii) Π^{AB}_1 = Π^A_1 ⊗ Π^B_1, where the Π^L_1 optimize the unilocal disturbances, i.e., D^L_{(q,s)}(ρ_AB) = D^{Π^L_1}_{(q,s)}(ρ_AB). Applying Eqs. (30)-(31) to both scenarios, we obtain the corresponding decompositions (32) and (33). Using that D^{AB}_{(q,s)}(ρ_AB) ≤ D^{Π^{AB}_1}_{(q,s)}(ρ_AB) (and the analogous relations for the unilocal disturbances) on Eqs. (32)-(33), respectively, it can be shown that D^{AB}_{(q,s)}(ρ_AB) is lower and upper bounded, as in (34)-(35). In particular, given that the nonoptimal unilocal disturbances in (34) are nonnegative, we naturally obtain that the total quantum correlations are greater than or equal to the semiquantum ones,

    D^{AB}_{(q,s)}(ρ_AB) ≥ D^L_{(q,s)}(ρ_AB).   (36)

This result can also be obtained more directly from the fact that S_{(q,s)}(Π^{AB}_0(ρ_AB)) ≥ S_{(q,s)}(Π^L_1(ρ_AB)). Notice that (36) is in accordance with the inclusion relations among the sets of CQ, QC and CC correlated states, i.e., Ω_AB ⊆ Ω_A and Ω_AB ⊆ Ω_B. Moreover, we can deduce from Eqs.
(32)-(33) the following inequality for the sum of semiquantum correlations, Eq. (37), in which the quantities ∆_i, with i = 0, 1, collect the rescaled nonoptimal unilocal disturbances of the two scenarios. Notice that for CQ and QC correlated states one has ∆_1 = 0, Π^L_1 being defined by the set {P^L_i} so that it does not disturb the joint state. Finally, notice that for CC correlated states, all quantities in (37) vanish. Therefore, from these observations together with (36), we obtain that a triangle-like inequality between total and semiquantum correlations,

    D^{AB}_{(q,s)}(ρ_AB) ≤ D^A_{(q,s)}(ρ_AB) + D^B_{(q,s)}(ρ_AB),   (38)

is trivially satisfied for CQ, QC and CC correlated states. The validity of the triangle-like inequality (38) in the general case relies on the sign of ∆_1. If ∆_1 ≥ 0 for all ρ_AB, the inequality is generally true. On the contrary, if ∆_1 < 0 for some ρ_AB, then it could be the case that the inequality does not hold for those states.

Although the most general conditions for the validity of the triangle-like inequality (38) are hard to analyze, we can link its validity with a kind of local contractivity property of the unilocal disturbances. Specifically, let us assume as valid the following inequalities:

    π^{Π_B}_{(q,s)} D^{Π_A}_{(q,s)}(Π_B(ρ_AB)) ≤ D^{Π_A}_{(q,s)}(ρ_AB),   (39)
    π^{Π_A}_{(q,s)} D^{Π_B}_{(q,s)}(Π_A(ρ_AB)) ≤ D^{Π_B}_{(q,s)}(ρ_AB).   (40)

Then, replacing any of these relations in (33), we obtain D^{Π^{AB}_1}_{(q,s)}(ρ_AB) ≤ D^A_{(q,s)}(ρ_AB) + D^B_{(q,s)}(ρ_AB). Finally, recalling that D^{AB}_{(q,s)}(ρ_AB) ≤ D^{Π^{AB}_1}_{(q,s)}(ρ_AB), the triangle-like inequality (38) follows. Thus, we are able to link the validity of the triangle-like inequality, for all states and any entropic indexes, with the assumption of contractivity of the unilocal disturbances under local projective measurements. In the case of the von Neumann entropy, inequalities (39)-(40) are particular cases of the contractivity of the quantum relative entropy under trace-preserving completely positive maps [39]. Otherwise, for the Tsallis entropy of entropic index 2, inequalities (39)-(40) are particular cases of the contractivity of the Hilbert-Schmidt distance under projective measurements [40]. Therefore, in both cases the triangle-like inequality is satisfied (notice that for the latter, this result has been proved in an alternative way [26]). Unfortunately, the local contractivity is not valid for general entropic functionals. Indeed, we show that this is the case for a wide range of the entropic indexes of the Rényi and Tsallis entropies in Fig. 1.

Figure 1. Minimal differences between D^{Π_A}_{(q,s)}(ρ_AB) and π^{Π_B}_{(q,s)} D^{Π_A}_{(q,s)}(Π_B(ρ_AB)), computed for 10³ random local projective measurements Π_{A(B)}, using Tsallis entropies (left figure) and Rényi entropies (right figure). Each line corresponds to a random two-qubit state. Notice that a wide range of values of the parametric index q yields negative values for these differences, implying a violation of the contractivity property under local projective measurements (see relations (39)-(40) and the text for details). For q = 1 both measures converge to the von Neumann-based one, which fulfills the contractivity property. The same happens for Tsallis with q = 2, corresponding to the Hilbert-Schmidt distance. Interestingly, in the Tsallis case we have been unable to find a counterexample to the mentioned contractivity for q ∈ (1, 2) (shaded region of the left figure).

III. EXAMPLES

A. Mixtures of a pure state and the maximally mixed one

An interesting example where the computations can be carried out analytically involves the family of pseudopure states, given by mixtures of an arbitrary pure state |ψ_AB⟩ ∈ H_{N_A} ⊗ H_{N_B} with the maximally mixed state, yielding

    ρ^p_AB = (1 − p) I_{N_AB}/N_AB + p |ψ_AB⟩⟨ψ_AB|,   (44)
with 0 ≤ p ≤ 1 (recall that N_AB = N_A N_B). The spectrum of ρ^p_AB is given by the eigenvalue (1 − p)/N_AB + p, with multiplicity 1, and the eigenvalues (1 − p)/N_AB, with multiplicity N_AB − 1. The measurements that optimize both the unilocal and the bilocal quantifiers are unique (they do not depend on the entropic form) and are given by the local Schmidt bases [19]. This entropy-independent optimal measurement is not a universal property, but depends on the particular states. In this case, measuring in the Schmidt basis yields a final spectrum that is majorized by the spectrum corresponding to any other measurement, implying the entropy-independent optimization. After the measurement, the spectrum is given by the eigenvalues (1 − p)/N_AB, with multiplicity N_AB − n, and the eigenvalues (1 − p)/N_AB + p λ_k, with 1 ≤ k ≤ n, where n is the Schmidt number and the λ_k are the squares of the Schmidt coefficients (28). Using Eq. (19), the generalized quantum correlations of pseudopure states then follow by evaluating the (q,s)-entropies directly on these two spectra. It is remarkable that, in this particular case, and given the collapse of the semiquantum and total quantifiers, the triangle-like inequality (38) holds for the most general (q,s)-entropic forms.

In particular, when |ψ_AB⟩ is a maximally entangled state, with N_A = N_B = N, the states ρ^p_AB constitute a family of isotropic states, ρ^p_I. In that case, λ_k = N^{−1} for all k, n = N, and the generalized quantum correlations follow from the spectra above; specializing to Tsallis and Rényi entropies, one obtains the corresponding closed-form expressions.

B. Werner and isotropic states

Although isotropic states are particular cases of Eq. (44), i.e., mixtures of a pure state and the maximally mixed one, we aim to show that for both isotropic [41] and Werner states [1], due to their symmetries, the disturbances are independent of the local measurements performed. A Werner state is an N × N-dimensional bipartite quantum state that is invariant under local unitary transformations of the form U ⊗ U, with U an arbitrary unitary acting on the N-dimensional subsystems, that is, ρ_W = (U ⊗ U) ρ_W (U† ⊗ U†). On the other hand, an N × N-dimensional isotropic state is invariant under arbitrary local unitaries of the form U ⊗ U*, that is, ρ_I = (U ⊗ U*) ρ_I (U† ⊗ (U*)†). Each family can be parametrized by a single mixing parameter, which for isotropic states we call y (Eq. (50)). Notice that both definitions of isotropic states, the one derived from Eq. (44) and the one given by Eq. (50), coincide under the identification p = (N²y − 1)/(N² − 1) and |ψ_AB⟩ = |ψ+⟩. To see that any local measurement yields the same disturbance over these families of states, let us consider Π^A_1 as the optimal unilocal measurement over A. Any other local measurement is achieved by a unitary transformation over A, Π^A = V Π^A_1 V†, with V an arbitrary unitary over A. Then, using the invariance properties of Werner states, the action of Π^A on ρ_W coincides, up to local unitaries, with that of Π^A_1. Analogous results hold for isotropic states and for measurements over B. Invoking the unitary invariance of (q,s)-entropies, one has that the minimum in (19) is attained for any local projective measurement. To prove that nothing changes when considering bilocal measurements, it is sufficient to observe that after any local measurement the state becomes a CC correlated state. Thus, given that the total disturbance can be computed via the partial disturbances (see Eqs. (30)-(31)), the total quantum correlations are equal to the semiquantum ones.
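The pseudopure-state spectra given in Sec. III A can be turned directly into a numerical evaluation of the correlation measures; the following sketch (our own code, reusing unified_entropy from above) reproduces, for p = 1 and a maximally entangled two-qubit state, the same value 0.5 obtained by the brute-force minimization earlier:

```python
import numpy as np

def pseudopure_spectra(p, lam, N_AB):
    """Spectra of rho_p = (1-p) I/N_AB + p |psi><psi| before and after
    the bilocal Schmidt-basis measurement, following the text above.
    lam: squared Schmidt coefficients of |psi> (length n, summing to 1)."""
    n = len(lam)
    before = [(1 - p) / N_AB + p] + [(1 - p) / N_AB] * (N_AB - 1)
    after = ([(1 - p) / N_AB + p * l for l in lam]
             + [(1 - p) / N_AB] * (N_AB - n))
    return np.array(before), np.array(after)

def correlation(p, lam, N_AB, q, s):
    """D^K_(q,s) for pseudopure states: Eq. (19) evaluated on the spectra."""
    before, after = pseudopure_spectra(p, lam, N_AB)
    factor = float(np.sum(before ** q)) ** s
    return (unified_entropy(after, q, s)
            - unified_entropy(before, q, s)) / factor

# Isotropic two-qubit case: N = 2, |psi+> with lam = [1/2, 1/2]:
print(correlation(p=1.0, lam=[0.5, 0.5], N_AB=4, q=2, s=1))  # 0.5
```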
IV. CONCLUDING REMARKS

In this work we address the problem of quantifying quantum correlations beyond discord. Specifically, following [16], we obtain entropic measures of bipartite quantum correlations by quantifying the disturbance of the system's state under local measurements. Our measures are based on very general entropic forms given by the (q,s)-entropies. As a consequence, we obtain quantum correlation measures which include as particular cases, or are close to, several other measures previously discussed in the literature [16-22,27]. Our main contribution is to propose quantum correlation measures based on quantum unified (q,s)-entropies that are: (i) nonnegative and vanishing only for QC, CQ and CC correlated states, (ii) invariant under local unitary operators, and (iii) invariant under the addition of an uncorrelated ancilla. Regarding the last property, we show that when the (q,s)-entropies are nonadditive, that is, away from the limits q → 1 and s → 0, it is necessary to rescale the disturbances by a generalized purity factor in order to avoid undesirable effects present in previous entropy-based correlation measures [19,20,24].

Moreover, we distinguish between total and semiquantum correlations, and we naturally obtain that the former are greater than or equal to the latter. In addition, we show that a triangle-like inequality is fulfilled for certain families of states, namely CQ, QC and CC correlated states, as well as Werner and isotropic states, for any entropic measure. In the general case, we only prove this for the von Neumann entropy and for the Tsallis entropy of entropic index 2, for which it follows from the contractivity under a projective measurement of the quantum relative entropy and the Hilbert-Schmidt distance, respectively. We provide numerical counterexamples where the local contractivity property of unilocal disturbances fails for a wide range of the entropic indexes of the Rényi and Tsallis entropies, but it remains open whether the triangle-like inequality is fulfilled for other entropic measures.

Finally, we provide analytical expressions of the entropic correlation measures for pseudopure, Werner and isotropic states. For these families of states, the optimal measurements for the unilocal and bilocal disturbances are independent of the entropic form.
2016-04-01T17:00:10.000Z
2016-04-01T00:00:00.000
{ "year": 2016, "sha1": "3e144f2b83ebd6c64fc1a792cefe85dcda4355e0", "oa_license": "CCBYNCSA", "oa_url": "https://ri.conicet.gov.ar/bitstream/11336/66366/2/CONICET_Digital_Nro.43a1c554-d9d9-4bb5-b659-de758bbc0ecb_A.pdf", "oa_status": "GREEN", "pdf_src": "ArXiv", "pdf_hash": "3e144f2b83ebd6c64fc1a792cefe85dcda4355e0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
247792835
pes2o/s2orc
v3-fos-license
Clinical outcomes and revision rates following four-level anterior cervical discectomy and fusion

Studies on outcomes after four-level anterior cervical discectomy and fusion (ACDF) are limited in the literature. The purpose of this study was to report on clinical outcomes and revision rates following four-level ACDF. Patients operated with four-level ACDF were identified in a prospectively accrued single-institution database. Outcome scores included the Neck Disability Index (NDI) and Visual Analogue Scale (VAS) for neck and arm pain. Reoperation rates were determined. Any complications were identified from a review of the medical records. Twenty-eight patients with a minimum of 12 months follow-up were included in the analysis. The mean age at surgery was 58.5 years. The median radiographic follow-up time was 23 (IQR = 16–31.25) months. Cervical lordosis was significantly improved postoperatively (−1 to −13, p < 0.001). At the median 24 (IQR = 17.75–39.50) months clinical follow-up time, there was a significant improvement in the NDI (38 to 28, p = 0.046) and VAS for neck pain scores (5.1 to 3, p = 0.012). The most common perioperative complication was transient dysphagia (32%), followed by hoarseness (14%). Four (14%) patients required revision surgery at a median 11.5 (IQR = 2–51) months postoperatively. The results of this study indicate that patients who undergo four-level ACDF have a significant improvement in clinical outcomes at median 24 months follow-up. Stand-alone four-level ACDF is a valid option for the management of complex cervical degenerative conditions.

Patient-reported questionnaire. Patient-reported outcome measures (PROMs) were collected prior to surgery (baseline) as well as at 3-6 months and 1-2 years after surgery. PROMs were periodically collected thereafter, depending on the clinical situation and need for further follow-up, as determined on a case-by-case basis. The last available PROMs were used in this analysis. The patient-reported questionnaire contained the Neck Disability Index (NDI) 20 and the Visual Analogue Scale (VAS) 21 for neck and arm pain. The Neck Disability Index (NDI) 20, a 10-item scaled questionnaire, is a modification of the Oswestry Disability Index (ODI) 22. It is widely used as a self-reported questionnaire for the assessment of disability in patients with neck pain. The total NDI score ranges from 0 (no disability) to 100 (maximal disability). The Visual Analogue Scale (VAS) 21 is one of the most common and widely used assessment tools in the measurement of pain. It ranges from 0 (no pain) to 10 (worst possible pain).

Operative technique. The operative technique was standardized across the 2 surgeons. Patients were placed supine with the occiput resting on a donut and a bump placed transversely under the scapula, providing appropriate neck extension. The shoulders were taped. Somatosensory-evoked potential and EMG monitoring were used in all cases. Motor evoked potential (MEP) monitoring was used on a select basis in higher-risk cases with significant spinal cord compromise. A right-sided transverse incision was performed, and the anterior cervical spine was exposed using a Smith-Robinson approach 4. Thorough removal of all disc material was performed under microscopic visualization, with removal of cartilage using microsurgical curettes and decortication with a high-speed burr, taking care not to violate the endplates. Pre-contoured lordotic machined allografts were used in most cases.
In cases where a corpectomy was performed, fibula allograft was used as necessary. The operative vertebral bodies were spanned with a lordotic titanium plate in a locking fashion. A Jackson-Pratt-type drain was used in most cases.

Statistical analysis. Results were presented as the median with interquartile range (IQR) and mean ± standard deviation, where applicable. A Mann-Whitney U test was used for group comparisons and a Wilcoxon signed-rank test was used for within-group comparisons. Missing data were handled with case-by-case exclusion, using pairwise deletion in the analyses. Statistical significance was set at p < 0.05. IBM SPSS statistical software version 23 was used to perform statistical analyses.

Results

Patient demographics and surgeon-reported data. A total of 36 patients who underwent a four-level ACDF at C3-C7 were identified in the database; 28 with a minimum of 12 months clinical follow-up were included in the analysis. Baseline PROMs for the variables of interest were available for 18 patients; 16 had complete data sets and 2 patients had data sets with missing values. The mean age at surgery was 58.5 (± 11) years, ranging from 41 to 79 years. There were 15 (53%) males. Five (18%) patients were smokers. Fifteen (54%) patients were classified as ASA III and 10 (35%) as ASA II. Of the 28 patients, 6 (21%) underwent a hybrid procedure with a one-level corpectomy. The mean operative time was 257 min (± 59) and the mean estimated blood loss was 134 ml (± 76) (Tables 1 and 2).

At the median 24 (IQR = 17.75–39.50) months clinical follow-up, there was a significant improvement in the NDI (38 to 28, p = 0.046) and VAS for neck pain scores (5.1 to 3, p = 0.012). VAS for arm pain was also improved; however, this improvement was not statistically significant (Table 3).

Perioperative complications. A summary of perioperative complications is presented in Table 2. The most common complication was transient postoperative dysphagia; it was observed in 9 (32%) patients. Spontaneous resolution of dysphagia was observed in all patients within 3-14 months after the surgery. Postoperative hoarseness was observed in 4 (14%) patients. In 3 patients, symptoms had resolved within 3 months after surgery. One patient underwent vocal cord assessment 11 months after surgery due to persistent hoarseness; the assessment showed a resolving post-intubation laryngeal granuloma. Hoarseness had resolved at the last follow-up 22 months after surgery. Postoperative C5 palsy with sensory loss and muscle weakness in the deltoid was observed in 1 (3.6%) patient. The patient made a full recovery at the last follow-up 24 months after surgery. One patient (3.6%) sustained a right-sided C7 nerve root injury perioperatively that led to permanent sensory loss and motor deficit. Other complications included wound dehiscence in one case (3.6%), treated conservatively with antibiotics, and Horner syndrome in one patient (3.6%), with complete resolution of symptoms 7 months postoperatively.

Reoperation rate. Four patients (14%) required revision surgery at a median of 11.5 months postoperatively (IQR = 2-51). The reasons for these reoperations are as follows: (1) graft extrusion and hardware failure at an early stage (2 cases), (2) new-onset C2-C3 degeneration with early signs of myelopathy, and (3) asymptomatic partial screw backout on routine follow-up imaging (Table 4). The median clinical follow-up time for these patients was 40 months (IQR = 17.5-58.75), ranging from 15 to 60 months.

Table 1. Summary of baseline characteristics for patients undergoing four-level ACDF. Descriptive data are presented as number (percentage) or mean (SD). ACDF: anterior cervical discectomy and fusion; ASA: American Society of Anesthesiologists.
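For illustration only, the within-group and between-group comparisons described in the Statistical analysis subsection can be reproduced with open-source tools as sketched below; the score arrays are placeholder values and do not correspond to the study's patient data:

```python
from scipy import stats

# Illustrative paired NDI scores (baseline vs. last follow-up); these are
# placeholder values only, not the study's individual patient data.
ndi_baseline = [44, 30, 52, 38, 26, 48, 40, 36, 58, 32]
ndi_followup = [30, 28, 40, 30, 24, 36, 38, 20, 50, 26]

# Wilcoxon signed-rank test for within-group (paired) comparisons
w_stat, p_within = stats.wilcoxon(ndi_baseline, ndi_followup)

# Mann-Whitney U test for two independent groups, e.g. patients with
# vs. without radiographic pseudarthrosis (again, placeholder values)
group_a = [28, 30, 24, 36, 20]
group_b = [26, 38, 30, 40, 50]
u_stat, p_between = stats.mannwhitneyu(group_a, group_b)

print(f"Wilcoxon signed-rank: p = {p_within:.3f}")
print(f"Mann-Whitney U:       p = {p_between:.3f}")
```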
Discussion

Few studies have reported on clinical outcomes after multilevel ACDF. The current study demonstrated a significant improvement in cervical lordosis, NDI and VAS for neck pain scores in patients treated surgically with four-level ACDF due to cervical spondylosis.

Figure: A 65-year-old female who presented with severe progressive cervical spondylotic myelopathy. MRI scan of the cervical spine showed multilevel cervical spondylosis with quite severe anterior cord compression, particularly at the C4/C5 level, where there was a large, partially sequestered disk (A). There was also significant spondylosis at C3/C4 and at C5/C6, and there was fairly significant degenerative foraminal narrowing at C6/C7 (B,C). The patient underwent a multilevel ACDF with C4 corpectomy (D). At the last follow-up 2 years postoperatively, X-rays of the cervical spine showed good alignment and solid fusion across each disc segment (E).

It is known that cervical sagittal malalignment secondary to degenerative changes can lead to pain, spinal cord compression and the development of myelopathy 18,23. Thus, restoring the sagittal profile is of crucial importance, as it has been shown to be associated with improved clinical outcome scores and decreased rates of adjacent segment degeneration 24. In the present study, cervical lordosis was assessed as a parameter of the sagittal profile, and the results revealed a significant improvement at the median 23 months follow-up. Similar results were also reported in a recently published retrospective study by Li et al 25; in a cohort of 70 consecutive patients with four-level cervical spondylotic myelopathy, treated surgically with either anterior cervical corpectomy and fusion or anterior cervical decompression and fusion, the authors found a significant improvement in cervical lordosis after surgery.

Recent studies have shown improvement in clinical outcomes after four-level ACDF 11,15,17. Wang et al 17 reported satisfactory clinical outcomes, with improvement in NDI, neck and arm pain and Japanese Orthopaedic Association (JOA) scores, in a retrospective review of 32 patients who underwent four-level ACDF and had a minimum of 5-year follow-up. Laratta et al 16 found a significant improvement in NDI, neck pain and arm pain at 2-year follow-up in a retrospective analysis of 46 patients with symptomatic spondylosis. Our results are in line with these studies; we demonstrated that four-level ACDF surgery may provide a significant clinical improvement in patients with multilevel cervical spondylosis. We consider these findings of importance, given the fact that NDI and VAS neck pain have been shown to be predictors of satisfaction two and five years following anterior spine surgery 26.

Although we found an improvement in the VAS arm pain score, this difference was not statistically significant. There may be a couple of reasons for this finding. Firstly, patients with predominant radiculopathy represented the minority in this cohort. While all patients with symptomatic spondylosis are expected to have relief of their arm pain symptoms after adequate surgical decompression 17, patients with predominant radiculopathy are more likely to have relief of their arm symptoms 27. Secondly, the duration of symptoms, which may have an impact on outcome, was not investigated in this study.
Recently, Tetrault et al 28 showed that increased duration of symptoms correlates with outcomes in patients with cervical myelopathy. In the setting of predominant cervical radiculopathy, the impact of a longer duration of symptoms on clinical outcome has also been demonstrated. A recent study by Burneikiene et al 29 reported that patients with cervical radiculopathy who underwent 1- to 2-level ACDF surgery within 6 months of onset of symptoms demonstrated significantly greater reductions in VAS arm pain scores compared to those with symptoms for more than 6 months. Similarly, Tarazona et al 30, in a retrospective analysis of 216 patients who underwent ACDF for radiculopathy, demonstrated that symptom durations of more than 2 years were predictive of higher neck and arm pain compared to symptom durations of less than 6 months.

Although reports on pseudarthrosis rates after multilevel ACDF vary considerably in the literature, the rate of 14% demonstrated in this study is in line with previous reports. Bolesta et al 10 demonstrated a pseudarthrosis rate of 53% among 15 patients treated surgically with three- and four-level ACDF. More recently, Kreitz et al 15 reported a 31% rate of radiographic pseudarthrosis in a retrospective analysis of 25 patients who underwent four-level ACDF. Contrary to these findings are the results reported by De la Garza-Ramos et al 12; in this retrospective analysis of 71 patients who underwent three-level ACDF and 26 patients who underwent four-level ACDF, the pseudarthrosis rate was 5.6% and 15.4%, respectively. Similarly, Wang et al 17 reported a pseudarthrosis rate of 6% in a study of 32 patients undergoing four-level ACDF. In the present study, there were no significant differences in patient-reported outcome measures between patients with and without pseudarthrosis. Recently published studies showed that, despite a high radiographic pseudarthrosis rate, patients who underwent four-level ACDF may achieve significant improvement in clinical outcomes with a low revision rate 15,31. Our results are in agreement with these studies. Nevertheless, our results should be interpreted with caution given the small size of our cohort. A larger cohort study is necessary to address this knowledge gap.

Figure: Pull-out of the anterior cervical graft at C3-4 and partial pull-out of the fixation screws at C4, with shifting of the corpectomy graft.

Dysphagia and hoarseness are commonly observed in the early postoperative period after anterior cervical spine surgery 8,32. Their incidence varies considerably and has been reported to be between 1-79% 11,12,14,33. Moreover, the incidence of these complications increases with the number of ACDF levels performed 12,13,34, due to a more extensive soft tissue exposure and swelling 35. Preventative strategies such as reduced endotracheal tube cuff pressure 36, dynamic surgical retraction 37, use of local steroids in the retropharyngeal region 38 and appropriate surgical dissection 39 have been reported to reduce the incidence of these complications. Although symptoms are transient in the majority of cases and resolve within 6 months after surgery 40, dysphagia may persist 6-24 months postoperatively in about 5-7% of cases 33,40. Postoperative dysphagia and hoarseness were the most common complications observed in this study, with an incidence of 32% and 14%, respectively. Our results are in line with previous reports in the literature 11,12,14.
Overall, 4 patients (14%) required revision surgery, 2 out of 4 at an early stage due to graft extrusion and hardware failure. While ACDF has been shown to be an effective technique for preserving stability and lordosis of the cervical spine 41, many authors have raised concerns about the efficacy of ACDF in achieving adequate decompression in patients with multilevel spondylosis 8,42; in such cases, a significant endplate resection or cervical corpectomy may be needed. However, a more aggressive decompression can be challenging, especially in elderly patients with comorbidities 43 and low bone mineral density 44, as it has been shown to be associated with a higher incidence of graft displacement or extrusion 41,45, especially at early stages after the primary operation 43. In these cases, anterior-posterior fusion can be performed, depending on the clinical situation and determined on a case-by-case basis 46. Nevertheless, in the absence of high-quality prospective studies, the impact of the addition of posterior fusion on clinical outcomes is still unexplored. Interestingly, none of the revisions were due to pseudarthrosis. This is not surprising, given the fact that in many cases pseudarthrosis can be asymptomatic 15. Nevertheless, it has to be pointed out that the true significance of asymptomatic pseudarthrosis for the revision rate has not been investigated, given the short follow-up time of this cohort. Future studies with long-term follow-up could address this question.

In the current study, only one patient required further surgery due to adjacent segment disease, 61 months after the index surgery. Adjacent segment disease may be a concern after ACDF surgery 47. With an incidence of 2.9% annually, it may affect more than 25% of all patients within ten years after index surgery 48. While multilevel ACDF has been demonstrated to have a higher revision rate compared to single-level cervical fusion, the risk of developing ASD has been shown to be significantly lower 48,49. It seems that multilevel arthrodesis may have a protective effect against adjacent segment degeneration. However, given the small size of our cohort, limited evidence regarding the incidence of ASD after multilevel ACDF can be provided by the current study.

There are some limitations to this study. First, it is retrospective in nature, with a small number of patients, and therefore results should be interpreted with caution. Secondly, all procedures were performed in a single center by two surgeons, which may limit the generalizability of the results. There was no specific patient-reported instrument used for the assessment of postoperative dysphagia. Resolution of the symptoms was based on clinical examination and patient history during the follow-ups. Further, bone fusion was assessed predominantly with X-rays, as CT scans were not used routinely in follow-up. Finally, the follow-up time in this study may not be sufficient to capture longer-term outcomes after surgery.

Conclusion

This study showed improved clinical outcomes following four-level ACDF in patients with multilevel cervical spondylosis as compared to preoperative values. However, healthcare providers should be aware of the higher pseudarthrosis and reoperation rates as demonstrated in this study.
2022-03-31T06:22:56.880Z
2022-03-29T00:00:00.000
{ "year": 2022, "sha1": "b8cf105449cffb7b7f863cff7fd06179e31540f9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-09389-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ed48b47e4fe33df5c45bdb6e55910ab7434c2a8d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268533430
pes2o/s2orc
v3-fos-license
Tuberculosis/HIV Prevalence and Treatment Success among Children Receiving Care in Two Tertiary Health Facilities within Ogun State, Nigeria

Background

About 1 million children become ill with tuberculosis every year, representing 10-12% of all cases of tuberculosis notified globally. HIV infection in children is often due to transmission from mothers to children. HIV infection in children increases their risk of having tuberculosis. Sub-Saharan Africa has one of the highest TB incidences and HIV prevalences; thus, children in this region bear a huge burden of TB/HIV infection. In addition, the treatment success rate in many countries is rarely disaggregated to evaluate children. Thus, this study aims to determine the prevalence of TB/HIV coinfection and treatment success among children with tuberculosis attending clinics in two tertiary institutions in Ogun State, Nigeria.

Methodology

The study was a retrospective cohort study of routine programme data of all children diagnosed and treated for tuberculosis from January 2015 to June 2017 in two tertiary hospitals in Ogun State, Nigeria. The hospitals were Olabisi Onabanjo University Teaching Hospital, Sagamu, and Federal Medical Centre, Abeokuta, Ogun State. Data were retrieved from the facility TB registers and analyzed using Epi Info.

Results

A total of 759 patients were registered for treatment at the two tertiary facilities between January 2015 and June 2017. Of these, 112 (14.8%) were children 0-14 years of age. Most of the children (95.54%) had pulmonary tuberculosis. Treatment success was 81.3%. About half (46.4%) of the patients were HIV positive. Age, site of disease, bacteriological diagnosis, and weight at the commencement of treatment were significantly associated with HIV status, while none of the socio-demographic variables were associated with treatment outcome.

Conclusion

There is a need to look for ways to further improve the current treatment success rate of children with tuberculosis. There should also be increased efforts to find better ways of diagnosing childhood tuberculosis. The high HIV rate among children with TB is of concern, and strategies should be put in place to prevent HIV transmission to children.
Introduction Worldwide, tuberculosis infection still causes a high burden of disease. It is the ninth leading cause of death worldwide and the leading cause from a single infectious agent, ranking above HIV/AIDS.1 In 2016, an estimated 10.4 million people fell ill with TB.1 Most of these new cases of tuberculosis occurred in the South East Asia region (45%) and the Africa region (25%).1 About 1 million children become ill with tuberculosis every year. Children represent 10-12% of all cases of tuberculosis. In 2015, 170,000 children died of tuberculosis, and there were an additional 40,000 tuberculosis deaths among children who were HIV positive.2 A child usually acquires TB infection through exposure to a sputum-positive adult. Young children below ten years of age are at risk of becoming infected with TB bacilli. They are also at high risk of developing active tuberculosis because the immune system of young children is less developed.3 It is known that tuberculosis in children can be treated and that most children tolerate the treatment well.2 The magnitude of the burden of childhood tuberculosis in most parts of the world is difficult to estimate accurately. This is because childhood tuberculosis is associated with diagnostic dilemmas as well as a lack of precision in case definitions. Children rarely produce sputum, and even when they do, examination of the sputum smear tends to produce negative results. Diagnosis is thus based on a combination of clinical symptoms and non-specific investigations.3 Childhood tuberculosis is usually paucibacillary and so does not contribute significantly to disease transmission.4 However, it is a good indicator of recent or ongoing transmission in a community, which may to some extent represent a failure of prevention measures.5 In addition, HIV infection is especially common in children due to mother-to-child transmission. As a result of HIV infection, children are at increased risk of tuberculosis.6 Sub-Saharan Africa has one of the highest TB incidences and HIV prevalences. As a result, children in this region bear a huge burden of TB/HIV infection.7 The major challenge to tuberculosis control programmes in Africa is HIV/AIDS. A rise in HIV prevalence increases tuberculosis rates. Hence, the control of tuberculosis is partly dependent on the control of HIV transmission.8 Though the WHO estimates that about 10-12% of total tuberculosis case notifications are expected to be in children, most national tuberculosis programmes report much less.9 In a study10 done in Nigeria, about 6-7% of the total tuberculosis case notifications in Lagos were in children. This may be due to the difficulty of making a diagnosis of tuberculosis in children. Of the children that were diagnosed, only 77.4% were successfully treated.10 In another study involving three states in Nigeria, the childhood tuberculosis treatment success rate was found to be 83%.8 Treatment success has been identified as an indicator of the performance of the national tuberculosis control programme. It is also very important in preventing the spread of tuberculosis infection, because successful treatment implies that transmission of the infectious agent by the affected individual has been halted.11 Therefore, keeping surveillance on childhood tuberculosis is important and instrumental in defining the epidemiology of the disease in children and identifying the predictors of treatment outcome.12 Studies have shown that children with TB/HIV coinfection are more likely to experience higher morbidity and
mortality,7,13 in addition to an increased risk of rapid disease progression,14 unsuccessful tuberculosis treatment15 and recurrence of tuberculosis infection,15,16 compared to children who had tuberculosis infection alone. Though tremendous progress has been made toward the global target of a 90% treatment success rate among tuberculosis patients in the country, this figure is rarely disaggregated by the national programme to evaluate the outcome among children. Therefore, the effectiveness of the fixed drug combination among children is rarely documented. Few studies have been done to document treatment outcomes in children within Nigeria,8,10 and this study will contribute to this objective. Thus, this study aims to determine TB/HIV prevalence and treatment success among children with tuberculosis attending clinics in two tertiary institutions in Ogun State, Nigeria, and to determine the factors associated with treatment success. Study Area The study was carried out in two tertiary hospitals in Ogun State. Ogun State is one of the 36 states in Nigeria. It was created in 1976 and has a total land area of 16,409.26 sq. km. Its boundaries are Oyo and Osun states in the north, Ondo State in the east, Lagos State in the south, and the Republic of Benin in the west. The capital is Abeokuta, which lies about 100 km north of Lagos, Nigeria's business capital. The projected population of the state is 3,728,098 according to the national census carried out in 2006. The state is divided into 3 senatorial districts, and there are two public tertiary facilities located in the state.17 Olabisi Onabanjo University Teaching Hospital is a tertiary hospital located in Sagamu, Ogun State, Nigeria. The town (Sagamu) is an urban area located about 50 km from the metropolitan city of Lagos, with an estimated population of 253,421 as of the 2006 census. The teaching hospital was established in the year 1986 with the primary aim of providing healthcare to every indigene of Ogun State and Nigeria as a whole. It is a 247-bed capacity hospital that caters to patients referred from hospitals located within and outside the state. Federal Medical Centre Abeokuta is a tertiary hospital in the state capital. It is a 250-bed specialist hospital which was established on the 21st of April 1993. The hospital provides medical services to the people of Ogun State and other neighboring states. It also serves as a referral centre for DOTS clinics in the state. Study Design The study was a retrospective cohort study of routine programme data of all children diagnosed and treated for tuberculosis from January 2015 to June 2017 in two tertiary hospitals in Ogun State, Nigeria. The hospitals were Olabisi Onabanjo University Teaching Hospital, Sagamu, and Federal Medical Centre, Abeokuta, Ogun State. TB Programme in Ogun State The National Tuberculosis Programme defined childhood tuberculosis as tuberculosis occurring in children less than 15 years of age, and any child with a cough for ≥2 weeks was considered a presumptive tuberculosis case. Samples were collected from children who could produce sputum on their own, or through gastric lavage/washout for those who could not, for the GeneXpert test or the acid-fast bacilli (AFB) test (where the GeneXpert test was unavailable). Where any of the tests was positive for tuberculosis, the patient was classified as bacteriologically confirmed pulmonary tuberculosis. Where the result was negative, other diagnostic tests such as chest radiography, tuberculin testing, and erythrocyte sedimentation rate were performed to aid the diagnosis. If the radiographic findings were consistent with the clinical signs and symptoms of tuberculosis, the child was classified as clinically diagnosed pulmonary tuberculosis. For children who were too young to produce sputum for smear microscopy or the GeneXpert test, the diagnosis was made using a tuberculosis score chart according to the national tuberculosis guideline. According to the guidelines, only a doctor is allowed to make a clinical diagnosis of tuberculosis where sputum results are unavailable. Treatment of TB in children was free because drugs were provided by the State TB and Leprosy Programme. The duration of TB treatment was 6 months (except in children who had TB affecting the bone or the meninges). The treatment regimen consisted of a 2-month intensive phase of rifampicin, isoniazid, pyrazinamide, and ethambutol, followed by a 4-month continuation phase of rifampicin and isoniazid. For children with tuberculosis of the bone or meninges, treatment was extended to a duration of 12 months: a 2-month intensive and a 10-month continuation phase. An HIV test was conducted for all presumptive tuberculosis patients in line with the national guidelines on HIV testing and counseling. The rapid test kit used first was Determine (Alere Determine™ HIV-1/2, Japan, 2012); if this was positive, the Uni-Gold™ (Trinity Biotech PLC, Wicklow, Ireland, 2013) rapid test kit was used in series. A concordant result was regarded as positive. In cases of a discordant result, STAT-PAK® was used as the tiebreaker. For children less than 18 months old, a DNA PCR test was carried out to diagnose HIV infection. The Nigeria National Tuberculosis guideline categorizes tuberculosis treatment outcomes as follows. Cure: the proportion of bacteriologically diagnosed patients that completed treatment and had at least two negative smears taken at least 1 month apart, one of which was obtained at the end of treatment. Treatment completed: the proportion of patients that completed treatment but for whom sputum examination results are not available.
Died: the proportion of patients that died before the completion of treatment. Lost to follow-up: the proportion of patients that did not take drugs for two consecutive months or more. Treatment failure: the proportion of patients who were bacteriologically diagnosed at the beginning of treatment and whose sputum smear or culture was positive at month five or later during treatment. Not evaluated: the proportion of patients for whom no treatment outcome was assigned; this includes those transferred out to another treatment unit and those whose treatment outcome is unknown to the reporting unit. Treatment success: defined as the sum of the cases that were cured and those that completed treatment. Ethical Consideration Ethical approval for this study was obtained from the Health Research and Ethics Committee of the Olabisi Onabanjo University Teaching Hospital, Sagamu, Ogun State. Permission was also sought from the heads of the relevant facilities and clinics that were used. Strict confidentiality was maintained throughout the study. Data obtained were entered into a secured computer to which only the researcher had access. Data Collection and Analysis Data were extracted from the facility TB registers and were checked for completeness and accuracy. Data were entered into Excel and exported to Epi Info for analysis. The necessary descriptive and inferential statistics were calculated. The level of significance was taken at p ≤ 0.05. Results A total of 759 patients were registered for treatment at the two tertiary facilities between January 2015 and June 2017. Of these, 112 (14.8%) were children <15 years of age. Table 1 shows the baseline characteristics of children treated for tuberculosis. About 43% of the children treated for tuberculosis were less than 5 years of age. The mean age was 6.26 ± 4.3 years. Females made up 55.36% of the total number of children treated. Most of the children (95.54%) had pulmonary tuberculosis. However, only 6 (5.36%) of the patients were bacteriologically diagnosed. Table 2 shows the HIV status of children treated for tuberculosis. About half (46.4%) of the patients treated for tuberculosis were HIV positive. Of the positive patients, only 40.4% were on antiretroviral treatment. However, almost all of those who tested positive were on co-trimoxazole preventive treatment (CPT). Table 3 shows factors associated with HIV status in children treated for tuberculosis. Age, site of disease, bacteriological diagnosis, and weight at the commencement of treatment were significantly associated with HIV status. Table 4 shows treatment outcomes of tuberculosis treatment in children. Treatment success in children treated for tuberculosis was 81.3%. A total of 7 (6.3%) children died and 6.3% were not evaluated. Table 5 shows factors associated with the treatment outcome of tuberculosis in children. None of the variables was significantly associated with treatment success.
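To make the outcome definitions above concrete, here is a minimal sketch (not from the paper; plain Python) that tallies register entries into the outcome categories defined in the guideline and computes the treatment success rate as the sum of cured and treatment-completed cases. The category labels and the exact split between "cured" and "treatment completed" are illustrative assumptions; only the totals mirror the cohort reported above.

```python
from collections import Counter

# Hypothetical outcome labels as they might appear in a facility TB register
SUCCESS = {"cured", "treatment_completed"}

def treatment_success_rate(outcomes: list[str]) -> float:
    """Treatment success = (cured + treatment completed) / all registered cases."""
    counts = Counter(outcomes)
    return sum(counts[c] for c in SUCCESS) / sum(counts.values())

# Toy register roughly mirroring the cohort above (n = 112; 91 successes,
# 7 died, 7 not evaluated, remainder lost to follow-up)
register = (["cured"] * 6 + ["treatment_completed"] * 85 +
            ["died"] * 7 + ["not_evaluated"] * 7 + ["lost_to_follow_up"] * 7)
print(f"Treatment success: {treatment_success_rate(register):.2%}")  # 81.25%, ~81.3% as reported
```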
Discussion The proportion of children with tuberculosis out of all the notified tuberculosis cases in this study was found to be 14.8%. This finding is similar to what was found in a study carried out in Ethiopia, where notified childhood tuberculosis cases made up 13% of all notified cases.18 The proportion found in this study is, however, higher than what has been recorded in some other studies. Another study done to assess treatment outcomes among children in Lagos State showed that 6.3% of cases notified during the study period were childhood tuberculosis cases.10 The finding from this study is also above the 10-12% estimated by the World Health Organization. This may be because this study was carried out in tertiary health facilities, which have the capacity and technical skills to diagnose tuberculosis in children. This study also found that only 6% of the children treated for tuberculosis were bacteriologically confirmed. This is lower than what was found in a study done in Thailand, where 32% of the cases were bacteriologically confirmed.19 In Lagos State, 20.6% of the children were bacteriologically confirmed.10 Another Nigerian study also recorded that 27.8% of the registered childhood tuberculosis cases were bacteriologically confirmed.8 The low proportion of bacteriologically diagnosed tuberculosis may be because a large proportion of the study population were children below the age of 5 years. These children can hardly produce sputum, which makes sputum-based diagnosis difficult. Also, the yield from samples collected via gastric lavage is low. Therefore, diagnosis in children relies on clinical presentation assisted by radiological and laboratory parameters. This further highlights the challenge of diagnosing childhood tuberculosis in Nigeria. In this study, the prevalence of HIV infection in children with tuberculosis was 46.4%. Most of the children who were HIV positive were ≤5 years old. This possibly reflects the state of the prevention of mother-to-child transmission of HIV in the region, since children who are less than 5 years of age and HIV positive are likely to have acquired the infection from their mothers. The prevalence found in this study is higher than what was recorded in studies carried out in Thailand and in Lagos State, where the prevalence of TB/HIV coinfection was 27% and 29%, respectively.10,19 A study done across three states in Nigeria also showed a TB/HIV coinfection rate of 14.9% in this group of patients.8 This study also found that only 40.4% of the patients with TB/HIV coinfection were on antiretroviral treatment, while 98.1% of them were on co-trimoxazole preventive therapy. The percentage of children with TB/HIV coinfection on treatment found in this study is higher than what has been estimated in low- and middle-income countries, where only about 34% of children less than 15 years old who need antiretroviral treatment are estimated to be receiving it. Nevertheless, it should be noted that this value is improving.20,21 The low rate of patients on antiretroviral treatment could be because the data were obtained from the tuberculosis registers, which were possibly not updated regarding the commencement of antiretroviral treatment by the patients at the retroviral disease clinics.
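A prevalence estimate such as the 46.4% coinfection rate above is more informative with a confidence interval around it. The sketch below (not from the paper; plain Python) computes a 95% Wilson score interval; note that the numerator of 52/112 is inferred from the reported percentage, since the paper does not state the raw count here.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# ~52 of 112 children HIV positive (46.4% as reported; numerator inferred)
lo, hi = wilson_ci(52, 112)
print(f"TB/HIV coinfection: 46.4% (95% CI {lo:.1%}-{hi:.1%})")  # ~37.5%-55.6%
```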
The treatment success rate in this study was 81.3%. This is higher than what was obtained in some other studies, with success rates of 77% in Sadama and 78.9% in Ethiopia,22,23 but lower than what was obtained in studies done in Russia and Addis Ababa, where treatment success rates were 95.1% and 85.5%, respectively.4,24 Treatment success was 78.3% in HIV-negative cases and 84.6% in HIV-positive cases of childhood tuberculosis. This is similar to what was found in a study done in Lagos, Nigeria, which recorded a treatment success rate of 79.2% in HIV-negative children with tuberculosis and 73.4% in children who had TB/HIV coinfection.10 The finding is also similar to that of a Nigerian study which found a treatment success rate of 83%.8 This study found that children who were less than 5 years of age were more likely to be successfully treated. This differs from what was found in similar studies done in Ethiopia, where children less than 5 years old had a lower treatment success rate.8,21 A study done in Lagos showed that children <1 year old had the worst treatment outcomes. However, this study did not find age to be a significant factor for treatment outcome. Like the study done in Lagos State, this study found that HIV status, gender, and type of TB were not associated with treatment success.10 Implications This study revealed that many of the patients had TB/HIV coinfection. There is therefore a need to intensify efforts for the prevention of tuberculosis among HIV-infected children. In addition, it was revealed that the majority of the children with TB/HIV coinfection were less than 5 years of age. This is highly suggestive of mother-to-child transmission of HIV, with the implication of a need for more preventive programmes in the context of perinatal transmission of HIV. Although treatment success was 81.3%, it may be argued that the clinics have not functioned optimally in ensuring clients' compliance with anti-tuberculosis medications. More effort, therefore, needs to be put into patient care to ensure satisfactory outcomes for the patients. Strengths and Limitations The baseline information about the patients in this study was obtained from records. This could have eliminated the effect of recall bias as compared to asking patients directly at the time of the study. Multivariate analysis was also done to analyze factors associated with treatment success. However, the retrospective nature of the study makes it difficult to utilize any information which was omitted or not properly documented by the health workers in the different facilities.
Table 1: Baseline characteristics of children treated for tuberculosis. Table 2: HIV status of children treated for tuberculosis. Table 3: Factors associated with HIV status in children treated for tuberculosis. Table 4: Treatment outcome of tuberculosis treatment in children. Table 5: Factors associated with treatment outcome of tuberculosis in children.
2024-03-21T05:04:42.860Z
2021-12-10T00:00:00.000
{ "year": 2021, "sha1": "98d8447b16b3d2cfd1e7c6c982780bb88e6f243d", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "98d8447b16b3d2cfd1e7c6c982780bb88e6f243d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15969706
pes2o/s2orc
v3-fos-license
Coenzyme Q10 supplementation improves metabolic parameters, liver function and mitochondrial respiration in rats with high doses of atorvastatin and a cholesterol-rich diet Background The aim of this study was to evaluate the actions of coenzyme Q10 (CoQ10) in rats fed a cholesterol-rich diet (HD) and given high doses of atorvastatin (ATV; 0.2, 0.56 or 1.42 mg/day). Methods Two experiments were done: the first without coenzyme Q10 supplementation, while in the second all groups received coenzyme Q10 (0.57 mg/day) as a supplement. After a 6-week treatment, animals were sacrificed, blood and liver were analyzed, and liver mitochondria were isolated; their oxygen consumption was evaluated in state 3 (phosphorylating state) and state 4 (resting state) in order to calculate the respiratory control (RC). Results HD increased serum and hepatic cholesterol levels in rats with or without CoQ10. ATV reduced these values, but CoQ10 lowered serum and liver cholesterol even further. Triacylglycerols (TAG) were also lower in the blood and liver of rats given ATV + CoQ10. HDL-C decreased in HD rats. Treatment with ATV maintained HDL-C levels; however, these values were lower in HD + CoQ10 compared to control diet (CD) + CoQ10. RC was lessened in the liver mitochondria of HD rats. The administration of ATV increased RC. All groups supplemented with CoQ10 showed an increment in RC. In conclusion, the combined administration of ATV and CoQ10 improved biochemical parameters, liver function and mitochondrial respiration in hypercholesterolemic rats. Conclusions Our results suggest a potential beneficial effect of CoQ10 supplementation in hypercholesterolemic rats that also receive atorvastatin. This beneficial supplementation could be combined with statin treatment in patients with high cholesterol levels. Background Hypercholesterolemia is considered a risk factor for atherosclerosis and cardiovascular disease. The World Health Organization projection for 2020 was a death rate of 71% due to ischemic cardiomyopathy [1,2]. According to the 2012 National Health Survey [3], the prevalence of hypercholesterolemia in Mexico was 13% in the adult population. As is well known, statins constitute the current therapeutic tool for hyperlipidemia. Atorvastatin is widely used by clinicians due to its competitive action on HMG-CoA reductase, which results in a decrement of plasma total cholesterol, low-density lipoprotein cholesterol (LDL-C) and very-low-density lipoprotein cholesterol (VLDL-C). It also reduces apolipoprotein B and triacylglycerol levels [4]. The hypocholesterolemic action of statins is well known in human beings and in animal models. Statins also produce lower levels of plasma cholesterol and triacylglycerol, and higher levels of high-density lipoprotein cholesterol (HDL-C) [5]. Statins are generally well tolerated; however, they may show undesirable effects such as myositis, rhabdomyolysis and liver damage, but their beneficial actions exceed their collateral effects, and so they remain the first choice for the prevention of coronary cardiovascular disease [6]. Mevalonate is a precursor of endogenous cholesterol and of other metabolites such as ubiquinone, dolichol and other isoprenoids [7]. Ubiquinone is also known as coenzyme Q 10 (CoQ 10 ); it belongs to a family of compounds that share a common structure, the benzoquinone ring, and differ in the length of their isoprenoid lateral chain. CoQ 10 is a redox component of the mitochondrial respiratory chain that synthesizes ATP.
The reduced form of CoQ 10 (ubiquinol) is a powerful lipophilic antioxidant that participates in the recycling of tocopherol and ascorbate as antioxidants [8]. Other reports suggest that statins reduce CoQ 10 biosynthesis in the liver. This reduced content could diminish oxygen consumption by the mitochondria and therefore affect the respiratory control. The aim of this study was to evaluate the role of CoQ 10 on metabolic parameters, liver function and mitochondrial respiration in rats given high doses of atorvastatin and a cholesterol-rich diet, a condition which is severely harmful in rodents [9]. An ATV 1 dose of 0.2 mg/day was employed in rats (200 g body weight) because it is equivalent, on a body-weight basis, to an ATV dose of 60 mg/day in a human being (60 kg body weight). ATV 2 (0.56 mg/day) and ATV 3 (1.42 mg/day) correspond to 2.8- and 7-fold higher doses. Results Body weight gain and liver weight as a percentage of body weight The HD rats showed a higher body weight gain compared to CD rats (p < 0.05); however, there were no significant differences in groups HD + ATV 1, 2, 3 compared with HD without CoQ 10 . HD + ATV 1 + CoQ 10 showed less weight gain (77 ± 28 g) with respect to HD + CoQ 10 (p < 0.05) (Table 1). A similar pattern was observed between HD + ATV 3 + CoQ 10 and HD + CoQ 10 (p < 0.05). Moreover, there was an important diminution of body weight gain in all groups that received CoQ 10 supplementation in comparison with the same groups without CoQ 10 (p < 0.05). On the other hand, the liver percentage relative to body weight showed a significant decrement in CD + ATV 2 (2.7 ± 0.1%) compared to CD (2.9 ± 0.2%), and a significant increment in HD (4.7 ± 1.3%), also in comparison with CD (p < 0.05). In addition, HD + ATV 2 + CoQ 10 (3.08 ± 0.13%) and HD + ATV 3 + CoQ 10 (3.2 ± 0.18%) showed a significant decrement relative to HD + CoQ 10 (3.9 ± 1.1%). The weight gain and the liver percentage relative to body weight were significantly different between the respective groups with and without CoQ 10 supplementation in their diet. Biochemical parameters The administration of a cholesterol-rich diet produced elevated levels of plasma cholesterol in rats with or without CoQ 10 supplementation (249.3 ± 9.0 mg/dL and 241.5 ± 26.4 mg/dL, respectively) compared with CD (72.2 ± 2.49 mg/dL) and with CD + ATV 2 (77.6 ± 3.6 mg/dL); on the other hand, supplementation with CoQ 10 produced significantly lower values of serum cholesterol in HD + ATV 1, 2, 3 + CoQ 10 compared with the same groups without CoQ 10 (p < 0.05) (Figure 1A, B). HDL-C in serum showed a significant decrease in HD (16.1 ± 0.4 mg/dL) with respect to CD (26.0 ± 2.6 mg/dL) (Figure 1E, F). Serum glucose levels significantly increased in HD and HD + CoQ 10 (110.2 ± 1.2 mg/dL and 99.6 ± 0.7 mg/dL, respectively) in comparison with CD and CD + CoQ 10 (88.8 ± 4.9 mg/dL and 53.4 ± 12.9 mg/dL, respectively). Treatment with HD + ATV 1 decreased glucose levels. However, a significant diminution was obtained in the CD, CD + ATV 2 , HD and HD + ATV 1 groups treated with CoQ 10 supplementation (Figure 1G, H) in comparison with non-supplemented HD. Cholesterol and triacylglycerols from the liver The administration of a cholesterol-rich diet produced a significant increase in hepatic cholesterol in HD (12.43 ± 2.84 mg/g) and HD + CoQ 10 (6.42 ± 0.86 mg/g) compared with CD (2.71 ± 0.86 mg/g) or CD + CoQ 10 (2.30 ± 0.75 mg/g).
ATV administration in HD + ATV 1, 2, 3 decreased cholesterol levels in comparison with HD, and a more significant response was obtained when CoQ 10 was supplemented. Liver triacylglycerol levels were also increased in HD (10.66 ± 0.33 mg/g) and HD + CoQ 10 (10.7 ± 0.32 mg/g) in comparison with CD (5.58 ± 0.88 mg/g) and CD + CoQ 10 (4.30 ± 0.55 mg/g). On the other hand, HD + ATV 1, 2, 3 diminished TAG levels compared with HD, but only with the highest dose was the difference significant. All the HD + ATV groups with CoQ 10 supplementation showed lower hepatic TAG levels than those observed in the HD + CoQ 10 group (Table 3). Respiratory control The respiratory control (RC) was lessened in the liver mitochondria of HD rats (2.02 ± 0.5) in comparison with CD (2.98 ± 0.06). Treatment of HD rats with ATV 1 or ATV 3 induced a significant increase in the respiratory control (2.93 ± 0.3 and 2.38 ± 0.35, respectively) in comparison with HD (2.02 ± 0.5). However, HD + ATV 2 showed a lower RC (1.85 ± 0.15) in comparison with HD (2.02 ± 0.5). All groups but HD + ATV 1 showed an increment in RC when they were supplemented with CoQ 10 (p < 0.05), in comparison with the same groups without CoQ 10 supplementation (Table 4). Figure 1: Effect of a cholesterol-rich diet and atorvastatin given to rats with and without coenzyme Q 10 supplementation on biochemical parameters. Each bar is the mean ± S.E.M. of eight animals. *P < 0.05. Statistical analysis was done by one-way analysis of variance (ANOVA), followed by the Student-Newman-Keuls test; differences between the groups were determined by Student's t test (without CoQ 10 vs. with CoQ 10 ). a, statistically different from CD; b, statistically different from HD; 1, statistically different from the same group without CoQ 10 (p < 0.05). A, C, E, G: without CoQ 10 ; B, D, F, H: with CoQ 10 . HDL-C: high-density lipoprotein cholesterol. Discussion HD administered to rats induced an increment in serum cholesterol and triacylglycerols (Figure 1A). These results are consistent with previous studies [9,10]. As expected, when ATV was administered, cholesterol and triacylglycerols showed a dose-dependent decrement (p < 0.05). Supplementation with CoQ 10 increased the effects of ATV on cholesterol levels. Rats with HD showed a slightly increased concentration of serum triacylglycerols. The administration of ATV did not reduce these TAG levels; however, all values were lower when CoQ 10 was supplemented (Figure 1C, D). In our study, a significant diminution of serum cholesterol was observed in rats that received ATV + CoQ 10 in comparison with the groups that did not receive CoQ 10 . These results support a better hypolipidemic effect of ATV in the presence of CoQ 10 . This improvement in the effect of ATV by CoQ 10 has already been reported in guinea pigs [10,11]. HDL-C showed a significant decrease in the HD group. The administration of ATV to HD rats did not increase HDL-C values but kept them similar to those observed in the CD group. However, HDL-C values were higher in groups CD + CoQ 10 , HD + CoQ 10 and CD + ATV 2 + CoQ 10 compared to CD without CoQ 10 supplementation (Figure 1E, F). These results confirm the beneficial effect of ATV on HDL-C levels and the even greater benefit of CoQ 10 supplementation, at least in the groups CD + CoQ 10 , HD + CoQ 10 and CD + ATV 2 + CoQ 10 .
It is well known that statins inhibit cholesterol biosynthesis in the liver, decrease the intracellular cholesterol content, augment low-density lipoprotein receptor (LDL-R) synthesis as well as cholesterol uptake by the liver, and diminish serum total cholesterol concentration [12]. In addition, statins increment HDL-C levels through an increase of apoprotein A synthesis in the liver [13] and a reduced activity of cholesterol ester transfer protein (CETP). Mabuchi et al. [14] reported that co-administration of ATV-CoQ 10 favored a significant increase of HDL-C in hypercholesterolemic patients. Singh et al. [15] observed an important increment of HDL-C in patients that received CoQ 10 . However, no increase in HDL-C has been reported in patients who received simvastatin and CoQ 10 . Nevertheless, it is not clear how CoQ 10 exerts this synergistic effect on ATV action. It is well known that CoQ 10 and cholesterol are synthesized by the same pathway and that high ATV doses produce a significant decrement in plasma CoQ 10 levels [14,15]; this decrement in serum CoQ 10 is related, directly or indirectly, to the potential liver harm produced by statin treatment [16]. On the other hand, CoQ 10 administration may inhibit the expression of the apo A-I receptor, increasing apoprotein A-I and increasing HDL-C levels [15]. Our results show that rats that received atorvastatin (0.2 mg/day) and CoQ 10 had lower levels of serum glucose than the same group without CoQ 10 (Figure 1G, H). In addition, CoQ 10 regulates glucose levels through a diminution of oxidative stress [17]. On the other hand, other reports have shown that ATV lowers serum cholesterol, increases blood glucose levels and raises insulin resistance [18]. Altogether, these data suggest that co-administration of CoQ 10 and ATV improves glucose metabolism in the hypercholesterolemic state. Some reports indicate that CoQ 10 administration improves pancreatic beta cell function, increases insulin sensitivity and preserves mitochondrial function in the pancreas [19]. Moreover, CoQ 10 diminishes lipoperoxidation and raises glucose uptake. These results suggest that CoQ 10 improves glucose metabolism in hypercholesterolemia under atorvastatin treatment. An increment in serum ALT and AST activity was also observed in ATV-treated rats. Previous studies have likewise shown an increase in serum aminotransferases (ALT and AST) in rats that received HD and ATV [20,21]; these results were related to liver damage. In accordance with these results, other animal models with a hypercaloric diet are predisposed to hyperlipidemia and liver steatosis [21,22]. On the other hand, a study employing ATV in rats did not show changes in the activity of serum aminotransferases [10]. The high serum aminotransferase levels in rats fed a cholesterol-rich diet are related to liver damage. This harm is due to membrane damage in hepatocytes, which lessens the antioxidant and detoxification capacity of the liver [21]. Other studies have reported higher transaminase activity produced by statin administration to rats [21,22]. On the contrary, our study showed a slight decrement of AST and ALT activity in ATV-CoQ 10 treated animals compared with those that received only ATV. Also, Mabuchi et al. [14] observed a diminution of AST and ALT in patients treated with ATV and CoQ 10 .
Moreover, Abbas and Sakr [23] reported a diminution of AST and ALT activity in guinea pigs that received simvastatin-CoQ 10 , compared with animals that received only simvastatin. Taken together, these results suggest a protective effect of CoQ 10 on the hepatocytes of rats fed a cholesterol-rich diet. In our study, it was observed that ATV lessened cholesterol and triacylglycerol concentrations in the liver in a dose-dependent manner in hypercholesterolemic rats. Several reports suggest that this increment induced by HD contributes to liver steatosis, and that dietary fatty acids and cholesterol promote lipid accumulation in the hepatocytes. These cells express the transcription factor PPAR-α, which allows fatty acid oxidation in mitochondria, microsomes and peroxisomes [24]. As a result, fatty acid oxidation products (hydrogen peroxide, superoxide anion and lipid peroxides) are generated and induce lipid peroxidation and oxidative stress [25]. Several studies have shown that a cholesterol-rich diet given to rats produces a fatty liver, hypertrophy of the liver and macroscopic alterations [25,26] as a consequence of hepatocyte cholesterol saturation; de novo cholesterol synthesis decreases and, consequently, LDL uptake by its receptors is diminished. Results from other studies show a lower activity of HMG-CoA reductase and lower expression of LDL receptors in the livers of rats fed a high-fat diet [27]. Our study showed a significant decrease of cholesterol and triacylglycerol levels in the liver of animals treated with ATV and CoQ 10 , compared with those rats that received only ATV. These results coincide with other reports [11] suggesting that CoQ 10 improves the hypolipidemic action of statins. As already mentioned, the mechanism by which CoQ 10 enhances statin action is not currently known. Some studies suggest CoQ 10 influences the negative feedback regulation of hepatic cholesterol. Moreover, cholesterol metabolism in the liver is mediated by lanosterol 14α-demethylase (CYP51) through the sterol regulatory element-binding proteins (SREBPs) [28,29]. Previous reports studying the effect of the reduced form of CoQ 10 on liver cholesterol metabolism showed an antagonistic action on ligand binding to the liver X receptor (LXR) [30]. Liver LXRs induce SREBP-1c, a transcription factor that controls the expression of several genes involved in cholesterol biosynthesis and its reverse transport. On the other hand, the amount of dietary cholesterol absorbed in the intestine is controlled by a transporter family (ABC) localized at the enterocyte membrane. These proteins export cholesterol from the enterocytes into the intestinal lumen. The hydroxyl group of the reduced form of CoQ 10 is important for this antagonistic action on ABC transporter genes through the LXR ligand [31]. This mechanism may explain the cholesterol diminution in serum and liver observed in all animals that received ATV and CoQ 10 in our study. It is generally accepted that a cholesterol-rich diet produces structural mitochondrial alterations in the liver and a higher production of reactive oxygen species (ROS) with hepatocellular damage [31,32]. Electron microscopy studies in rats with non-alcoholic fatty liver show scarce mitochondria that are larger in size, deformed and hypodense, with paracrystalline inclusions, hepatosteatosis and altered fatty acid oxidation [33]. In our study, HD produced a lower respiratory control.
Other authors suggest that a high-lipid diet induces deterioration of complex I (NADH:ubiquinone oxidoreductase) and complex II (succinate dehydrogenase) of the mitochondrial chain [10]. Other reports suggest that statins such as pravastatin lessen the mitochondrial respiratory control, affecting complex I and complex IV (cytochrome c oxidase) in skeletal muscle [33,34]. In addition, simvastatin induces myotube atrophy and cell loss associated with impaired ADP-stimulated maximal mitochondrial respiratory capacity and mitochondrial oxidative stress [35]. It is known that ATV reduces the cholesterol-phospholipid ratio in the cell membrane, raising its fluidity and the activity of the Na+/K+ ATPase [36]. All these membrane modifications affect the activity of enzymes participating in the mitochondrial electron transport chain, probably altering its bioenergetic function. Our results support the idea that the diminution of mitochondrial respiration observed in animals treated with ATV can be attributed to lower levels of CoQ 10 . The mitochondrial respiratory chain, and particularly complex I and complex III (ubiquinone:cytochrome c oxidoreductase), is able to produce superoxide anion from oxygen. In hepatocytes from normal rats this production is low and does not interfere with respiratory chain activity, as it is counterbalanced by the mitochondrial protective antioxidant system. On the other hand, CoQ 10 is an invaluable component of the mitochondrial respiratory chain [36,37], and a diminution in its availability certainly affects energy metabolism. It is known that the administration of CoQ 10 and simvastatin increased the activity of complex I in cardiomyocytes, whereas it decreased with simvastatin alone. On the other hand, there is evidence suggesting that the significant decrease in ATP concentration in simvastatin-treated rats was due to CoQ 10 deficiency [38]. Our results show a higher mitochondrial RC in all groups that received ATV and CoQ 10 . In the same way, Kimura et al. [39] reported an increase in muscle fiber contraction in rats that received CoQ 10 , attributed to improvement of the cell membrane. A study suggests CoQ 10 may reduce symptoms related to heart failure and increase energy production in heart muscle [40]. Statins sometimes cause muscle pain, and oral CoQ 10 might reduce this pain [40,41]. In our study, we observed that ATV decreased mitochondrial respiration but that ATV plus CoQ 10 improved mitochondrial function using succinate as substrate. Statins have been associated with a reduction in serum and muscle tissue coenzyme Q 10 levels that may play a role in statin-induced myopathy. Aged people appear to be more susceptible to coenzyme Q 10 deficiency. Athletes also require the most efficient oxygen consumption by mitochondria for their performance, and are likewise more susceptible to CoQ 10 deficiency. However, there is no consensus regarding the effectiveness of CoQ 10 supplementation. It seems that those who would gain the greatest benefit from this supplementation are hypercholesterolemic patients. Conclusions Our results support the conclusion that the combination of ATV and CoQ 10 improves biochemical parameters and liver mitochondrial respiratory function in hypercholesterolemic rats given high ATV doses. These results have implications when considering statin safety and effectiveness. Supplementation with CoQ 10 may add beneficial effects in hypercholesterolemic patients, being harmless for human beings and also having a hepato-protective action.
Animals and diets Male Wistar rats were obtained from the Unidad de Producción, Cuidado y Experimentación Animal (UPCEA), División Académica de Ciencias de la Salud (DACS), Universidad Juárez Autónoma de Tabasco (UJAT), verified by the Secretaría de Agricultura, Ganadería y Recursos Pecuarios (SAGARPA, 2005). The experimental groups included HD + ATV 1 (atorvastatin 0.2 mg/day), HD + ATV 2 (atorvastatin 0.56 mg/day) and HD + ATV 3 (atorvastatin 1.42 mg/day). All animals were given free access to water and diets during the six-week experimental period. Diets were freshly prepared each day with ground food. Body weight was assessed once a week. All animals were kept under the above-mentioned experimental conditions for a 6-week period. At the end of treatment and after a 12-h food withdrawal, rats were sacrificed by decapitation. The liver was removed and weighed; 0.5 g was used for biochemical determinations, and the remaining liver tissue was used for the assay of mitochondrial respiratory function. Biochemical parameters Blood was collected, and serum was immediately frozen and stored at −70°C until the biochemical determinations were performed. Serum levels of glucose, cholesterol, triacylglycerols, high-density lipoprotein cholesterol (HDL-C), aspartate aminotransferase (AST), and alanine aminotransferase (ALT) were analyzed using a Clinical Chemistry System from Random Access Diagnostics. Cholesterol and triacylglycerols from the liver Liver lipids were extracted according to the procedure of Folch et al. (1957) [42], whereas triacylglycerol and cholesterol concentrations were measured using enzymatic colorimetric determinations according to diagnostic kits from BioSystems Laboratories. Mitochondria isolation Hepatic mitochondria were harvested by centrifugation, washed twice with 250 mM sucrose, 0.5 mM HEPES, 0.5 mM EGTA (SHE) buffer and resuspended in SHE, pH 7.2, at a final ratio of 5 ml/g wet weight. Subsequent steps were carried out in the same buffer at 4°C, and mitochondria were isolated by differential centrifugation. Briefly, cell debris was eliminated by centrifugation at 3000 g for 10 min; the mitochondrial pellet was obtained by spinning the supernatant for 10 min at 12000 g, washed once to eliminate cytosolic contamination, and resuspended in SHE buffer to a final protein concentration of 10-30 mg/mL. Protein determination was performed using the Bradford (1976) method [43]. Oxygen consumption Respiratory measurements were carried out in 3.5 ml of air-saturated medium containing 5 mM succinate, 2 mM MgCl 2 , 2 mM H 3 PO 4 , 2 mM EGTA, 30 mM HEPES, 0.1% BSA, pH 7.2, at 24°C. Oxygen consumption was determined using a Clark-type oxygen electrode. Data are expressed as the respiratory control ratio (RCR), the ratio between state 3 and state 4, which indicates respiratory coupling in response to ADP availability [44] (1967). Statistical analysis Comparisons between means were performed using one-way analysis of variance (ANOVA), followed by the Student-Newman-Keuls test; differences between groups were determined by Student's t test (without CoQ 10 vs. with CoQ 10 ). Differences were considered statistically significant when p < 0.05. For the post hoc calculation of the statistical power of the ANOVA test, we used the G*Power 3.0.10 software (Franz Faul, Universität Kiel, Germany). Using a 20% difference between groups (i.e., effect size), an α level of 0.05, a total sample size of 96 and 8 groups, with the resulting value of 16 animals needed, the power was 1.00.
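As an illustration of two calculations described in the methods above, the sketch below (not from the paper; plain Python) computes the respiratory control ratio from state 3 and state 4 oxygen-consumption rates and checks the body-weight-based dose equivalence used to choose the ATV 1 dose. The dose values are taken from the text; the example oxygen-consumption rates and the function names are ours.

```python
def respiratory_control_ratio(state3_rate: float, state4_rate: float) -> float:
    """RCR = state 3 (phosphorylating, with ADP) / state 4 (resting) O2 consumption.
    Higher values indicate tighter coupling of respiration to ATP synthesis."""
    return state3_rate / state4_rate

def dose_per_kg(dose_mg_per_day: float, body_weight_kg: float) -> float:
    """Body-weight-normalized dose in mg/kg/day."""
    return dose_mg_per_day / body_weight_kg

# Example RCR from arbitrary O2 consumption rates (natoms O/min/mg protein)
print(respiratory_control_ratio(89.4, 30.0))  # ~2.98, like the CD group mean

# Dose equivalence behind ATV 1: 0.2 mg/day in a 0.2 kg rat vs 60 mg/day in a 60 kg human
print(dose_per_kg(0.2, 0.2), dose_per_kg(60, 60))  # both 1.0 mg/kg/day
```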
2016-05-12T22:15:10.714Z
2014-01-25T00:00:00.000
{ "year": 2014, "sha1": "4f7e5a2872c74bdc161cf8874a1c26670d44e79d", "oa_license": "CCBY", "oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/1476-511X-13-22", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "727f4c0a601aa1adce37f753fd4ea3ea246c32e2", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
2838667
pes2o/s2orc
v3-fos-license
Sustained low peritoneal effluent CCL18 levels are associated with preservation of peritoneal membrane function in peritoneal dialysis Peritoneal membrane failure (PMF) and, ultimately, encapsulating peritoneal sclerosis (EPS) are the most serious peritoneal dialysis (PD) complications. Combining clinical and peritoneal transport data with the measurement of molecular biomarkers, such as the chemokine CCL18, would improve the complex diagnosis and management of PMF. We measured CCL18 levels in 43 patients' effluent and serum at baseline and after 1, 2, and 3 years of PD treatment in a retrospective longitudinal study, and evaluated their association with PMF/EPS development and peritoneal risk factors. To confirm the trends observed in the longitudinal study, a cross-sectional study was performed on 61 isolated samples from long-term (more than 3 years) patients treated with PD. We observed that the patients with no membrane dysfunction showed sustained low CCL18 levels in peritoneal effluent over time. An increase in CCL18 levels at any time was predictive of PMF development (final CCL18 increase over baseline, p = .014; and maximum CCL18 increase, p = .039). At year 3 of PD, CCL18 values in effluent under 3.15 ng/ml showed an 89.5% negative predictive value, and higher levels were associated with later PMF (odds ratio 4.3; 95% CI 0.90–20.89; p = .067). Moreover, CCL18 levels in effluent at year 3 of PD were independently associated with a risk of PMF development, adjusted for the classical (water and creatinine) peritoneal transport parameters. These trends were confirmed in a cross-sectional study of 61 long-term patients treated with PD. In conclusion, our study shows the diagnostic capacity of chemokine CCL18 levels in peritoneal effluent to predict PMF and suggests CCL18 as a new marker and mediator of this serious condition as well as a new potential therapeutic target. Introduction Peritoneal dialysis (PD) is a technique for the substitution of kidney function. Due to the use of a biological membrane with bioincompatible dialysis solutions, the peritoneum suffers alterations that limit its use and long-term viability. This bioincompatibility ultimately leads to ultrafiltration failure (UFF), irreversible structural fibrosis, and, potentially, encapsulating peritoneal sclerosis (EPS). Clinical risk parameters and functional data of peritoneal membrane transport, such as high/fast small solute transport and lower free water transport, are conventionally used to predict functional and structural peritoneal membrane damage [1,2,3,4]. Although a continuum between peritoneal fibrosis and EPS has not been established, treatment for more than 4 years with bioincompatible PD solutions is known to present the highest risk for EPS [2,5]. Episodes of peritonitis [1,2,6], glucose degradation products (GDPs) [1,7], epithelial to mesenchymal transition (EMT) of mesothelial cells [8], and time are factors promoting peritoneal dysfunction.
Early clinical signs of peritoneal transport alterations have not been sufficient to establish scientifically guided evidence for early EPS diagnosis [2]. The possibility of combining this approach with molecular biomarkers would improve the detection and prevention of peritoneal functional and structural failure. Peritoneal membrane failure has been associated with peritoneal membrane fibrosis and EMT [9]. Alternatively activated macrophages (M2) can contribute to the fibrotic process of the peritoneum under PD [10]. M2 macrophages have a specific phenotype, the chemokine CCL18 being one of the hallmarks of this subpopulation [11]. In addition to the generation of extracellular matrix [12], M2 macrophages can stimulate fibroblast proliferation via CCL18 [10,11,13]. Moreover, coculture of M2 macrophages with fibroblasts enhances the production of collagen, and this is partly dependent on CCL18 [13,14]. The chemokine CCL18 is produced by peritoneal M2 macrophages and is notably present at easily measurable levels in the effluent of patients treated with PD [10,15]. High plasma CCL18 levels have been implicated in progressive fibrosing disorders such as pulmonary and liver fibrosis [14,16]. We found that high levels of CCL18 in peritoneal effluent were associated with UFF and with later development of EPS in a small number of patients [9]; similar data have been reported by other authors [15]. Our objective was to analyze the predictive capacity of CCL18 levels in serum and peritoneal effluent to herald peritoneal membrane damage, as shown by functional data [1], or the development of EPS. We performed a longitudinal retrospective study based on our collection of frozen peritoneal effluent and serum samples from patients treated with PD for more than 2 years. To confirm the trends observed in the longitudinal study, a cross-sectional study was performed on 61 isolated samples from long-term patients treated with PD. Study participants A retrospective longitudinal study was performed as intention-to-treat on 43 patients from the peritoneal dialysis unit of La Paz University Hospital (Madrid, Spain) between December 1999 and December 2010, who underwent more than 2 years of PD treatment. Demographic data are shown in S1 Table. Samples and clinical data were collected at 4 time points: baseline (defined as the first 6 months of PD) and 1, 2, and 3 consecutive years. The exclusion criteria were age under 18 years, solid or hematological tumor, viral hepatitis, acute liver disease, acute inflammatory processes, significant allergic affectation, pulmonary fibrosis or significant organ fibrosis, deposit disease, or significant connective tissue or dermatologic pathology.
Peritoneal functional data from these patients are described in Table 1. It is noteworthy that 13 patients required a high daily peritoneal glucose load to remain in an appropriate extracellular volume status at year 3, with average peritoneal functional data (dialysate/plasma creatinine ratio [Cr D/P], mass transfer area coefficient of creatinine [Cr-MTAC] and of urea [U-MTAC]) stable over the follow-up. Peritoneal membrane failure (PMF) and EPS cases were diagnosed during or after the study time. Ten patients eventually developed PMF. The mean follow-up of the PMF group was 46.2 (range 22-61) months. Similarly, the mean follow-up of patients without PMF was 35.2 (range 13-65) months (nonsignificant). No demographic differences were found between the groups. Only 2 patients were diagnosed with EPS, at 80 and 91 months. Sixty-one long-term (more than 3 years) patients treated with PD were analyzed in a cross-sectional study (S2 Table). The last sample available before the cessation of PD was chosen for the analysis. The mean sampling time was 57.7 (range 30.4-152.9) months. Peritoneal function and risk factors for PMF corresponding to these patients are shown in Table 2. Twenty-two patients developed PMF at a median time of 55.9 (range 19-124) months; of these, 6 developed EPS at a median of 106.3 months of PD treatment (range 60-147 months). The study was approved by the Research Ethics Committee of La Paz University Hospital (PI12_0024). All clinical investigations were conducted according to the principles expressed in the Declaration of Helsinki, and written informed consent was obtained from all the patients. Definition of peritoneal membrane failure We defined peritoneal membrane failure (PMF) as Cr-MTAC values over 12 ml/min or Cr D/P over 0.8 (both representing high/fast transport) and/or UF capacity less than 400 ml/4 h, all appearing after 2 or more years of PD treatment without a specific precipitating event [1]. Definition of encapsulating peritoneal sclerosis EPS was diagnosed by direct surgical or histological visualization, the presence of two major criteria (compatible findings on tomography and ultrasonography), or one major and two minor criteria (symptoms of bowel pseudo-obstruction or obstruction, high peritoneal transport). ELISA CCL18 and PAI-1 CCL18 was quantified in serum and in 4-hour dwell peritoneal effluents using a DuoSet ELISA Development System (R&D Systems Europe, UK), according to the manufacturer's instructions. On average, 1/1000 and 1/50 dilutions were analyzed for serum and effluent samples, respectively. PAI-1 was quantified in undiluted effluent samples using a DuoSet ELISA Development System (R&D Systems Europe, UK), according to the manufacturer's instructions. Statistical analysis The statistical analysis was performed using SPSS-15 software (IBM, Inc., Chicago, IL, USA). We performed a descriptive analysis of serum and effluent samples at each time point. Parametric or nonparametric tests were applied according to the distribution of the samples. The data on serum and effluent CCL18/PAI-1 levels in the longitudinal study were analyzed using a linear mixed-effects regression model (LMERM), considering time and the intercept as random effects. The development of PMF or EPS in relation to the evolution of effluent CCL18 concentrations was investigated by Kaplan-Meier analysis. A Cox hazard analysis was used to analyze the association between the behavior of CCL18 in peritoneal effluent and the subsequent development of PMF or EPS.
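For readers who want to reproduce the survival portion of this analysis, a minimal sketch in Python using the lifelines package is shown below. The original work used SPSS-15, so this is an equivalent open-source workflow rather than the authors' code; the column names and data values are invented placeholders.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient table: follow-up in months, PMF event indicator,
# and effluent CCL18 (ng/ml) at year 3 of PD
df = pd.DataFrame({
    "months":   [22, 61, 35, 48, 13, 65, 46, 30, 55, 40],
    "pmf":      [1,  1,  0,  1,  0,  0,  1,  0,  1,  0],
    "ccl18_y3": [4.7, 5.1, 3.8, 2.9, 2.1, 2.6, 4.2, 3.0, 6.0, 2.4],
})

# Kaplan-Meier curve for patients above the 3.15 ng/ml CCL18 cut-off
high = df["ccl18_y3"] >= 3.15
kmf = KaplanMeierFitter()
kmf.fit(df.loc[high, "months"], df.loc[high, "pmf"], label="CCL18 >= 3.15")
print(kmf.survival_function_)

# Cox proportional-hazards model with CCL18 as a continuous covariate
cph = CoxPHFitter()
cph.fit(df[["months", "pmf", "ccl18_y3"]], duration_col="months", event_col="pmf")
cph.print_summary()
```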
The diagnostic capacity of CCL18/PAI-1 in effluent at the third year of PD treatment for the prediction of PMF was evaluated using the receiver operating characteristic (ROC) curve. Youden's index was used to choose the point at which sensitivity and specificity were simultaneously maximized as the optimal cut-off point. To analyze the additional value of CCL18 in effluent, we conducted a multivariate analysis adjusted for classical membrane failure parameters (Cr D/P, Cr- or U-MTAC, and UFF). Serum CCL18 levels during the longitudinal study Serum CCL18 values during follow-up are shown in S3 Table and S1A Fig. The average serum CCL18 concentration was 151.59 ± 72.06 ng/ml. High individual variability was detected, and no significant changes were found over time using the LMERM. Serum CCL18 levels did not show any significant association with clinical features, the use of biocompatible solutions, peritonitis, abnormal peritoneal transport, or treatment with immunomodulatory agents. Also, no significant association was found between serum CCL18 levels and the development of EPS or PMF. Effluent CCL18 levels during the longitudinal study The CCL18 values in effluent are shown in S3 Table and S1B Fig, none demonstrating overall significant changes during the follow-up (LMERM). However, some clinical situations were identified in which the concentration of CCL18 tended to increase. Patients with no membrane dysfunction showed sustained low levels of CCL18 in peritoneal effluent over time An observational analysis of the raw data gathered from the 43 retrospective longitudinal study patients revealed a group of 11 patients whose effluent CCL18 levels remained without relevant changes throughout the study, with values lower than the mean obtained at each time point and below or close to the mean effluent CCL18 values determined in patients with no PMF in a previous study [10]. Therefore, two groups were defined: group 1, comprising the 11 patients with stable low CCL18 values in effluent, and group 2, including the remaining 32 patients (those with initial values higher than the mean baseline values and those whose effluent CCL18 levels increased over the time under study) (S2 Fig). Demographic data are shown in S4 Table. Peritoneal functional data for both groups are shown in Table 3. The differences in the concentration of CCL18 in peritoneal effluent between the two groups were significant throughout the study (Fig 1A). A Kaplan-Meier analysis showed that no patient within group 1 (low and stable effluent CCL18 levels) developed PMF within the 3 years, whereas patients with higher or growing levels of CCL18 in effluent eventually did (Fig 1B). The elevated CCL18 effluent levels observed in group 2 were not associated with an increased peritoneal membrane transport of small solutes or proteins (Fig 1C). An increase in CCL18 in effluent heralds peritoneal membrane dysfunction Given that sustained low CCL18 effluent concentrations appeared to be associated with improved peritoneal membrane survival, we explored whether an increase in effluent CCL18 at any time heralded later membrane dysfunction. Patients who developed peritoneal membrane failure showed higher effluent levels of CCL18 Higher concentrations of CCL18 were found during the entire follow-up period in effluents from the 10 patients who eventually developed PMF, with significant differences at the second year (4.23 vs. 2.68 ng/ml; p = .024) and third year (4.70 vs. 2.92 ng/ml; p = .006) of PD treatment, relative to patients without PMF (Fig 3A).
It is of note that the 2 patients included in the longitudinal study who finally developed EPS also showed higher CCL18 in effluent at baseline and during the first and second years of PD treatment, compared with the rest of the patients (data not shown). In addition, in the cross-sectional study, we observed significantly higher concentrations of CCL18 in peritoneal effluent. Diagnostic capacity of CCL18 levels in effluent A ROC curve was built to explore the diagnostic capacity of CCL18 in effluent at the third year of PD treatment to identify patients at risk of PMF. The area under the curve was 0.776 (95% CI 0.611-0.941). The optimal effluent CCL18 value that simultaneously maximized sensitivity (80%) and specificity (68%) for predicting the subsequent development of PMF was 3.15 ng/ml or higher at year 3, with an 89.5% negative predictive value (NPV) (Fig 4A). Using Cox regression analysis, we also observed an association between effluent CCL18 levels higher than 3.15 ng/ml at year 3 and a late diagnosis of PMF/EPS (OR 4.33, 95% CI 0.90-20.89; p = .067). Finally, using a multivariate analysis, we observed that effluent CCL18 levels at year 3 were independently associated with the risk of development of PMF/EPS, adjusted for the classical peritoneal transport parameters (Cr-MTAC, Cr D/P, U-MTAC, UF), which were the best predictors of this outcome (Fig 4B). These results were confirmed in the cross-sectional study of long-term patients (Fig 4C). Analysis of PAI-1 effluent levels It has recently been reported that effluent PAI-1 may help in monitoring peritoneal fibrosis [18]. Effluent levels of PAI-1 were evaluated in order to compare the predictive ability of this soluble factor with that of CCL18. In the longitudinal study, we observed a positive correlation between PAI-1 and CCL18 effluent levels at baseline (r = .31; p = .04) and at year 2 on PD (r = .42; p < 0.001). However, this correlation was not found in the samples at years 1 and 3 of PD. Nonetheless, in the cross-sectional study, CCL18 and PAI-1 effluent levels appeared to be related (r = .34; p < 0.001). The values obtained in the longitudinal study are shown in Fig 5A. No significant differences were detected at the different time points. Similar to CCL18, higher effluent levels of PAI-1 were found on average at all time points in the longitudinal study in patients who developed PMF, although the differences were not statistically significant (Fig 5B). In the cross-sectional study, significantly higher levels of PAI-1 were found in patients who developed PMF (1.03 vs. 0.45 ng/ml; p = .0038) (Fig 6A). A trend toward higher effluent values was also found in patients who developed EPS, although the differences were not statistically significant (Fig 6B). Time-specific ROC curves were built. Effluent concentrations of PAI-1 at the third year of PD treatment showed a lower capacity than CCL18 concentrations to predict PMF (AUC 0.656; 95% CI 0.41-0.90; not significant). Baseline concentrations of PAI-1 were evaluated as well, also showing lower diagnostic power compared to effluent CCL18 levels at year 3 of PD. The combination of CCL18 and PAI-1 did not improve the diagnostic power, likely because the large quantitative differences in the concentrations of the two factors result in a small contribution of PAI-1 to the combined variable, although it is also possible that at that time point CCL18 is simply more relevant as a predictor of peritoneal survival. Effluent appearance rates were also evaluated for comparative purposes, showing no improvement in the diagnostic capacity of CCL18 or PAI-1 (Table 4).
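A minimal sketch of the cut-off selection described above is shown below, using scikit-learn rather than SPSS (which the authors used): Youden's index J = sensitivity + specificity − 1 is maximized over the ROC thresholds, and the NPV is then computed at the chosen cut-off. The data here are synthetic placeholders, so the resulting numbers illustrate the procedure only, not the study's values.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic example: y = 1 if the patient later developed PMF,
# score = effluent CCL18 (ng/ml) at year 3 of PD
y     = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
ccl18 = np.array([2.1, 2.4, 2.6, 2.9, 3.0, 3.8, 3.3, 4.2, 4.7, 6.0])

fpr, tpr, thresholds = roc_curve(y, ccl18)
print("AUC:", roc_auc_score(y, ccl18))

# Youden's J statistic: J = sensitivity + specificity - 1 = tpr - fpr
j = tpr - fpr
cutoff = thresholds[np.argmax(j)]
print("Optimal cut-off:", cutoff)

# Negative predictive value at the chosen cut-off: NPV = TN / (TN + FN)
pred = ccl18 >= cutoff
tn = np.sum((~pred) & (y == 0))
fn = np.sum((~pred) & (y == 1))
print("NPV:", tn / (tn + fn))
```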
Discussion

In this study, we demonstrated the diagnostic capacity of the chemokine CCL18, evaluated in peritoneal effluent, to predict dysfunction of the peritoneal membrane under PD. The most remarkable result was that sustained low peritoneal CCL18 concentrations were observed among patients with no membrane dysfunction at the medium term. In contrast, patients who developed PMF showed higher or progressively increasing CCL18 effluent levels. The increment observed in CCL18 effluent levels was not mirrored in CCL18 serum levels and was not associated with an increase in peritoneal membrane small-solute or protein transport (Cr D/P, Cr-MTAC, U-MTAC, peritoneal protein losses). This suggests a major local production of CCL18 in the peritoneum. At year 3 of PD treatment, CCL18 levels above 3.15 ng/ml accurately predicted ultimate membrane dysfunction (89.5% negative predictive value), independent of the peritoneal transport parameters (Cr-MTAC, Cr D/P, U-MTAC, UFF) at that time. To date, these parameters have been considered the primary criteria with a capacity to predict worsening of the peritoneal membrane's functional status [2]. These criteria are now amended by our findings and by other reports demonstrating that specific biomarkers related to inflammation or repair processes, such as interleukin-6 and plasminogen activator inhibitor (PAI)-1, precede severe changes in peritoneal function [18,19]. However, contrary to other conditions such as idiopathic pulmonary fibrosis [20], serum levels of CCL18 were not predictive of peritoneal membrane status.

Peritoneal functional changes such as UFF with fast small-solute transport and lower free water transport have been shown to herald EPS [1,4,21]. This disorder is based on denser collagen at the peritoneal interstitium [21], limiting interstitial water circulation despite preservation of AQP1 in the peritoneal capillary endothelium, the only factor related to date to free water transport in the peritoneum. Recently, isolated biomarker approaches such as the evaluation of PAI-1 in peritoneal effluent have been successful in predicting EPS [18]. The mechanism suggested for this marker/mediator is enhanced fibrin deposition and collagen secretion stimulated by PAI-1 in mesothelial cells. To investigate the power of CCL18 as a biomarker indicating the status of the peritoneal membrane, we defined PMF as the combination of high solute transport and deficient UF capacity, with EPS being the extreme condition. The relationship between CCL18 in effluent and PMF was investigated in two groups of patients: a first group from whom serial samples were available (longitudinal study), and a second group of patients undergoing long-term PD treatment from whom late samples were available (cross-sectional study).
The primary finding of the longitudinal study came from a subset of patients who maintained low and stable CCL18 levels in effluent throughout PD treatment: none of these patients developed PMF during the follow-up period. We also found that an increase in CCL18 in effluent at any time was predictive of PMF development. Moreover, effluent levels of CCL18 at the second and third year of PD treatment were significantly higher in the patients who developed PMF. In the cross-sectional study, we also confirmed significantly higher effluent concentrations of CCL18 in the patients who ultimately developed PMF and/or EPS, in agreement with previous findings [10,15]. Because it is probably not feasible to evaluate CCL18 periodically in these patients, a ROC curve was used to analyze the predictive value of effluent CCL18 for development of PMF in samples from patients at the third year of PD treatment. CCL18 concentrations above 3.15 ng/ml were found to predict the subsequent development of PMF with 80% sensitivity and 89.5% NPV. We also evaluated the capacity of effluent concentrations of PAI-1 to predict the development of PMF at different time points. The results revealed that, in our patients, effluent CCL18 concentrations after 3 years of PD treatment are a better tool to predict the development of PMF. Furthermore, CCL18 values were independent of the classical transport parameters (Cr-MTAC, Cr D/P, U-MTAC, UFF) accepted as the best predictors of this outcome [2,17].

From this point of view, any factor able to increase CCL18 at any time would be a risk factor for the development of PMF. It is of note that the 2 patients with DM1 showed higher CCL18 values in effluent not only during the later years on PD but also at early time points. Other early events, such as peritonitis or use of high-glucose dialysis solutions, were also associated with higher CCL18 values (data not shown).

Furthermore, the fact that a group of patients with sustained low CCL18 in effluent did not develop peritoneal membrane dysfunction during the follow-up suggests that CCL18 is not only a biomarker, but may also be involved in the cellular and molecular mechanisms leading to PMF. Previous studies suggest that PMF is related to fibrotic processes of the peritoneal membrane [22], and CCL18 has previously been related to fibrosis in various diseases [20,23-25]. In addition, CCL18 is able to promote the secretion of collagen [26] and the proliferation of fibroblasts [10]. Peritoneal M2 macrophages appear to be the primary source of CCL18 in patients treated with PD [10]. Factors able to bias macrophage activation toward an M2 phenotype are known to increase the expression and secretion of CCL18 [11]. Recently, CD163+ M2 macrophages have been shown to be one of the dominant cell populations in EPS peritoneal biopsies [27]. In line with this finding, peritoneal CD163+ macrophages secrete high quantities of CCL18 [10]. All these factors could support the "two-hit" theory that attempts to explain the etiology of simple peritoneal fibrosis leading to EPS, and M2 macrophages could be decisive in the second hit. Therapeutic interventions able to shift macrophage polarization from the M2 to the M1 phenotype could be of potential interest to preserve the peritoneal membrane throughout PD, thus preventing EPS.

Our study has limitations, such as the small sample size. Nonetheless, the marker behaved consistently throughout the various statistical analyses.
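The adjusted survival analysis described above could be reproduced along the following lines; this is a minimal sketch using the lifelines library, and the file name and column names are hypothetical, not those of the study dataset.

```python
# Minimal Cox proportional-hazards sketch: does year-3 effluent CCL18 predict
# time to PMF independently of the classical transport parameters?
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table with columns: time_to_pmf (months),
# pmf_event (1 = PMF/EPS, 0 = censored), ccl18_y3, cr_mtac, cr_dp, u_mtac, uf.
df = pd.read_csv("pd_cohort.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_pmf", event_col="pmf_event")
cph.print_summary()  # hazard ratios with 95% CIs, as in the forest plots of Fig 4
```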
In conclusion, we have demonstrated the diagnostic capacity of CCL18 in peritoneal effluent to predict dysfunction of the peritoneal membrane. The most remarkable result is that sustained low CCL18 concentrations appear to predict the absence of membrane dysfunction at the medium term. In contrast, patients with higher or progressively increasing effluent CCL18 levels went on to develop PMF. At year 3 of PD treatment, CCL18 concentrations over 3.15 ng/ml predicted ultimate membrane dysfunction, independent of concurrent peritoneal transport parameters.

Fig 1. Low effluent CCL18 concentrations over time are found in patients with no peritoneal membrane failure. (A) Time course of effluent CCL18 concentrations defines two groups of patients on PD. Eleven patients were included in group 1, and 32 patients were included in group 2. Mean ± SD values are shown. Higher CCL18 effluent values were found in group 2 at all time points. **p < .005; ***p < .001 (Mann-Whitney U test). (B) Kaplan-Meier analysis of PMF development in the patients included in group 1 and group 2. Patients with no sample at a given time point were censored. Log-rank test p = .076. (C) Mean Cr D/P and peritoneal protein losses in patients included in group 1 and group 2 evaluated at different time points. https://doi.org/10.1371/journal.pone.0175835.g001

Fig 2. An increase in effluent CCL18 concentration heralds peritoneal membrane dysfunction. (A) The variable d1U was defined as the last CCL18 effluent value minus the baseline level in each patient included in the longitudinal analysis. (B) The variable Maxub was defined as the maximum minus the minimum CCL18 effluent concentration measured in each patient. A Cox hazard analysis of differences was performed on the patients who developed PMF (N = 10) and the patients who did not (N = 33). https://doi.org/10.1371/journal.pone.0175835.g002

Fig 4. Effluent CCL18 concentrations predict the survival of the peritoneal membrane independent of classical transport parameters. (A) Receiver operating characteristic (ROC) curve analysis of CCL18 effluent concentrations in samples from patients at 3 years of PD treatment who did or did not develop PMF. (AUC: area under the curve; CI: confidence interval). (B) Forest plot showing the hazard ratio (HR) and 95% upper and lower confidence intervals (UCI and LCI) of CCL18 and peritoneal membrane transport parameters in 35 patients measured at the third year of PD treatment. CCL18 showed a significant association with PMF independent of U-MTAC, Cr-MTAC, or D/P Cr. (C) Forest plot showing the HR and 95% UCI and LCI of CCL18 and peritoneal membrane transport parameters measured at the last effluent determination in 61 patients who were treated for more than 3 years with PD. CCL18 showed a significant association with PMF, independent of U-MTAC, Cr-MTAC, or D/P Cr. https://doi.org/10.1371/journal.pone.0175835.g004

Fig 5. Time course analysis of PAI-1 values in patients treated with PD. (A) Scatter plots showing mean and SD values. (B) Effluent PAI-1 values in patients who developed PMF (black bars; N = 10) or who did not (white bars; N = 33). Mean ± SD are shown. No significant differences were detected at any time point (Mann-Whitney U test). https://doi.org/10.1371/journal.pone.0175835.g005

Table 3. Peritoneal function and PMF risk factors in patients included in group 1 and group 2.
[Table 3: column headings only; the cell values were not preserved in extraction. Columns: Peritoneal function — Baseline, 1 year of PD, 2 years of PD, 3 years of PD; rows include Cr D/P (mean ± SD). *p = 0.04 (Mann-Whitney U test). Abbreviations: MTAC: mass transfer-area coefficient; RRF: residual renal function; D/P Cr: dialysate/plasma creatinine; DM1: diabetes mellitus type 1.]

Table 4. ROC curve analysis of PAI-1 and CCL18 effluent concentrations and appearance rates (calculated as concentration × drained volume in 4 h) at baseline (time 0) and after 3 years of PD treatment as diagnostic factors for PMF. https://doi.org/10.1371/journal.pone.0175835.t004
Acute suppurative thyroiditis caused by thyroid papillary carcinoma in the right thyroid lobe of a healthy woman

Background

The thyroid gland is resistant to microbial infection because of organ characteristics such as encapsulation, iodine content, and a rich blood supply. Therefore, acute suppurative thyroiditis (AST), a bacterial infection of the thyroid gland, is rarely seen. AST typically occurs on the left side of the neck in children, because of the frequent coexistence of a left piriform sinus fistula, the most common route of infection. AST is also usually seen in immunocompromised hosts. Herein, we report a rare case of AST in the right thyroid lobe of an adult woman without any immunocompromising condition.

Case presentation

A 59-year-old woman was referred to our hospital for further examination with fever, sore throat, and right anterior neck swelling. The patient did not appear to be immunodeficient. Neck ultrasonography showed a 47-mm, hypoechoic, heterogeneous nodule with ill-defined margins and irregular form, suggesting a right thyroid malignant nodule. A fine needle aspiration (FNA) biopsy specimen revealed numerous neutrophils in the background without nuclear atypia. Based on the clinical course and cytology, AST was diagnosed. A complete response was obtained with intravenous administration of antimicrobial agents within a week. Imaging findings, including CT, did not show any piriform sinus fistula. Four months later, neck ultrasonography showed a significant decrease in the size of the nodule in the right thyroid gland, to 27 mm, but the lesion still resembled a malignant nodule. FNA was therefore repeated, and cytological examination confirmed papillary thyroid carcinoma (PTC). The patient subsequently underwent total thyroidectomy and bilateral level D1 lymph node dissection. Histological findings revealed a 20-mm PTC in the right lobe with sternothyroid muscle invasion.

Conclusions

This report represents a rare case of AST associated with PTC on the right side of the thyroid gland, found in a healthy adult woman. The reason why AST coincided with a malignant thyroid tumor is unclear. We must take into account that a malignant tumor may exist in the background when AST is identified on the right side of the thyroid gland in a healthy subject.

Background

Acute suppurative thyroiditis (AST) results from bacterial infection and represents a relatively rare condition in the thyroid gland. The thyroid gland is resistant to microbial infection because of factors such as its encapsulation, iodine content, and rich blood supply [1,2]. As a result, AST rarely develops in healthy individuals. Typically, AST is more likely to occur in children and on the left side of the neck. In 80% of patients with AST, the age at onset is before 10 years (with 30% between birth and 2 years of age), and only 8% of cases occur in adulthood [3]. The presence of a left piriform sinus fistula has been reported as important, as a potential route of infection [4]. AST on the right side of the thyroid gland in adults is thus rarely seen. We encountered a case of AST in the right lobe of the thyroid in a healthy woman. Moreover, the AST developed against a background of papillary thyroid carcinoma (PTC).

Case presentation

A 59-year-old woman was referred to our hospital with a 4-week history of fever, sore throat, and a swollen neck, after first visiting a primary-care physician and receiving antibiotics.
She had no chronic diseases and did not appear to be in an immunocompromised state (she was well-nourished, had no diabetes, did not use any steroids, and human immunodeficiency virus (HIV) antibody was negative). On examination, a painful, erythematous, warm nodule was palpable in the anterior neck on the right side. The nodule showed limited mobility and no adjacent lymphadenopathy. The patient had no medical history of note, and no infectious symptoms such as cough, headache, or abdominal or joint pain were identified, other than the neck pain. Axillary temperature was 36.9°C, heart rate was 109 beats/min, and blood pressure was 169/88 mmHg.

Table 1 shows the results of laboratory examination at the first visit (day X). Hematological tests revealed a high erythrocyte sedimentation rate (102 mm/hr) and an elevated concentration of C-reactive protein (10.4 mg/dL). A slight increase in thyroid-stimulating hormone (TSH) was also identified, indicating subclinical hypothyroidism. Thyroglobulin antibody was negative, and the serum thyroglobulin level was high (3590 ng/mL). Neck ultrasonography showed a 47-mm nodule, with ill-defined margins, irregular form, and a hypoechoic, heterogeneous appearance, in the area of the right thyroid, suggesting thyroid malignancy (Fig. 1a). Contrast-enhanced computed tomography (CT) confirmed a 37 × 37 × 42-mm nodule within the right thyroid lobe at the middle and lower pole. The thyroid mass resulted in displacement of the trachea toward the left and showed enhancement in the peripheral area of the nodule (Fig. 1b). No signs of metastasis were apparent, including in the lungs and bone.

Fine needle aspiration (FNA) was performed at the first visit. Findings on cytological examination were suggestive of AST, because little nuclear atypia was evident and numerous neutrophils were seen in the background (Fig. 2). No PTC was apparent at that time. Based on the clinical course and cytology, AST was diagnosed, and the patient was admitted to the endocrinology department. The clinical course is shown in Fig. 3. Antibiotic therapy produced a complete response, with rapid improvement within 1 week. The serum thyroglobulin level tended to decrease (528 ng/mL on day X + 8; 47.3 ng/mL on day X + 34). Blood cultures on day X + 6 yielded negative results. CRP concentration was just 0.71 mg/dL on day X + 8, and the patient was discharged with complete disappearance of symptoms. A barium swallow study was performed after CRP turned negative, but no fistula of the piriform sinus was detected (Fig. 4).

Neck ultrasonography 4 months after onset showed that the nodule in the right thyroid gland had shrunk to 27 mm in diameter, but it still resembled a malignant nodule. FNA was repeated, and cytological examination revealed overlapping cell clusters, high nuclear density, nuclear grooves, and intranuclear cytoplasmic inclusion bodies, leading to a diagnosis of PTC (Fig. 2c). The patient subsequently underwent total thyroidectomy and bilateral level D1 lymph node dissection. Histological examination revealed a 20-mm PTC in the right lobe with invasion to the sternothyroid muscle, and a 16-mm PTC in the left lobe (Fig. 5). The PTC in the left lobe was not found by ultrasonography before surgery. One possible reason is an inhomogeneous thyroid gland due to adenomatous goiter, but it remains unclear why the left PTC was missed; it was first detected on histological examination after surgery.
The right PTC was well-differentiated, but with large necrotic regions. The left PTC was close to the sternothyroid muscle. No lymph nodes contained metastatic PTC. No cord-like tissue suggestive of a fistulous tract was found communicating between the right lobe and the hypopharynx. Pathological staging was pT3N0. Postoperatively, the patient received radioactive iodine ablation.

Discussion and conclusions

This represents a rare case of right-sided AST concomitant with PTC, found in an adult woman who did not appear immunocompromised and did not have any other foci of infection. In addition, we could not find any evidence of a piriform sinus fistula, even after resolution of the inflammation. Some reports have described concomitant AST and thyroid cancer. One case developed AST after FNA of a PTC and was therefore considered an infection secondary to needle aspiration [5]. Haddad and colleagues reported a case of AST in a patient with ischemic heart failure and type 1 diabetes mellitus [6]. In another case, a pregnant woman who had given birth by Cesarean section was diagnosed with AST after thyroidectomy [7]. The present case clearly differed from these cases in the lack of a clear cause of AST.

The reason for the AST occurring with a malignant tumor in the present case remains unclear, as this woman showed no sign of infectious disease, was not immunocompromised, and had no piriform sinus fistula. Because the AST and PTC showed identical locations, the PTC could easily be imagined to be infected with bacteria, but the mechanism is not clear. A previous report [8] suggested that an abnormal blood supply from a PTC could facilitate infection, and an abnormal blood supply from the malignant tumor may thus have resulted in infection in our case. A piriform sinus fistula has been reported as the most common route of infection in AST [4]. In the present case, however, we could not find any sign of a piriform sinus fistula. Repeated inflammation may result in adhesions within the fistula, potentially masking the fistula in cases with repeated episodes of infection. However, this patient had presented with her first episode of inflammation, so a fistula as the route of infection seems unlikely in our case. We attempted to culture bacteria from the thyroid nodule to provide insights into the source of infection, but the results were negative. Cultivation of bacteria probably failed because the patient had been given antibiotics for about 1 week before the sample was obtained for cultivation.

In 80% of AST patients, the age at onset is less than 10 years, with 30% occurring between birth and 2 years of age, and only 8% of cases occur in adulthood [3]. An overview of 109 cases of AST reported that 85 patients had first experienced AST in childhood [9]. Patients diagnosed with AST are likely to have experienced repeated inflammation of the neck, but our patient had no past history of neck infections. The same overview also reported that 92% of patients showed left-sided infection, with bilateral infection in only 2% [9]. Kingsbury described poor development of the right branchial arch in the prenatal period [10]. Park reported that a piriform fistula is more likely on the left than on the right, because the pharyngeal arch is drawn out to the nasal side during formation of the aortic arch by the left fourth branchial arch [11], which is why AST mostly occurs in the left lobe of the thyroid. AST on the right side of the thyroid gland in adults is rarely seen.
Those cases of right-sided AST in adults that have been reported have shown backgrounds of infection such as infective endocarditis [8,12] or miliary tuberculosis [13], or immunocompromise due to steroid use or HIV [14,15]. In our case, the patient was HIV-negative and did not have any other infection or underlying diseases, and was thus not considered to be in a compromised condition. The cause of the AST remains unknown, but the possibility of some involvement of the PTC must be considered.

This case was uncommon in terms of age and lesion location, which made differentiation of AST from malignancy difficult in the early phase. A paper by Lin reviewed 30 patients with malignant thyroid cancer who showed clinical features similar to AST [16]. The significant characteristics of malignant thyroid tumor were clearly indicated as follows: 1) higher age at diagnosis (P = 0.0155); 2) presence of dysphonia (P = 0.0325); 3) right lobe involvement (P = 0.0151); 4) larger thyroid mass (P = 0.0013); 5) presence of anemia (P = 0.0075); and 6) sterile pus culture (P = 0.0013). Our case met 4 of these 6 clinical features suggestive of malignancy. The symptoms and signs of malignant thyroid tumor may mimic those of infectious thyroiditis, so we should be careful in distinguishing AST from an aggressive malignant tumor. Long-term follow-up using both ultrasonography and FNA is also necessary. We obtained findings indicating PTC 5 months after identifying the presence of AST, on the third cytological examination. No findings even suggestive of PTC were evident on the first cytological examination, with relatively few variant epithelial cells and numerous leukocytes. One possibility is that we sampled a location where lymphocytes were gathered or that was necrotic.

We encountered a case of AST in the right lobe of the thyroid in a healthy woman. AST can develop with a malignant thyroid tumor, so we must take into account that a malignant tumor may exist in the background when AST is identified on the right side of the thyroid gland in a healthy subject.

Abbreviations

AST: Acute suppurative thyroiditis; ESR: Erythrocyte sedimentation rate; FNA: Fine needle aspiration; FPG: Fasting plasma glucose; HbA1c: Hemoglobin A1c; PTC: Papillary thyroid carcinoma; Tg: Thyroglobulin; TSH: Thyroid-stimulating hormone

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Authors' contributions

HO and MN mainly examined and determined how to treat the patient and were major contributors in writing the manuscript. SK and MM also examined the patient. MY, MY and TS gave much advice during treatment of the patient. TF, IM, NA and HK performed the patient's operation. TI, AA, NI and RM performed the histological examinations of the thyroid. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Written informed consent was obtained from the subject for publication of this report. A copy of the written consent is available for review upon request.

Competing interests

The authors declare that they have no competing interests.

Author details
Cortisol Response in Breast Cancer: The Role of Physical Activity and Exercise

Chronic stress is a consistent sense of feeling pressured and overwhelmed over a long period of time, and it has been defined as a maladaptive state associated with an altered hypothalamic-pituitary-adrenal (HPA) axis. Hyperactivity of the HPA axis is commonly assessed by cortisol levels. Physical activity (PA) and exercise have been demonstrated to regulate cortisol patterns in different healthy study populations, and also in BC patients and survivors. PA and exercise are related but distinct concepts that are commonly confused. Nowadays, the regular practice of PA and exercise is widely recognized as a main strategy to manage chronic stress, but its effect on stress-related markers, like cortisol, remains elusive. In the present review, the authors focus on the evidence regarding PA and exercise in relation to cortisol patterns of BC patients and survivors.

Introduction

Breast cancer (BC) is a leading cause of cancer death in women, with the number of diagnoses growing each year [1]. Almost two million cases of BC were diagnosed in recent years, according to the World Health Organization (WHO) [2]. Both non-pharmacological and pharmacological treatments of BC can result in adverse side effects at distinct levels, such as physical function and metabolic, cardiorespiratory, and psychological health [3,4]. These consequences may be associated with an interaction between pharmacological therapies and the physiopathological and psychological condition of each woman at the moment of diagnosis. After a diagnosis of BC, women experience emotional distress, depression, and anxiety, which can persist for prolonged periods, irrespective of the clinical treatment outcome [5].

Chronic stress is a consistent sense of feeling pressured and overwhelmed over a long period of time, and it has been defined as a maladaptive state associated with altered immunity, hypothalamic-pituitary-adrenal (HPA) axis, and sympathetic nervous system (SNS) functioning [3]. Although research is still limited, dysregulation of the HPA axis and SNS, depression, and anxiety have been reported in BC patients and survivors [5]. Studies show that almost 50% of BC patients experienced depression and/or anxiety during cancer treatment [4,6], and approximately 25% of women have clinically important levels of emotional distress up to 12 months after treatment [5]. Hyperactivity of the HPA axis in response to chronic stress is commonly assessed by the cortisol awakening response (CAR), i.e., the rapid increase in cortisol secretion roughly within the first 30 min after waking that occurs daily, signifying the physiological stress response to waking [7]. Cortisol levels are usually highest around awakening and decrease during the day [7]; however, the majority (>60%) of patients with BC show flattened circadian profiles, high levels, or unpredictable fluctuations [8,9].

Physical activity (PA) and exercise have been recognized as part of a healthy lifestyle, being associated with reduced risk of BC through several mechanisms, including regulation of sex-steroid hormones [10], maintenance of a healthy weight [11], reduction of inflammation [12], and improvement of the immune response [13]. Some studies reported that PA and exercise were able to decrease cortisol levels across different healthy study populations [14-16]. In the systematic review and meta-analysis conducted by De Nys et al.
[17], ten original studies were included, comprising randomized controlled trials (RCTs) and non-RCTs with relevant control groups. They found moderate-certainty evidence for PA as an effective strategy for lowering cortisol levels in women with different clinical conditions. Although PA and exercise are related, they are distinct concepts that were used interchangeably in that study. In addition, the role of PA and exercise as a beneficial strategy to manage chronic stress and its related markers, such as cortisol, remains elusive. In the present review, we focus on the evidence regarding the effects of PA and exercise on cortisol fluctuations in BC patients and survivors.

Hypothalamic-Pituitary-Adrenal Axis in Response to Chronic Stress

In response to chronic stress, the physiologic mechanisms involve the neuroendocrine pathways constituting the SNS and the HPA axis [3,4]. Both mechanisms are initiated by the release of several neurotransmitters and hormones that influence behavioural and biochemical changes [18]. Under chronic stress, the brain's nerve impulses can continuously activate the hypothalamus to produce corticotropin-releasing factor, which targets the pituitary gland. In turn, the pituitary gland releases adrenocorticotropic hormone (ACTH) [19], which reaches the adrenal cortex via the bloodstream and promotes the synthesis of corticosteroids, including cortisol. In addition, the SNS is triggered by chronic stress, stimulating the production and secretion of norepinephrine and epinephrine, both known as catecholamines [18,19]. Both corticosteroids and catecholamines may contribute to a decline in the functions of the prefrontal cortex and the hippocampus, and may enhance the activation of the SNS and the HPA axis by regulating the expression of glucocorticoid receptors [18,20].

Hyperactivation of the SNS and HPA axis in response to chronic stress has been demonstrated to contribute, at least in part, to several cancer-promoting processes, such as tumorigenesis, progression, metastasis, and multi-drug resistance, by altering the tumour microenvironment (TME) [21]. A stressed TME is characterized by an increased proportion of cancer-promoting cells and cytokines, a reduced number and impaired function of immune-supportive cells and cytokines, increased angiogenesis and epithelial-mesenchymal transition, as well as a damaged extracellular matrix [19,21]. Of note, enhanced β-adrenergic and glucocorticoid signalling in the TME can be induced not only by chronic stress but also by TME hypoxia [22].

Cortisol

Cortisol is an adrenal hormone with many functions in the human body, such as mediating the stress response and regulating metabolism and inflammatory and immune functions [29]. Considering that cortisol is a glucocorticoid and that glucocorticoid receptors are present in almost every tissue in the body, it affects nearly every organ system, including the nervous, immune, cardiovascular, respiratory, reproductive, musculoskeletal, and integumentary systems [29]. Cortisol displays strong circadian rhythmicity, with high levels in the morning in the first 30-45 min after awakening, known as the CAR; a gradual decline follows this peak during the waking day, reaching the lowest levels at midnight [29]. This diurnal fluctuation is indicative of HPA axis reactivity [30]. Additionally, salivary cortisol levels have been shown to correlate highly with plasma and serum cortisol levels [31].
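To make the awakening response and the diurnal decline described above concrete, here is a minimal sketch of how the CAR and a diurnal cortisol slope could be computed from one day of salivary samples; the sampling times and values are hypothetical, not taken from any study cited here.

```python
# Minimal sketch: CAR and diurnal cortisol slope from one day of saliva samples.
import numpy as np

hours = np.array([0.0, 0.5, 4.0, 9.0, 14.0])      # time since waking (h), hypothetical
cortisol = np.array([12.0, 18.5, 9.0, 6.0, 3.5])  # salivary cortisol (nmol/L)

car = cortisol[1] - cortisol[0]  # cortisol awakening response: rise ~30 min post-waking

# Diurnal slope: linear fit over the waking day, excluding the awakening rise.
# A flatter (less negative) slope suggests a disrupted circadian profile.
slope, intercept = np.polyfit(hours[1:], cortisol[1:], 1)
print(f"CAR = {car:.1f} nmol/L; diurnal slope = {slope:.2f} nmol/L per hour")
```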
Cortisol Behaviour in Breast Cancer

BC patients and survivors are likely to have a dysregulated HPA axis and an abnormal pattern of cortisol secretion, as previously demonstrated [8,9]. Previous research showed that BC survivors may experience significant alterations in their cortisol secretion patterns, as well as disruptions in the circadian rhythm of the HPA axis [9,32,33]. In Obradović et al.'s study [33], the increase in glucocorticoids during BC progression was related to a lower survival rate, which is in agreement with a stimulatory effect of cortisol on cell proliferation observed in different cancer cell lines [27,34]. On the other hand, women with advanced BC receiving tamoxifen as first-line treatment presented significant elevations in basal cortisol levels compared to age-matched healthy women [8]. These findings suggest that BC is associated with a hyperactive adrenal gland, which may be due to the physiological stress associated with the presence of (micro)metastases or tumour cells in the circulation, in combination with the administration of tamoxifen.

Physical Activity and Exercise

Although PA and exercise are often used interchangeably, PA is defined as "any bodily movement produced by skeletal muscles that results in energy expenditure" [35]. PA is closely related to, but distinct from, the concept of exercise. Exercise is a subset of PA defined as "planned, structured, and repetitive bodily movement done to improve or maintain one or more components of physical fitness" [35]. Exercise practice has been widely recognized to improve cardiorespiratory fitness, which positively affects health and self-efficacy [36], and to reduce insomnia-related distress [10] by improving nocturnal sleep [37] and, consequently, body recovery [36,37]. In the psychosocial domain, exercise has several benefits, including fostering interpersonal relations, which are important to attenuate depression- and anxiety-related symptoms [38,39].

Effects of Physical Activity on Cortisol Levels in Breast Cancer

As shown in Table 1, only two studies have examined the association between PA and salivary cortisol [40,41]. Lambert et al. [40] found no association with cortisol in physically active BC women in the post-treatment phase. On the other hand, Castonguay et al. [41] reported a decrease in salivary cortisol in 145 moderately-to-vigorously physically active BC women at least 12 months post-treatment. Both studies used healthy women without a BC history as a comparator group. Taken together, these findings reveal that little is known about the role of PA in the cortisol levels of BC women.

Effects of Exercise on Cortisol Levels in Breast Cancer

Regarding the effects of exercise intervention programs on cortisol, conflicting data exist, as seen in Table 2. Some studies reported no changes in BC women after 14 weeks of home-based walking [42], 16 weeks of aerobic combined with strength exercise [43], 6 weeks of qigong [44], 3 weeks of dance movement therapy [45], or 48 weeks of supervised and unsupervised exercise sessions [46]. Three studies used exercise programs that included yoga classes of different durations: one lasted 14 weeks [47] and two lasted 6 weeks [48,49]. Ratcliff et al. [49] hypothesized that a 6-week yoga-based exercise intervention during radiotherapy would be beneficial for women with high baseline depressive symptoms compared with their counterparts participating in stretching or waitlist control groups. In this study, the yoga group was associated with a steeper cortisol slope compared with the stretching and waitlist groups.
In this study, yoga group was associated with a steeper cortisol slope compared with stretching and waitlist groups. In this line, findings [39,48] support the idea that yoga intervention provided a huge mental health-related benefits for women with elevated sleep disturbance and, to a lesser extent, depressive symptoms prior to the start of radiotherapy. This effect varied in time with differences emerging especially 3 and 6 months after radiotherapy. Of note, some of these findings should be looked with some caution as the reduced reliability of cortisol slopes assessed at later follow-up points (because of a smaller sample size) may have limited the power to detect the effects of cortisol slopes [49] . Moreover, two of these [48,49] studies have assessed salivary cortisol after chemotherapy and during the radiotherapy, excluding other treatments phases. Interestingly, no study has studied the effects of exercise on cortisol variations during diagnosis, chemotherapy or after surgery. In an exploratory investigation, Evans et al. (2016) [50] aimed to study the effects of one bout of acute exercise on plasma cortisol in BC women in post-treatment phase. Although healthy women without BC history have been used as a comparator group, both groups (intervention and comparator) display identical body index mass, oxygen requirements (18.1±2.7 vs. 18.5±0.83 mL O 2 /min/kg, workload (107±19 vs. 106±17 watts), heart rate (68±6 vs. 66±9 bpm) and RPE (12±1 vs. 12±1). This point is very pertinent as the responsiveness of stress hormones is directly proportional to physical and physiological demands of the body [51] . This study brings novel findings demonstrating that cortisol levels changed across time in the BC survivor group with a decrease immediately after exercise session cessation, but without significant changes after 2 h. The intermittent nature of the exercise training protocol may have stimulated the metabolic and hormonal responses differently than continuous exercise, which explain partly the unexpected cortisol variations. Therefore, the implementation of exercise programs with these characteristics (i.e., intercalated with high-intense periods with low-to-moderate periods of exercise) specially in BC patients/survivors are utmost importance to know the potentially of this exercise type. Exercise-induced fluctuations in plasma cortisol levels typically follow a threshold effect in which exercise at ≥60% of maximal oxygen consumption (VO 2 max) of intensity induce increased plasma cortisol concentrations [51] . However, in the Evans's study [50] the intensity prescription was based on VO 2 peak, which is usually slightly lower than VO 2 max. Thus, exercise may not have reached the threshold that was necessary for eliciting an increase in plasma cortisol concentration, and the decreases in plasma cortisol may have occurred because the rate of removal exceeded the rate of secretion [50] . Another important factor that may help to explain the Evan's findings could be the suppressive role of selective estrogens receptor modifiers, such as Tamoxifen, on adrenal corticosteroids release [52] . In fact, Tamoxifen is a selective estrogen receptor modulator widely used in adjuvant therapy for estrogen receptor-positive BC [53] . Considering that BC women generally received chemotherapy and, thereafter, intake hormonal therapy medication, some controverse results may be in part due to the use of current medication. 
Conclusions

Based on the data from the studies reviewed, the current state of knowledge supports PA and exercise as interventions that should be included in a BC woman's health care program, given their fundamental role in chronic stress management. Although a few studies suggest a beneficial effect of exercise on cortisol in BC women during or after radiotherapy, no study has considered other cancer treatment phases. Future studies are warranted to address the effects of PA and exercise on the cortisol patterns of BC women at different cancer treatment phases, along with evaluation of chronic stress and other psychological parameters.

Disclosure Statement

The authors do not have any financial interest and did not receive any financial benefit from this research.
Autonomous manipulation with a general-purpose simple hand

While complex hands seem to offer generality, simple hands are often more practical. This raises the question: how do generality and simplicity trade off in the design of robot hands? This paper explores the tension between simplicity in hand design and generality in hand function. It raises arguments both for and against simple hands, it considers several familiar examples, and it proposes an approach for autonomous manipulation using a general-purpose but simple hand. We explore the approach in the context of a bin-picking task, focused on grasping, recognition, and localization. The central idea is to use learned knowledge of stable grasp poses as a cue for object recognition and localization. This leads to some novel design criteria, such as minimizing the number of stable grasp poses. Finally, we describe experiments with two prototype hands to perform bin-picking of highlighter markers.

Introduction

Complex hands surely offer greater generality than simple hands. Yet simple hands such as the prosthetic hook have demonstrated a degree of generality yet untapped by any autonomous system, either with simple or complex hands. The goal of our research is to develop general-purpose autonomous manipulation with simple hands. Our primary motive is that simple hands are easier to study and to understand. Study of simple hands may more quickly yield insights leading to autonomous general-purpose manipulation. A secondary motive is that simple hands are often more practical. Simple hands are smaller, lighter, and less expensive than complex hands. Robots may require simple hands for the indefinite future for some applications such as micro-manipulation or minimally invasive surgery, just as humans use simple tools for many tasks.

How do we define simplicity and generality? By a simple hand we mean a hand with few actuators and few sensors, and with economically implemented mechanisms, so that the whole hand can be small, light, and inexpensive. Figure 1 shows two examples: a simple pickup tool, and P2, a prototype simple gripper developed in this work for a bin-picking task. By generality we mean that the hand should address a broad range of tasks and task environments, and not be tuned for a specific one.

(Matthew T. Mason, Alberto Rodriguez and Siddhartha S. Srinivasa are with the Robotics Institute at Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA <albertor@cmu.edu, matt.mason@cs.cmu.edu, siddh@cs.cmu.edu>. Andrés S. Vázquez is with the Escuela Superior de Informatica at Universidad de Castilla-La Mancha, Paseo de la Universidad 4, Ciudad Real, 13071 Spain <andress.vazquez@uclm.es>. This paper is a revision of papers appearing in the proceedings of the 2009 International Symposium on Robotics Research (ISRR) and the 2010 International Symposium of Experimental Robotics (ISER).)

What is the nature of the tradeoff between simplicity and generality in a robot hand? Some arguments in favor of complexity are:

• Grippers for manufacturing automation are often simple and highly specialized, perhaps designed to grasp a single part. Human hands, in contrast, are more complex and more general.
• Hands grasp by conforming to the object shape. Motion freedoms are a direct measure of a hand's possible shape variations.
• Beyond grasping, many tasks benefit from more complexity. Manipulation in the hand, and haptic sensing of shape, to mention two important capabilities, benefit from more fingers, more controlled freedoms, and more sensors.
• The most general argument is: design constraints have consequences. Restricting actuators, sensors, and fingers to low numbers eliminates most of the hand design space.

However, there are simple but general grippers, for example prosthetic hooks, teleoperated systems with simple pincer grippers, and the simple pickup tool shown in Figure 1a. With a human in the loop, unhindered by the limitations of current autonomous systems, we see the true generality offered by simple hands. We conclude that, while there is a tradeoff between simplicity and generality, the details of that tradeoff are important and poorly understood. Simple grippers can achieve a level of generality that is not yet achieved in autonomous robotic systems.

Autonomous manipulation with simple hands requires a new approach to grasping. Past manipulation research often adopts an approach we will call "put the fingers in the right place". The robot plans a desired stable grasp configuration, including all finger contact locations, and then drives the fingers directly to those contacts. It is a direct approach to the grasping problem, but often impractical. It assumes the object shape and pose are accurately known, that a suitable grasp is accessible, and that the object does not move during the grasp.

This paper adopts a different approach: "let the fingers fall where they may". The robot plans a motion with the expectation that it will lead the object to a stable grasp. This expectation anticipates uncertainty in object shape and initial pose, as well as object motion during the grasp. Because there might be several possible configurations for the resting pose of the object, the hand resolves that uncertainty with sensing, during and after the grasp. Thus the approach can also be summarized as "grab first, ask questions later".

Machine learning is central to our approach. The grasping process is well beyond our ability to model and analyze when the object is allowed to move and interact with fingers and with clutter, and when the object shapes and initial poses are noisy. Nonetheless, there may be enough structure to choose promising grasp motions based on experience.

Our approach was inspired by the simple pickup tool of Figure 1a. The pickup tool can grasp various shapes without careful positioning of fingers relative to the object. Our first gripper prototypes P1 and P2 in Figure 1b mimic the pickup tool but also address some of its limitations. Sometimes a robot needs not only to grasp, but also to know what it has grasped (recognition), and to know the object pose within the hand (localization). Our approach then is to apply the near-blind grasping strategy of the pickup tool, augmented by simple sensors and offline training to address recognition and localization.

We test the approach on a highlighter bin-picking task: given a bin full of randomly placed highlighters, grasp a single object and accurately estimate its pose in the hand. Bin-picking is a high-clutter, high-uncertainty task, with a target-rich environment that simplifies the experimental testing of our approach. Bin-picking is also suited to a learning approach, since failure and iteration may be tolerated in order to produce singulated objects with great reliability. Grasp recognition, in this task, means recognizing the presence of a single marker rather than several markers or no markers. Localization means estimating the orientation of the marker flat against the palm, up to the 180° near-symmetry.
Grasp recognition and localization are the product of two processes. First, the hand is designed so that irregular objects tend to fall into one of just a few discrete stable poses. Second, a modest set of sensor data is used to estimate the pose of the object, or reject the grasp. Thus a key element of the paper, addressed in Section 4, is to explore the distribution of stable poses in the object configuration space.

The main experimental results are presented in Section 6. The grasp recognition and localization systems require offline training comprising 200 random grasps, with the results presented to a machine vision system which determines ground truth. The ultimate result is that our approach can acquire singulated highlighters with error rates as low as we are able to measure, while estimating orientation with an expected error of 8° or less.

The proposed approach and hand design are general-purpose in the sense that they are not specialized to any particular object shape. Our experiments use highlighter markers, but their use was not even contemplated until after the first hand design. The highlighter markers are small for the hand, hence the tendency to grasp more than one at a time. The main principle, employing stable poses isolated in the sensor space, is best suited to irregular shapes, not cylinders. Thus, prototypes P1 and P2 are general-purpose hands, although the extent of their generality over tasks is a harder question outside the scope of this paper.

Previous Work

This paper incorporates results in [46,59]. Additional discussion of grasp characteristics and clutter is available in [47]. Related work on the design of finger phalange form appears in [58].

Although interest in generality and simple hands is high today, the tradeoff between simplicity and generality was discussed even as the first robotic hands were being developed 50 years ago. Tomovic and Boni [68] noted that for some designs additional hand movements would require additional parts, leading to "unreasonably complex mechanical devices". The Tomovic/Boni hand, commonly called the "Belgrade hand", was designed for prosthetic use, but with reference to possible implications for "automatic material handling equipment".

Jacobsen and colleagues [38] likewise raised the issue in the context of the Utah/MIT Dextrous Hand, over 25 years ago: "A very interesting issue, frequently debated, is the question 'How much increased function versus cost can be achieved as a result of adding complexity to an end-effector?' ... In short then the question (in the limit) becomes 'Would you prefer N each, one-degree-of-freedom grippers or one each N-degree-of-freedom grippers?' We believe that a series of well-designed experiments with the DH [dexterous hand] presented here could provide valuable insights about the tradeoffs between complexity, cost, and functionality."

We share the interest in the tradeoff between simplicity and generality, but we depart from Jacobsen et al.'s thinking in two ways. First, we focus on the generality achievable with a single simple hand (or two such hands in the case of bi-manual manipulation) rather than a toolbox of highly specialized grippers. Second, while we agree that complex hands can be used to emulate simpler effectors, we work directly with simpler effectors designed specifically to explore the issues.

Complex Hands

Three approaches to generality have dominated hand design research: anthropomorphism, grasp taxonomies, and in-hand manipulation.
• Anthropomorphism. There are many reasons to emulate the human hand: anthropomorphic designs interface well with anthropic environments; they are the most natural teleoperator device; they facilitate comparisons and interchange between biomechanical and robotic studies; they have purely aesthetic advantages in some assistive, prosthetic, and entertainment applications; they are well-suited for communication by gesture; and finally and most simply, the human hand seems to be a good design [6,22,38,54,76].
• Grasp taxonomies. Rather than directly emulating the human hand, several hand designs emulate the poses taken from taxonomies of human grasp [21,62,51].
• In-hand manipulation. Controlled motion of an object using the fingers, often called "dexterous manipulation" or "internal manipulation" [48].

Two of these three elements were already considered in 1962 with the Belgrade hand [68], which appears to have been cast from a human hand, and which emulated six of the seven grasp poses in Schlesinger's taxonomy [62].

Okada's work [53] almost twenty years later explicitly appealed to the example of the human hand, but Okada's motivation and examples are more focused on the third approach to generality: in-hand manipulation. Salisbury [60] similarly focuses on in-hand manipulation. In Salisbury's work, and in numerous subsequent papers, in-hand manipulation is accomplished by a fingertip grasp, using three fingers to control the motion of three point contacts. For full mobility of the grasped object, each finger requires three actuated degrees of freedom, imposing a minimum of nine actuators. There are other approaches to in-hand manipulation, involving fewer actuators [9,7], but Salisbury's approach requires only the resources of the hand itself, without depending on outside contact, gravity or controlled slip.

Grasp taxonomies and in-hand manipulation are coupled. Grasp taxonomies often identify two broad classes of grasps: power grasps (also called enveloping grasps), and precision grasps (also called fingertip grasps) [51]. In-hand manipulation is usually performed with precision grasps. Salisbury's design [60] was optimized in part for in-hand manipulation, using a specific fingertip grasp of a one-inch-diameter sphere. Both the DLR-Hand II [14] and the UPenn Hand [69] are designed with an additional freedom, allowing the hand to switch from a configuration with parallel fingers, better suited to enveloping grasps, to a configuration where the fingers converge and maximize their shared fingertip workspace, better suited to fingertip grasps.

Taken together, these three elements — anthropomorphism, grasp taxonomies, and in-hand manipulation — have enabled the development of arguably quite general hands. At the same time, they have driven us towards greater complexity.

Simple Hands

Thus far we have looked primarily at complex hands, but simple hands also have a long history. Some of the earliest work in robotic manipulation exhibited surprisingly general manipulation with simple effectors. Freddy II, the Edinburgh manipulator, used a parallel-jaw gripper to grasp a variety of shapes [2].

The Handey system also used a simple parallel-jaw gripper exhibiting impressive generality [43,44]. It grasped a range of plane-faced polyhedra.

Our gripper concept is similar to Hanafusa and Asada's [33], who analyzed the stability of planar grasps using three frictionless compliant fingers. Our work is also close to theirs in our analysis of stability: stable poses correspond to local minima in potential energy.

Theobald et al.
[67] developed a simple gripper called "Talon" for grasping rocks of varying size and shape in a planetary exploration scenario. Talon's design involved a single actuator operating a squeeze motion with three fingers on one side and two fingers on the other. The shape of the fingers, including serrations and overall curvature, as well as their compliant coupling, were refined to grasp a wide variety of rocks, even partially buried in soil.

Simple hands are widely used in industrial automation. Hands are often designed for a single shape, but there are numerous examples of designs for multiple shapes, or for multiple grasps of a single shape [26,50].

Our work is related to other industrial automation problems, such as the design of workholding fixtures [10,11,73,74,65] and parts orienting [32,40,30,16,27], where the ideas of using knowledge of stable poses and iterative randomized blind manipulation strategies are well known, and have even been used in the context of simple hands [45,30]. Broad arguments in favor of simplicity in industrial automation algorithms and hardware are advanced under the name "RISC Robotics" [15,16].

Some recent work inspired by service applications has addressed generality with simple designs, by directly testing and optimizing systems over a variety of environments, rather than referencing human hands or grasp taxonomies. Xu, Deyle and Kemp [75] directly address the requirements of the application (domestic object retrieval). Their design is based on the observation that the task often involves an isolated object on a flat surface, and is validated using a prioritized test suite of household objects [17]. Similarly, Ciocarlie and Allen's work [18] is tuned to obtain a collective optimum over a test suite of 75 grasps applied to 15 objects. Saxena, Driemeyer and Ng [61] likewise use a suite of common household objects. Their work was focused on vision-guided grasping of unfamiliar objects, but their success is also testimony to the generality of the underlying hardware. Dollar and Howe [24] adopt a single generic shape (a disk) and explore variations in pose.

Underactuation is an interesting way to address generality and simplicity. One may achieve some of the advantages of having several degrees of freedom, while still retaining a small number of motors. Our prototypes P1 and P2 use underactuation to drive three and four fingers, respectively, through a single motor. Hirose and Umetani [34,35] designed a soft gripper, controlling as many as 20 freedoms with just two motors. Dollar et al. [24] demonstrate a simple planar two-fingered gripper, with two compliantly coupled joints per finger, and explore grasp generality over object shape and pose. In subsequent work a three-dimensional version employs four two-jointed fingers, all compliantly coupled to a single actuator [25].

Brown et al. [12] take underactuation to an extreme. An elastic bag containing a granular material such as coffee grounds can be switched from a deformable state to a rigid state by applying vacuum: a virtually infinite number of degrees of freedom actuated by a single motor. The device can be used as a gripper which closely conforms to a broad range of shapes and grasps them securely. It is an interesting case for any discussion of generality and simplicity.
Intermediate Complexity Hands

One interesting entry in the discussion of simple and complex hands is the work of Ulrich and colleagues [72,70,69,71], leading to the UPenn Hand and ultimately to the Barrett Hand [4]. Ulrich explicitly attacked the problem of trading off generality for simplicity. He defined simple hands as having one or two actuators, and complex hands as having nine or more actuators. He then defined a new class, medium-complexity hands, with three to five actuators. He achieved the reduction by explicitly eschewing in-hand manipulation, and focusing on a smaller set of stereotypical grasp poses, favoring enveloping grasps over fingertip grasps.

The UPenn and Barrett designs also use underactuation. Each finger has two joints driven by a single actuator. The two joints are coupled to close at the same rate, but a clutch mechanism decouples the proximal joint when a certain torque is exceeded.

Laliberté et al. [41] also develop designs based on underactuated fingers, with two or three freedoms actuated by a single motor, culminating in a 10 degree-of-freedom hand with just two motors, and the commercial Adaptive Gripper [57].

Dimensions of General-Purpose Grasping

In [47] the authors propose a list of eight characteristics of general-purpose grasping to be used to characterize either the requirements of an application or the capabilities of a hand: stability, capture, in-hand manipulation, object shape variation, multiple/deformable objects, recognition/localization, placing, and clutter. In Table 1 we make use of that set of general-purpose dimensions to compare the requirements of bin-picking with the designs of the "pickup tool" and the prototype grippers P1 and P2 described in this paper.

In particular, clutter is a key characteristic of the bin-picking task, and recognition and localization are key capabilities of the "let the fingers fall where they may" approach. This section reviews previous work in clutter, recognition, and localization.

Previous work has seldom addressed clutter explicitly, but there are exceptions. Freddy II [2] used a camera to capture silhouettes of objects. If the objects were in a heap, it would first look for a possibly graspable protrusion, then it would try to just pick up the whole heap, and if all else failed it would simply plow through the heap at various levels, hoping to break it into more manageable parts. Handey [43] addressed clutter by planning grasp poses that avoided the clutter at both the start and goal, and planning paths that avoided the clutter in between. It also planned re-grasping procedures when necessary. Berenson and Srinivasa [5] developed an algorithm for planning stable grasps in cluttered environments, and Dogar and Srinivasa [23] use pushing to avoid clutter.
Many others have explored haptic object recognition and localization.Lederman and Klatzky [42] survey work on human haptic perception, including object recognition and localization.Here we will borrow the biological terminology distinguishing between kinesthetic sensors such as joint angle, versus cutaneous sensors such as pressure or contact.The present work employs kinesthetic sensing along with knowledge of stable poses, both for recognition and localization.Most previous work in robotic haptic object recognition and localization assumes contact location data: point data, sometimes including contact normal, sampled from the object surface [29,31,1].While cutaneous sensing is the most obvious technique for obtaining contact location, it is also possible to obtain contact location from kinesthetic sensors using a technique called intrinsic contact sensing.If you know a finger's shape and location, and the total applied wrench, and if you assume a single contact, then you can solve for the contact location.Bicchi, Salisbury and Brock [7] explored and developed the technique in detail.From a very general perspective, our approach is similar to intrinsic contact sensing.Both approaches use the sensed deformation of elastic structures in the hand, but our learning approach transforms that information directly to object recognition and localization, without the intermediate representation in terms of contact locations. Our work fuses kinesthetic information with information from the system dynamics and controls, specifically the expectation of a stable grasp pose.Siegel [63] localized a planar polygon using kinesthetic data (joint angles) along with the knowledge that each joint was driven to some torque threshold.Our work could be viewed as a machine learning approach to the same problem, extended to arbitrary three dimensional shapes, using a simpler hand.Jia and Erdmann [39] fused contact data with system dynamics to estimate object pose and motion, and Moll and Erdmann [49] also fused haptic data with system dynamics and controls to estimate both the shape and the pose of a body in three dimensions.Natale and Torres-Jara [52] show an example of the use of touch sensors for object recognition. Approach: Let the Fingers Fall Where they May This section outlines our approach to grasping, illustrated by a classic robotic manipulation problem: picking a single part from a bin full of randomly posed parts.The key elements of the approach are: • Simple control strategy: In the traditional approach to grasping, robotic hands try to "put the fingers in the right place", which means driving the fingers directly to the ultimate desired stable grasp pose.This assumes that the object shape and pose are accurately known, that the grasp pose is accessible, and that the object will not move during the grasp.Instead, we "let the fingers fall where they may", which means using a grasp motion chosen so that the gripper and object settle into a stable configuration. The object shape and pose need not be precisely known, and the approach accommodates motion of the object, even in clutter.Hollerbach [36] describes the idea thus: "in which the details work themselves out as the hand and object interact rather than being planned in advance" and calls the approach grasp strategy planning, in contrast with model-based planning which computes exact grasp points based on prior knowledge of the object and environment. • Simple mechanism: The simple control strategy encourages a simple hand design. 
The hand does not need several degrees of freedom per finger, as in the traditional approach. We adopt a gripper concept inspired by the pickup tool in Figure 1a, which is very effective at capturing parts from a bin, even when operated blindly. Gripper prototypes P1 and P2 in Section 5 have low-friction palm and fingers so that, for irregular objects, there are only a few stable grasp configurations. When a single object is captured, we expect the fingers to drive the object to one of those stable configurations.

• Statistical model of grasp outcome: We learn a data-driven model of the relationship between kinesthetic sensor feedback and grasp outcome. We introduce the concept of grasp signature to refer to the time history of the entire grasp process as perceived by the hand's own sensors. The proposed gripper design simplifies the prediction of grasp outcome from grasp signature: by reducing the number of stable poses, in-hand object localization requires minimal sensing.

• Iteration: To address the stochastic nature of our approach, the robot iteratively grasps and classifies, terminating when a single object is captured in a recognized pose.

The main problem addressed by this paper is to determine singulation and object pose within the grasp. We propose to use knowledge of stable grasp configurations to simplify both problems. The knowledge of those stable configurations is gained through a set of offline experiments that provide enough data to model the map from kinesthetic sensors to object pose. This leads to a novel design criterion, to minimize the number of stable grasp poses, which ultimately has implications both for gripper design and gripper control.

Our initial gripper concept departs from the pickup tool that inspired it. The basic concept is a planar disk-shaped frictionless palm, with rigid cylindrical fingers evenly spaced around the palm (Figure 1b). The fingers are attached by revolute joints with encoders. The fingers are compliantly coupled to a single motor. The design is generic and simple, manifestly not guided by the geometry of any particular object to be grasped. It is also easy to simulate and analyze.

The compliant actuation scheme for our prototype grippers is illustrated in Figure 2. The fingers are coupled through linear springs to a motor which is driven to a predetermined stall torque τ_m. Variation in object shape is accommodated mostly by motor travel, rather than by finger joint compliance. Softer finger springs would be an alternative way of accommodating varying sizes, and would have the additional advantage of being more sensitive to contact forces. Unfortunately, excessively floppy fingers sometimes yield no stable grasps at all.

The remainder of the paper reports numerical and experimental results, aimed at evaluating the efficacy of the approach.

Distribution of Stable Poses

This section focuses on the set of feasible stable configurations of hand and object, and their distribution in the object configuration space. If there are only a few different stable poses, well separated in the configuration space, then it is likely that very little sensor data is required to localize the object.
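As a toy illustration of that point (the stable poses and the noise level below are made up for the example), localization against a small set of well-separated stable poses reduces to snapping a noisy configuration reading to the nearest stable pose:

```python
import numpy as np

# Hypothetical stable poses in a 2D slice of the object configuration space.
stable_poses = np.array([[0.0, 0.0], [0.6, 0.3], [-0.5, 0.4]])

# A noisy kinesthetic reading taken after the grasp settles.
rng = np.random.default_rng(0)
reading = np.array([0.55, 0.25]) + 0.05 * rng.normal(size=2)

# With few, well-separated stable poses, a nearest-neighbor rule suffices.
nearest = np.argmin(np.linalg.norm(stable_poses - reading, axis=1))
print(stable_poses[nearest])
```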
One consequence of this approach is that by minimizing the number of stable grasp poses, we maximize the information implied by the existence of a stable grasp. The ideal case would be a single stable pose with a large capture region. Because of the symmetric design of our grippers, that ideal will never be attained. Symmetric objects also depart from the ideal, giving rise to continua of quasi-stable poses, and corresponding pose ambiguities. Friction and sensor noise also add to the difficulties. In practice, even with asymmetric objects, equivalent grasps of an object will be observed as a cluster rather than a point in the sensor space, and precision is compromised.

Figure 2: The diagram illustrates the parallel compliance-actuation scheme used to model hand/object interaction. Units are dimensionless throughout the analysis in Section 4, so that the palm radius is 1 and the constant of the finger springs is k_f = 1. When closing the hand, the motor is driven to a stall torque τ_m.

We model the handling forces by a potential field, following the example of [33]. Assuming some dissipation of energy, stable poses correspond to minima in the potential field. We can also get some idea of localization precision by examining the shape of the potential field. V-shaped wells, or deep narrow wells, are less susceptible to noise than broad shallow U-shaped wells.

Our model of the hand is depicted in Figure 2. Fingers are modeled as lines, and finger-finger interactions are not modeled. For n fingers there are n springs and the motor, giving n + 1 sources of potential energy which account for the total energy of the system U. We assume the motor is a constant torque source τ_m, corresponding to a potential U_m = τ_m · θ_m, where θ_m is the motor position. The finger spring rest positions are also given by θ_m, so each finger potential is given by

U_i = (1/2) k_f (θ_m − θ_i)²,

where k_f is the finger stiffness and θ_i is the finger angle. The total potential energy is:

U = U_m + Σ_{i=1}^{n} U_i.   (1)

By examining the distribution of local minima in that potential field, and the shape of the potential wells, we seek some insights into hand design, finger stiffness, and choice of stall torque.

Examples of Potential Fields

This section shows the potential energy and corresponding stable poses for three objects (a sphere, a cylinder, and a polyhedron), both with the three- and four-fingered prototype grippers. To calculate the potential energy for a given object geometry and pose, the first step is to determine the motor angle θ_m and the finger angles θ_1, . . . , θ_n, yielding a linear complementarity problem [55]. Appendix A describes the solution in detail. Figure 3 shows the potential field of a sphere of radius equal to half the radius of the palm, projected onto the x-y plane. Because of symmetry, x and y are the only coordinates of the pose of the sphere meaningful to the grasping process, and the only ones that can be derived from the knowledge of a stable pose.

The plots in Figure 3 present a unique stable grasp of the sphere, both for the three-fingered and four-fingered cases. With some object shapes, the addition of the fourth finger should "sharpen" the bottom of the potential well, increasing the stiffness of the grasp and adding precision to both grasp recognition and pose estimation. As the plot illustrates, this is not the case with the sphere. However, the global structure of the potential field is altered when adding the fourth finger, yielding a somewhat larger basin of attraction.
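A compact numerical sketch of this equilibrium computation follows (the closed form is derived in Appendix A). The example limit angles and the sign used for the motor term are assumptions of the sketch, not values from the paper:

```python
import numpy as np

def grasp_equilibrium(theta_l, tau_s, k_f=1.0):
    """Solve the complementarity problem of Appendix A in closed form.

    theta_l: finger limit angles (angles of first contact with object or palm).
    tau_s:   motor stall torque; k_f: finger spring constant.
    """
    tl = np.sort(np.asarray(theta_l, dtype=float))
    # T[i]: motor torque at the instant the motor reaches tl[i]
    T = np.array([k_f * np.sum(tl[i] - tl[:i + 1]) for i in range(len(tl))])
    i_m = int(np.max(np.flatnonzero(T <= tau_s)))  # last torque limit below stall
    theta_m = tl[i_m] + (tau_s - T[i_m]) / ((i_m + 1) * k_f)
    theta = np.minimum(tl, theta_m)                # contacting fingers stop at tl
    return theta_m, theta

def grasp_energy(theta_m, theta, tau_s, k_f=1.0):
    # Motor term with a sign chosen so that closing lowers the potential
    # (a convention of this sketch; it depends on how theta_m is measured).
    return -tau_s * theta_m + 0.5 * k_f * np.sum((theta_m - theta) ** 2)

# Example with made-up limit angles for a three-fingered grasp:
tm, th = grasp_equilibrium([0.2, 0.5, 0.9], tau_s=0.1)
print(tm, th, grasp_energy(tm, th, tau_s=0.1))
```

Evaluating this energy over a grid of object poses (each pose inducing its own limit angles) reproduces potential fields like those in Figures 3 through 5.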
Figure 4 shows the potential field of a cylinder with both the three- and four-fingered hands. Cylinder location is represented by (r, α), where r is the minimum distance from the cylinder axis to the palm center and α is the axis angle relative to the x axis. Translation of the cylinder along its length is unobservable, and therefore not meaningful for this analysis. As a consequence, each local minimum corresponds to a continuum of quasi-stable poses in the x-y plane that allow translation along the cylinder's axis. Figure 4a shows six stable poses for the case of the three-fingered gripper, and Figure 4b shows four for the case of the four-fingered one, corresponding to the different interlocked configurations of fingers and cylinder. These are, in fact, the most frequent poses observed in the experiments described in Section 6.

Figure 5 shows the potential field of a scaled 3-4-5 polyhedron, with both the three- and four-fingered hands. Assuming that a triangular face lies flat on the palm, the set of possible displacements is three-dimensional, and cannot be reduced to a two-dimensional plot as we did for the sphere and the cylinder. Figure 5 shows a slice of the potential energy where the orientation of the polyhedron is held constant. For that specific orientation, the hand yields a stable pose.

As expected with an irregular object, the stable poses of the polyhedron are isolated points. It isn't known whether an irregular polyhedron exists that would not produce isolated stable poses. One illustrative example is an interesting singular shape developed by Farahat and Trinkle [28], which would exhibit planar motions while maintaining all three contacts without varying the finger angles; but Farahat and Trinkle's example does not correspond to a stable pose of the proposed simple hand.

Stall Torque and Stability

The proposed strategy for grasping an object consists of closing the hand by driving the motor to stall and letting hand and object settle into a stable configuration. In this section we show that the stability of that process depends on the motor stall torque.

While it might seem intuitive that the stronger the grasp, the more stable it is, the reality is not so simple. In the compliant actuation scheme of Figure 2, high motor torque implies high spring compression, which might yield unstable grasps, as shown by Baker, Fortune and Grosse [3] in the context of the Hanafusa and Asada hand [33].

To illustrate this effect, we show in Figure 6 the potential field of a three-fingered hand grasping a sphere for four increasing values of the stall torque. Only the first two cases yield stable grasps.

Experimental Prototypes

This section describes the design and construction of prototypes P1 and P2 (Figure 7 and Figure 8). The main purpose is to explore the ideas in Section 3, in particular the "let the fingers fall where they may" approach, and at the same time deal with the particular constraints of the bin-picking task, such as singulation of objects and heap penetration. The three main guidelines followed in the design are:

• A circular, flat, low-friction palm. Avoiding friction is meant to produce fewer stable grasp poses and yield wider capture regions. For both prototypes, the palm is covered with a thin sheet of Teflon.

• Thin cylindrical fingers arranged symmetrically around the palm.

• All fingers compliantly coupled to a single actuator.
While we are interested in the behavior of the grippers across different scales, we chose the dimensions so that we could build them mostly with off-the-shelf components.The highlighter markers were selected because they are inexpensive, readily available in large numbers, and about the right size.In fact they are a bit small, but that suits the experiment since it enables the hand to grasp several at a time. The palm was laser cut measuring 2 inches in radius.Fingers are made out of 3/16 inch stainless steel rods measuring 2.5 inches long.Our desire to minimize friction met with limited success.Bench tests yielded a coefficient of friction of 0.13 (friction angle 7 • ) between palm and marker, and a coefficient of friction of 0.34 (friction angle 19 • ) between finger and marker. Prototype P1 (Figure 7) has three fingers.The gripper is actuated by a DC motor that transmits the power to the fingers through a geartrain.The actuator is controlled open loop and driven to stall.Torsional springs coupling the fingers with the gear assembly introduce compliance which allows for moderate conformability of the hand.While all of our bin-picking experiments are with highlighter markers, our analysis in Section 4 with different object shapes suggests that the gripper will work with a variety of objects. Prototype P2 (Figure 8) has four fingers.The actuation is transmitted through a leadscrew connecting the motor to an individual linkage for each finger.The linkage has been optimized to maximize the stroke of the fingers and to yield a nearly uniform transmission ratio from leadscrew to finger rotation.One link in each finger linkage is elastic and provides compliance to the gripper.As explained in Section 4.1, owing to the fourth finger, P2 has a theoretical advantage both in its capture region and grasp stability, as measured by the basin of attraction.However, we can also expect it to perform worse in the presence of clutter.And while P2 might have improved recognition and localization for some objects, we shall see in Section 6 that for highlighter markers the performance is worse, which we attribute to the "self-clutter" effect: fingers interfere more often with each other when the hand has four fingers than when it has three. Sensing the state of the hand is key if we want to "grasp first and ask questions later", so P2 has absolute encoders on each finger and the actuator.Figure 9 shows an example of finger signatures for successful and failed grasp attempts. P1 and P2 are minimal implementations of the proposed simple gripper and only the first of a series of prototypes to come.Still, they have been useful in two ways.First, they have helped us to realize the importance of the mechanical design as part of the search for simplicity, in particular that we should also address complexity of fabrication.At the same time, both grippers have allowed us to verify or refine some of the ideas that arise from the theoretical study of stability in Section 4. 
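As a side note on the bench tests reported above, the quoted friction angles are simply the arctangents of the measured friction coefficients, which a two-line check confirms:

```python
import math

# Bench-test friction coefficients reported above.
for name, mu in [("palm-marker", 0.13), ("finger-marker", 0.34)]:
    print(name, round(math.degrees(math.atan(mu))), "degrees")
# palm-marker 7 degrees, finger-marker 19 degrees
```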
Experimental Results In this section we describe the implementation and results obtained in our approach to the bin-picking problem.Bin-picking is characterized by high clutter and high pose uncertainty, making it a challenging task for the conventional model-driven "put the fingers in the right place" approach.It has been the focus of numerous research efforts for several decades, yet successful applications are rare [37,66,8].As we shall see the "let the fingers fall where they may" approach handles high clutter and pose uncertainty, and also benefits from the target rich environment inherent to bin-picking.The experimentation is divided in two parts: First, an offline learning process creates a data-driven model of the mapping from grasp signature to grasp outcome.Second, the robot attempts grasps until it detects a singulated object in a recognizable pose.Grasp classification and in-hand localization capabilities are key to the success of our approach.In the next sections we evaluate and compare the performance of P1 and P2 in both capabilities. Experimental Setting We test our prototype grippers with a 6 DOF articulated industrial manipulator.A preprogrammed plan moves the gripper in and out of the bin iteratively while the gripper opens and closes.For each iteration we record the final state of the hand for P1, and the entire grasp signature for P2.We also record the grasp outcome-the number of markers grasped and their pose in the gripper. The system architecture is built using Robot Operating System (ROS) [56].The system runs a sequential state machine that commands four subsystems interfaced as ROS nodes: • Robot Controller: Interface developed for absolute positioning of an ABB robotic arm. • Grasp Controller: Interfaces the motor controller that drives the gripper.It also logs the grasp signature by capturing the state of the motor and finger encoders during the entire grasp motion. • Vision Interface: Provides ground truth for the learning system, including the number of markers grasped and their position within the hand. • Learning Interface: After offline training, the learning system classifies grasps as singulated or not singulated, and estimates marker orientation for singulated grasps. The robot follows a preprogrammed path to get in and out of the bin.While approaching the bin, the gripper slowly oscillates its orientation along the vertical axis with decreasing amplitude.The oscillation allows the fingers to penetrate the bin contents without jamming.The penetration depth was hand-tuned to improve the chances of obtaining a single marker.During departure, the gripper vibrates to reduce the effect of remaining friction and help the object settle in a stable configuration.Contact forces are not easily determined but in bench tests we observed forces ranging from three to seven newtons.During the experiments, we occasionally shook the bin to randomize and improve statistical independence of successive trials. For each prototype we run 200 repetitions of the experiment.The grasp signatures and outcomes make up the dataset used to evaluate the system in terms of singulation detection in Section 6.2 and pose estimation in Section 6.4.Table 2 shows the distribution of the number of markers grasped both with P1 and P2 and Figure 10 shows some representative singulated grasps. 
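The grasp-and-retry procedure described above can be summarized in a few lines; the robot, gripper, and classifier objects below are stand-ins for the ROS nodes listed earlier, and all method names are hypothetical:

```python
def pick_single_object(robot, gripper, classifier, max_attempts=50):
    """Iterate blind grasps until one singulated, recognizable grasp is found."""
    for _ in range(max_attempts):
        robot.move_into_bin(oscillate=True)    # oscillation prevents jamming
        signature = gripper.close_to_stall()   # log motor and finger encoders
        robot.retract(vibrate=True)            # vibration settles the object
        if classifier.is_singulated(signature):
            return signature                   # grasp first, questions answered
        gripper.open()                         # failure: drop back and retry
    return None
```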
Experimental Results: Grasp Classification

This section analyzes the experimental performance of grasp classification. We use a supervised learning approach to classify grasps as either successful (singulated) or failed, based on the grasp signature. After labeling each run of the experiment as success or failure, we train a Support Vector Machine (SVM) with a Gaussian kernel [20,13] to correctly predict singulation. In the case of P1, the classifier constructs a decision boundary in the sensor space (the three finger encoder values of the final gripper pose) by minimizing the number of misclassifications and maximizing the margin between the correctly classified examples and the separation boundary. Figure 11 shows the separation boundary found by the classifier.

For P2 the grasp signature is of much higher dimension. We use Principal Component Analysis (PCA) [64] to project the grasp signature onto a smaller set of linearly uncorrelated features, reducing its dimension and enabling learning with a relatively small set of training examples.

The performance of the system is evaluated using leave-one-out cross-validation. The hyperparameters C and γ are tuned using 10-fold cross-validation on the training set in each training round. The parameter C controls the misclassification cost, while γ controls the bandwidth of the similarity metric between grasp signatures. Both parameters effectively trade off fitting accuracy on the training set vs. generalizability. The analysis yields similar accuracies for P1 and P2: 92.9% and 90.5%, respectively.

To compare observing the full state of the hand (motor and finger encoders) with observing only the motor encoder, we train a new SVM for P2 where the feature vector contains only the motor signature. The accuracy of detecting singulation decreases in this case from 90.5% to 82%.

For the singulation system to be useful in a real bin-picking application it should be optimized to maximize precision (the ratio of true positives to those classified as positive), even to the detriment of recall (the ratio of true positives to the total number of positive elements in the dataset). Figure 12 shows the relationship between precision and recall, obtained by varying the relative weights of positive and negative examples when training the SVM for P1. The SVM optimized for accuracy achieves a recall of 0.89, but a precision of only 0.875; one out of eight positives is a false positive. By choosing an appropriate working point on the precision-recall curve, we can increase precision, reduce false positives, and obtain a slow but accurate singulation system.

Experimental Results: Early Failure Detection

While the grasp signature of P1 contains only the final sensor values, the grasp signature of P2 contains the entire time series. This gives us the possibility of early failure detection. Sometimes it becomes clear, long before the end of the grasp, that the grasp is doomed. If the robot can detect failure early, it can also abort the grasp early and retry.

To test early failure detection, we trained a classifier to predict success or failure at several times during the grasp motion. At each instant we train the classifier using only information available prior to that instant. Fig. 13 shows classifier accuracy as it evolves during the grasp, from random at the beginning to the already mentioned 90.5% at the end.
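A sketch of this classification pipeline using scikit-learn follows, with synthetic stand-in data in place of the recorded grasp signatures; the feature dimensions, class weights, and hyperparameter grids are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the dataset: X holds grasp signatures (one row per
# attempt), y marks singulated (1) vs. failed (0) grasps.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# PCA compresses the high-dimensional signature before the Gaussian-kernel SVM;
# class_weight shifts the precision/recall working point, as in Figure 12.
pipe = make_pipeline(PCA(n_components=10),
                     SVC(kernel="rbf", class_weight={0: 4.0, 1: 1.0}))
grid = GridSearchCV(pipe, {"svc__C": [1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)

# Leave-one-out evaluation with hyperparameters tuned inside each round.
pred = cross_val_predict(grid, X, y, cv=LeaveOneOut())
print("accuracy:", np.mean(pred == y))
```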
Experimental Results: In-hand Localization

In this experiment we estimate the marker orientation based on the grasp signature. We limit the analysis to those grasps that have singulated a marker, and assume that it lies flat on the palm of the gripper. This assumption holds well for P1 but is violated occasionally for P2, where the marker is sometimes caught on top of a finger or on top of a "knuckle" at the finger base.

We use Locally Weighted Regression [19] to estimate the orientation as a weighted average of the closest examples in the training set, where the weights depend exponentially on the distance between signatures. Because of the cylindrical shape of the marker, we only attempt to estimate its orientation up to the 180-degree symmetry.

Figure 14 shows a polar chart of the error distribution for P1. The leave-one-out cross-validation errors obtained for P1 and P2 are 13.0 degrees and 24.1 degrees, respectively. While no improvement of P2 over P1 can be expected for cylindrical shapes, the fact that it performs so much worse is unexpected. The most likely explanation relates to the self-clutter effect described earlier. Our numerical analysis models the fingers as lines, and neglects interactions among the fingers. In reality the fingers are thin cylinders, and they tend to stack up in unpredictable order. Marker interactions with the knuckles may contribute to the problem. All of these considerations introduce noise which appears to be worse for the four-fingered design of P2.

As in Section 6.2, we can be more cautious and allow the system to reject grasps when its confidence is low. We monitor the distance from any given testing grasp to the closest neighbors in the sensor space. Whenever that distance is too big, it means there is insufficient information to infer marker orientation confidently. By setting a threshold on that distance we can effectively change the tradeoff between average error and recall, as shown in Figure 15. By choosing an appropriate working point in Figure 15 we can lower the average error at the cost of the expected number of retries needed to obtain a recognizable grasp. Figure 16 illustrates the resulting predictions of the system for a working point yielding an expected error of 8 degrees. The left half of the figure shows the singulated grasps, and the right half shows the same grasps after using the estimated orientation to rotate the image to horizontal. Deviations from the horizontal correspond to errors in the regression.
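A minimal sketch of this localization step, assuming signatures are plain feature vectors; the bandwidth, the rejection threshold, and the toy data are illustrative choices:

```python
import numpy as np

def lwr_orientation(sig, train_sigs, train_angles_deg, tau=1.0, reject_at=None):
    """Locally weighted estimate of marker orientation (mod 180 degrees).

    Weights decay exponentially with distance between grasp signatures;
    the 180-degree symmetry is handled by averaging doubled angles.
    Returns None when the nearest training example is farther than reject_at.
    """
    d = np.linalg.norm(train_sigs - sig, axis=1)
    if reject_at is not None and d.min() > reject_at:
        return None                              # not enough nearby evidence
    w = np.exp(-d / tau)
    phi = np.deg2rad(2.0 * train_angles_deg)     # doubled angles: mod-180 -> mod-360
    c, s = np.sum(w * np.cos(phi)), np.sum(w * np.sin(phi))
    return (np.rad2deg(np.arctan2(s, c)) / 2.0) % 180.0

# Toy check with two neighbors at 10 and 20 degrees:
sigs = np.array([[0.0, 0.0], [1.0, 0.0]])
print(lwr_orientation(np.array([0.4, 0.0]), sigs, np.array([10.0, 20.0])))
```

Raising the rejection threshold trades recall for accuracy, exactly the working-point choice illustrated in Figure 15.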
Discussion This paper focused on a "let the fingers fall where they may"-"grasp first, ask questions later" approach to grasping.Near-blind grasping, expectation based on offline training, and haptic sensing are combined to address a manipulation problem with high clutter and high uncertainty.The approach leads in interesting and novel directions, such as using a slippery hand to minimize the number of stable poses.Whether a slippery hand is a good idea or not, the insight is valid: sometimes stability and capture are not the highest priority.Broad swaths of stable poses in the configuration space can work against recognition and localization.Thus the paper can be viewed as an exploration of design for perception.Perhaps more significantly, the paper can be viewed as an exploration of design for learning.The grasping process is generally very complicated.The map from initial conditions to grasp outcome can be extraordinarily complex, but for the simple hands studied in this paper, the map is so simple that its main features can be learned in just 200 trials. It might be possible to analyze the mechanics of our prototypes and develop a direct algorithm for interpreting the sensor data.Indeed, similar algorithms have been developed for pushing, squeezing, and grasping planar parts with parallel jaw grippers [45,30].However, those algorithms depend on numerous simplifying assumptions, including that the part is already singulated.Generalizing those algorithms to parts that are not singulated, and modeling the dynamic interactions with surrounding clutter, is well beyond the state of the art.The learning approach can handle even the extreme clutter challenge presented by bin-picking.Likewise the learning approach can deal with three dimensions, arbitrary shapes, unknown or uncertain shapes, and many other variations not addressed by research on direct algorithms. There are many interesting areas for future research: • Placing.Table 1 shows a checkmark in the "placing" column for bin-picking, but no checkmark for our prototype hands.Early experiments show that our prototypes can place highlighters, but much remains to be done. • Morphology.Our prototype hands have generic designs for palm and fingers.It is obvious that the stable pose distribution could be improved by applying some well-known design features, such as cutting v-shaped grooves across the palm. • Generalizations.Generality is one of the main goals of our work, so we plan to explore bin-picking with other shapes, and also to explore entirely different task domains, such as object retrieval by a domestic service robot. • Dimensions of grasping.Finally, we would like to improve on Table 1.A more objective and refined understanding of the dimensions of generality would help to make tradeoffs between generality and simplicity when designing a robotic hand.The field would benefit from a consensus on measures of generality to enable comparison between specific hand designs or task domains. A Grasp Potential Energy: A Linear Complementarity Problem Here we solve the linear complementarity problem that arises from the compliant coupling scheme in Figure 2. We assume a known object shape and pose, and calculate the potential energy of the grasp.Recall that θ m is the motor position as well as the resting position of each finger, and θ 1 . . .θ n are the finger positions.Also let the finger limit angle θ l i be the angle at which finger i would make first contact either with the object or the palm when closing. 
We adopt the convention that θ_m and θ_i increase in the closing direction. If finger i is not in contact, then θ_i = θ_m and that finger's torque is zero. If it is in contact, then θ_i = θ_i^l ≤ θ_m. In other words,

θ_i = min(θ_m, θ_i^l).   (2)

The motor torque τ_m balances the sum of the finger torques:

τ_m = Σ_{i=1}^{n} k_f (θ_m − θ_i).

Imposing that the motor torque equal the desired stall torque τ_s, constrained by Equation 2, yields a linear complementarity problem. To simplify the solution we renumber the fingers so that the finger limit angles θ_i^l are ordered from smallest to largest. Now suppose we increase the motor angle θ_m until the motor torque reaches τ_s. Before the first contact, the motor torque is 0. After the first contact and before the second, the torque increases linearly from 0 to k_f (θ_2^l − θ_1^l). By repeating the same process we find a series of torque limits, the highest torque attained before the next finger contact:

T_i = k_f Σ_{j=1}^{i} (θ_i^l − θ_j^l).

Let i_m be the largest i such that T_i is smaller than or equal to the desired stall torque τ_s. At the limit angle θ_{i_m}^l only fingers 1 to i_m contact the object, each finger i providing torque k_f (θ_{i_m}^l − θ_i^l), for a total of T_{i_m}. In the final grasp configuration, the remaining torque until motor stall, τ_s − T_{i_m}, is split evenly between those fingers. The final resting pose of the motor is then

θ_m = θ_{i_m}^l + (τ_s − T_{i_m}) / (i_m k_f).

We then substitute the value of the final motor angle in Equation 2 to get the finger angles θ_1, . . . , θ_n, and finally in Equation 1 to obtain the grasp potential energy.

Figure 1: (a) The common "pickup tool" is very simple, but also very effective in achieving stable grasps over a broad class of shapes. Four fingers of spring steel are driven by a single actuator. (b) Bin-picking scenario and prototype gripper P2 with four fingers and angle encoders for object recognition and localization.

Table 1 column headings: stability; capture; in-hand manipulation; object shape variation; multiple and deformable objects; recognition/localization; placing; clutter.

Figure 3: Potential field of a sphere grasped by (a) three-fingered and (b) four-fingered versions of the proposed simple hand. The plots illustrate the variation of the potential energy of the grasp with translation of the sphere in the x-y plane. The radius of the sphere is 0.5 while the radius of the palm is 1. The hand is driven to a stall torque of τ_m = 0.1. The contour plots illustrate that (0, 0) is an isolated stable pose for both grippers.

Figure 4: Potential field of a cylinder grasped by (a) three-fingered and (b) four-fingered versions of the proposed simple hand. The plots illustrate the variation of the potential energy of the grasp with displacements of the cylinder in the r-α space, where r and α are the radial coordinates of the axis of the cylinder from the center of the palm. The hand is driven to a stall torque of τ_m = 0.1. The contour plots yield six and four stable poses for the three- and four-fingered simple hands, respectively.

Figure 5: Potential field of a scaled 3-4-5 polyhedron grasped by (a) three-fingered and (b) four-fingered versions of the proposed simple hand. The plots illustrate the variation of the potential energy of the grasp with displacements of the polyhedron in the x-y plane, while holding orientation constant. The hand is driven to a stall torque of τ_m = 0.2.
Figure 6: Potential fields for a sphere held by P1. Stall torque increases by a factor of 10 from 0.01 at the top to 10 at the bottom. The grasps are stable for lower values of motor torque, and unstable for higher values.

Figure 7: Side and frontal view, and transmission mechanism, of gripper prototype P1.

Figure 8: Side and frontal view, and transmission mechanism, of gripper prototype P2.

Figure 9: Side-by-side comparison of the grasp signatures (only the 4 finger encoders) of representative (a) successful and (b) failed grasps with P2. The fingers begin the grasp perpendicular to the palm (0°) and reach the final position shown in the figures.

Figure 11: Perspective view and 2D projection (fingers 2 and 3) of the decision boundary found by the Support Vector Machine in the P1 finger encoder space. Dark dots are successful grasps and clear dots are failed grasps. The interior of the bounded region is classified as success. The finger symmetry of the proposed design in Figure 2 is not reflected in the plots: we introduced an offset between the finger rest positions to avoid the situation where all fingers make simultaneous contact and block the grasp.

Figure 14: Error distribution in the regression of the orientation of the marker for singulated grasps with P1.

Figure 15: Tradeoff between the average error in estimating the orientation of the marker and the recall of the bin-picking system for P1.

Figure 16: Orientation correction: (a) random subset of successful grasps; (b) images of the grasps have been rotated based on their estimated orientation in order to orient the marker horizontally.

Table 1: Dimensions of general-purpose grasping used to characterize manipulation tasks and systems. A check broadly indicates either a task requirement or a hand capability; an up or down marker indicates whether or not P2 improves on P1.

Table 2: Distribution of the number of markers grasped. The dataset captured to evaluate the system comprises 200 grasp attempts for each prototype gripper.
Poisson Group Testing: A Probabilistic Model for Boolean Compressed Sensing We introduce a novel probabilistic group testing framework, termed Poisson group testing, in which the number of defectives follows a right-truncated Poisson distribution. The Poisson model has a number of new applications, including dynamic testing with diminishing relative rates of defectives. We consider both nonadaptive and semi-adaptive identification methods. For nonadaptive methods, we derive a lower bound on the number of tests required to identify the defectives with a probability of error that asymptotically converges to zero; in addition, we propose test matrix constructions for which the number of tests closely matches the lower bound. For semi-adaptive methods, we describe a lower bound on the expected number of tests required to identify the defectives with zero error probability. In addition, we propose a stage-wise reconstruction algorithm for which the expected number of tests is only a constant factor away from the lower bound. The methods rely only on an estimate of the average number of defectives, rather than on the individual probabilities of subjects being defective. I. INTRODUCTION Group testing (GT), also known as Boolean compressed sensing, is a method for identifying a group of subjects with some distinguishable characteristic, frequently referred to as defectives, from a large group of entities [2], [3]. The gist of the GT approach is that for a small number of defectives, one can reduce the required number of experiments by testing subgroups of subjects rather than all individuals separately. Given its simple working principles and the potential for reducing the cost of component screening, GT has found many applications in areas as diverse as communication theory, signal processing, bioinformatics, mathematics, and machine learning [4]- [9]. The test model of the GT framework varies depending on the application at hand. The original setup, also known as conventional GT, was proposed by Dorfman [10], and involves logical OR computations on the test signatures. More precisely, in conventional GT, the result of a test is positive if there exists at least one defective in the test pool and negative otherwise. Many other models have been proposed in the literature, such as the adder channel, also known as quantitative GT [8], threshold GT [11], and symmetric GT [12]. More recent developments include the semi-quantitative group testing (SQGT) paradigm, which provides a unifying framework for a number of GT models and generalizes the notion of GT to nonbinary test matrices and non-binary test outcomes [13], [14]. In addition, GT is closely related to compressed sensing (CS) [15], [16] and integer CS [17]; the main differences lie in the structure of the alphabet used (R or C in CS, {0, 1} or a discrete set of integers for GT, and a bounded set of integers for integer CS) and the operations used to perform dimensionality reduction (addition in CS and integer CS, Boolean OR in conventional GT). The group testing literature may be divided into two categories based on how the number of defectives is modeled. In combinatorial GT, the number of defectives, or an upper bound on the number of defectives, is fixed and assumed to be known in advance [8]. On the other hand, in probabilistic GT (PGT), the number of defectives is a random variable with a given probability distribution [10]. 
With almost no exceptions, the PGT literature focuses on a Binomial(n, p_0) distribution for the number of defectives. Such a model arises when each of the n subjects is defective with a fixed probability 0 < p_0 < 1, independent of all other subjects. Binomial models are not necessarily sparse, given that p_0 may be a constant and given that the defective selection process is random. Here, we propose a novel GT paradigm, termed Poisson PGT, which models the distribution of the number of defectives via a right-truncated Poisson distribution with parameter λ(n) = o(n). Our motivation for this assumption comes from clinical testing, where one is interested in identifying infected individuals under the assumption that infections gradually die out. A similar scenario is encountered in screening DNA clones for the presence of certain DNA substrings, where the clones are test subjects and defectives are clones that contain the given substrings. The distribution of clones containing a given DNA pattern is frequently modeled as Poisson [8]. Other applications include testing genetic traits that are negatively selected for (i.e., traits that diminish in time, as they reduce the fitness of a species). The assumption λ(n) = o(n) ensures that the longer the waiting time or the larger the number of test subjects, the smaller the average relative fraction of defectives. In other words, the rate of defectives diminishes with time.

The Poisson PGT model has a number of useful properties that make it an important alternative to classical binomial models. Although a binomial distribution with p_0 ≪ 1 and a large n, where λ = np_0 is a constant, converges to a Poisson distribution with parameter λ = np_0 [18], our model allows the parameter λ(n) of the (truncated) Poisson distribution to grow with n; more precisely, the model and the results derived in this paper are valid even if lim_{n→∞} λ(n) = ∞, as long as lim_{n→∞} λ(n)/n = 0. Such a model is useful in settings where test subjects are assumed to arrive sequentially in time, and where tests are performed only once a sufficient number of subjects n is present. This model is also applicable to streaming and dynamic testing scenarios [19], in which the probability that a subject is defective decreases in time so that newly arriving subjects are less likely to be defective. In such a setting, classical Binomial(n, p_0) models are inadequate, as they assume that the probability p_0 of a subject being defective does not depend on the number of test subjects.

A number of papers have considered a Poisson model to capture the streaming dynamics of the arrivals of subjects to a test center [20], [21]. In contrast, our model does not make any assumptions on the distribution of the general subject population, but instead focuses on modeling the number of defectives using a right-truncated Poisson distribution. In addition, the focus of [20], [21] is on determining the total amount of time (delay) required to test a batch of subjects arriving at random times. However, here we concentrate on the completely unrelated problem of finding necessary and sufficient conditions on the smallest number of tests needed for accurate nonadaptive and semi-adaptive GT.

In addition, a number of papers have considered the problem of binomial group testing with different subjects having different probabilities of being defective. This line of work was introduced in [22] under the name generalized binomial group testing (GBGT).
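Before continuing, a quick numerical check of the binomial-to-Poisson convergence noted above (the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import binom, poisson

# Total-variation distance between Binomial(n, lam/n) and Poisson(lam):
# it shrinks as n grows, motivating the Poisson model for rare defectives.
lam = 5.0
for n in [50, 500, 5000]:
    k = np.arange(n + 1)
    tv = 0.5 * np.sum(np.abs(binom.pmf(k, n, lam / n) - poisson.pmf(k, lam)))
    print(n, tv)
```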
Recently, this problem has received renewed interest under the name of heterogeneous binomial group testing [23]. In [22], a two-stage algorithm for GBGT was proposed, resulting in a complicated minimization problem for the expected number of tests required; unfortunately, no closed-form expression, nor any simply calculable expression, was provided for the expected number of tests. In [24], a similar problem was considered in which the goal was to isolate a single defective in the GBGT model. For this problem, the authors proposed an optimal adaptive procedure using a binary testing tree, which was obtained for a set of weights that depend on the probabilities of the subjects being defective. In addition, an upper bound on the expected number of tests was provided in the form of a complicated sum. Other papers that consider the GBGT model include [23], [25], and [26]. As we explain in the next section, although related to our Poisson model through Le Cam's theorem, GBGT operates under very different prior knowledge assumptions and cannot be considered within the same analytical framework. The main contributions of this work are three-fold. First, we introduce a novel probabilistic GT model with applications in streaming and dynamic testing scenarios. This model generalizes probabilistic group testing models beyond the binomial GT paradigm with constant p 0 and other models previously considered in the literature. Second, we bridge the gap between combinatorial GT and probabilistic GT methodology by showing how the algorithms and analytical tools developed for combinatorial GT can be generalized and adapted for probabilistic GT. To the best of our knowledge, this is the first attempt to analyze combinatorial GT and probabilistic GT within the same framework. Finally, we derive closely matching lower and upper bounds on the number of tests required for finding the defectives in Poisson testing using both nonadaptive and semi-adaptive algorithms. The paper is organized as follows. Section II introduces the Poisson GT model, while Sections III and IV describe the main results of the paper. A summary of the results and an accompanying discussion are provided in Section V. In Section III, we first use an adaptation of Fano's inequality to find a lower bound on the number of nonadaptive tests required to identify defectives under the Poisson PGT model, with a probability of error converging to zero as the number of subjects grows. We then proceed to describe a simple nonadaptive method based on binary disjunct matrices [27]. The test matrix is constructed probabilistically, with the entries of the matrix being independent and Bernoulli distributed. Given that the number of tests obtained via this method does not tightly match the lower bound, we describe an alternative nonadaptive method with a number of tests differing from the lower bound by only an arbitrary slowly-growing function in n. The test matrix in this method does not rely on the disjunctness property and the entries of the matrix are not i.i.d. distributed. In Appendix A, we use information-theoretic arguments to derive a tight upper bound on the number of nonadaptive tests for Poisson PGT. Following the practice of Binomial probabilistic group testing, in Section IV we use Huffman coding to find a lower bound on the expected number of tests required by adaptive and semi-adaptive methods to identify the defectives with zero error probability. 
Then, we show that a simple semi-adaptive algorithm identifies all the defectives with an expected number of tests only a constant factor away from the lower bound.

II. PROBLEM SETUP

Throughout the paper we adopt the following notation. Bold-face uppercase and bold-face lowercase letters denote matrices and vectors, respectively. Simple uppercase letters are used to denote random matrices, random vectors, and random variables; similarly, simple lowercase letters are used for scalars. Calligraphic letters are used to denote sets. For simplicity, we often write X = {x_i}_{i=1}^{s} for a set of s codewords, X = {x_1, x_2, . . . , x_s}. The symbols log(·) and log_2(·) are used to denote the natural logarithm and the base-2 logarithm, respectively. For a finite integer K ≥ 1, we also make use of the K-fold logarithm function, defined as

log^{(K)} n ≜ log log · · · log n,   (1)

where the logarithm is applied K times. Note that for K > 1, this function grows slower than log(n). Asymptotic symbols such as o(·) and O(·) are used in the standard manner; more precisely, we say that f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0, and f(n) = O(g(n)) if lim sup_{n→∞} f(n)/g(n) < ∞.

Let S denote the set of test subjects with cardinality n, among which a subset D of subjects is defective. In the Poisson PGT model, we assume that the number of defectives follows a right-truncated Poisson distribution with parameters λ(n) and n, i.e.,

P(D = d) = c(n) e^{−λ(n)} λ(n)^d / d!,   d = 0, 1, . . . , n.

Here, D = |D| denotes the number of defectives and c(n) = e^{λ(n)} / Σ_{d=0}^{n} (λ(n)^d / d!) is a normalization coefficient. Note that c(n) is a decreasing function of n, such that lim_{n→∞} c(n) = 1. In addition, we assume that all subsets of S with equal cardinality have the same probability of being defective. This assumption is used to model the setup in which, given D = d, the decoder has no information as to which set of cardinality d is most likely to be the set of defectives. Let λ̄(n) be the expected number of defectives in the model. It can be easily verified that

λ̄(n) = λ(n) (Σ_{d=0}^{n−1} λ(n)^d / d!) / (Σ_{d=0}^{n} λ(n)^d / d!).

A right-truncated Poisson distribution is closely related to a finite-support version of the non-uniform Bernoulli model on the set of test subjects, in which the i-th subject is defective with probability p_i, 0 ≤ p_i ≤ 1, independent of all other test subjects. From Le Cam's theorem [28], it may be deduced that the number of defectives D under this model satisfies

Σ_{d≥0} | P(D = d) − e^{−λ(n)} λ(n)^d / d! | < 2 Σ_{i=1}^{n} p_i²,

where λ(n) = Σ_{i=1}^{n} p_i. As an example, one can choose p_i = c/i, for some constant c > 0, to arrive at a model where individual subjects have probabilities of being defective that decrease in i, so that λ(n) = O(log n). The approximation error with respect to the Poisson distribution scales as 2c²ζ(2) = c²π²/3, where ζ(·) denotes the Riemann zeta function. By choosing c sufficiently small, the approximation error can be reduced to any desired positive level. Although adaptive and other classes of non-uniform Bernoulli models were reported in the literature [22], [24], [29], [30], they rely on the exact knowledge of each probability p_i, i = 1, . . . , n. However, even in applications in which a subject is defective independently from all other subjects, estimating each of the p_i values may be prohibitively difficult. In contrast, Poisson PGT only makes use of a single aggregate value of the probabilities, λ(n), which is less informative but usually much easier to estimate.

In the GT framework, each test is performed on a subset of the subjects, and the result of a test equals 1 if at least one defective is present in the test, and 0 otherwise. The total number of tests is denoted by m. For non-adaptive PGT, despite the fact that the defectives are chosen randomly, the number of tests is deterministic.
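A numerical sketch of the model (scipy supplies the Poisson pmf; the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import poisson

def truncated_poisson_pmf(lam, n):
    """pmf of the right-truncated Poisson distribution on {0, ..., n}."""
    p = poisson.pmf(np.arange(n + 1), lam)
    return p / p.sum()                    # the renormalization plays the role of c(n)

lam, n = 5.0, 100
pmf = truncated_poisson_pmf(lam, n)
lam_bar = np.dot(np.arange(n + 1), pmf)   # expected number of defectives
print(lam_bar)                            # close to lam once n >> lam

# Le Cam check for the heterogeneous model p_i = c/i:
c = 0.2
p_i = c / np.arange(1, n + 1)
print(p_i.sum(), 2 * np.sum(p_i**2))      # lambda(n) and the TV error bound
```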
The question of interest is to find the smallest number of tests that guarantees a probability of detection error that converges to zero asymptotically with the number of subjects n. In contrast, adaptive and semi-adaptive algorithms, in which tests are performed sequentially or grouped into different stages, with the choices at one stage used to inform the choices at the following stages, call for a random number of tests. The goal is then to compute the expected number of tests that allows for zero error probability. In this paper, we focus on nonadaptive and semi-adaptive testing schemes, and our main results are summarized and discussed in Section V.

III. NONADAPTIVE METHODS FOR POISSON PGT

Nonadaptive group testing refers to group testing methods in which all tests are designed simultaneously. In other words, in nonadaptive GT the choice of a test is not allowed to depend on the outcomes of previous tests [8]. The main advantage of nonadaptive methods is that all the tests can be performed in parallel, which is of great practical importance for large-scale problems. A clear disadvantage compared to adaptive methods is the sometimes significant increase in the number of tests.

In nonadaptive GT, the assignment of subjects to different tests is usually specified via a binary matrix termed the test matrix, C ∈ {0, 1}^{m×n}, where m denotes the number of tests and n denotes the number of subjects. If C(i, j) = 1, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, the j-th subject is present in the i-th test; on the other hand, if C(i, j) = 0, then the j-th subject is excluded from the i-th test. Throughout this paper we use the terms "code" and "test matrix" interchangeably; given this definition, a codeword refers to a column of the test matrix. The test results are captured by a vector y ∈ {0, 1}^m, frequently referred to as the vector of test results or syndrome. It can be easily observed that the vector of test results is equal to the Boolean OR of the columns of C corresponding to the defectives. Fig. 1 illustrates the notion of a test matrix, the set of defectives, and the vector of test results. Note that S_i denotes the i-th subject in S.

For a fixed test matrix on n subjects, C, and a decoding algorithm f : (C, y) → D̂, let E(n) denote the event that the decoding algorithm cannot identify the set of defectives, i.e. the event that f(C, y) ≠ D. The ultimate goal of most combinatorial nonadaptive GT methods is to ensure that P(E(n)) = 0. Due to the probabilistic nature of the Poisson PGT model, any subset of subjects may be defective with a non-zero probability. As a result, since in nonadaptive GT each test is designed independently from previous tests, for any fixed test matrix C with m < n, one can always find a choice of D for which f(C, y) ≠ D.

To verify the correctness of this claim, consider a fixed test matrix C and a set of subjects S such that each column of C is assigned to one subject in S; for any set D′ ⊆ S, let y_{D′} denote the Boolean OR of the columns of C corresponding to D′. Since in Poisson PGT each subset of S may correspond to the set of defectives with a nonzero probability, in order to ensure P(E(n)) = 0, the test matrix must be able to distinguish between any two distinct subsets of S; in other words, for any two distinct sets D_1, D_2 ⊆ S, we must have y_{D_1} ≠ y_{D_2}. Since in total there exist Σ_{d=0}^{n} (n choose d) = 2^n choices for the set of defectives, at least m = n tests are required (i.e. one has to test each subject individually).
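For concreteness, a minimal sketch of the test model: the syndrome is the Boolean OR of the defective columns (the small matrix below is arbitrary):

```python
import numpy as np

def syndrome(C, defectives):
    """Vector of test results: Boolean OR of the defective columns of C."""
    if len(defectives) == 0:
        return np.zeros(C.shape[0], dtype=int)
    return np.any(C[:, sorted(defectives)] == 1, axis=1).astype(int)

C = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
print(syndrome(C, {0, 3}))   # tests containing subject 0 or 3 read positive
```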
The discussion above implies that for Poisson PGT, there does not exist a nonadaptive test matrix with fewer than n rows and an accompanying decoding algorithm for which P(E(n)) = 0; as a result, we instead focus on the requirement that the test matrix satisfy the asymptotic condition

lim_{n→∞} P(E(n)) = 0.   (5)

In what follows, we propose two nonadaptive test matrix constructions and decoding algorithms that guarantee that the aforementioned condition is met. In order to evaluate how effectively each method uses its tests, we first find a lower bound on the minimum number of tests required by a nonadaptive algorithm to ensure (5), and then use this bound as a benchmark. The constructive methods provide upper bounds on the minimum number of tests.

In Section III-A, we use Fano's inequality [31] to find a lower bound on the number of tests of the form m = Ω(λ(n) log_2(n/λ(n))); here and throughout, ε denotes an arbitrarily small fixed scalar such that 0 < ε < 1. In Section III-B1, we propose a test matrix construction using binary disjunct matrices (Method I). The entries of the test matrix are chosen according to an i.i.d. distribution, and the method requires m = C_2 κ(n) λ(n)² log_2 n (1 + o(1)) measurements, where κ(n) is an arbitrarily chosen slowly-growing function of n. Given the gap between the number of tests in Method I and the lower bound, we propose another method in Section III-B2 that requires only m = C_1 κ(n) λ(n) log_2 n (1 + o(1)) tests (Method II). This method is also based on a probabilistic construction; however, the entries of the test matrix no longer follow an i.i.d. distribution. One major difference between these two methods is that Method I uses the disjunctness property [2], [27], while for Method II this constraint is relaxed. Both of these constructions can be extended to identify the set of defectives in the presence of errors in the vector of test results, and both employ a decoding algorithm with computational complexity O(mn). In Appendix A, we use a standard information-theoretic approach, combined with a maximum likelihood decoder, to determine an upper bound on the minimum number of tests required by any nonadaptive method based on an i.i.d. test matrix. The number of tests using this approach tightly matches the lower bound under some constraints on the growth of λ(n) with respect to n. A summary of these results is provided in Section V.

A. Lower bound on the minimum number of tests

Let D̂ be the set of defectives recovered by some decoding algorithm using a fixed test matrix C ∈ {0, 1}^{m×n}, and, given D = d, let E_d denote the corresponding error event. Using Fano's inequality [31], one has

H(D | Y, C, D = d) ≤ 1 + P(E_d) log_2 (n choose d),   (6)

where H(·) denotes the Shannon entropy function [31]. Since, conditioned on D = d, the set of defectives D is chosen uniformly at random, independently of C, one has

H(D | C, D = d) = log_2 (n choose d).   (7)

Using the definition of mutual information [31], we may write

I(D; Y | C, D = d) = H(Y | C, D = d) − H(Y | C, D, D = d) ≤ H(Y) ≤ m,   (8)

where the inequality follows since conditioning reduces entropy; also, the test results Y only depend on the codewords assigned to the set D, and hence H(Y | C, D, D = d) = H(Y | C_D, D, D = d) = 0, where C_D is the set of columns of C corresponding to D. Substituting (6) and (7) in (8) yields

m ≥ I(D; Y | C, D = d) = H(D | C, D = d) − H(D | Y, C, D = d) ≥ (1 − P(E_d)) log_2 (n choose d) − 1.   (9)

On the other hand, (9) yields the following chain of inequalities:

P(E_d) ≥ 1 − (m + 1) / log_2 (n choose d).

Since P(E) = E_D[P(E_d)], (9) may be used to find a lower bound on m that ensures P(E) = o(1), as formally stated in the next theorem.

Theorem 1. Let 0 < ε < 1 be an arbitrarily small fixed scalar, and suppose that λ(n) = o(n). Any nonadaptive group testing method designed for Poisson PGT that satisfies lim_{n→∞} P(E(n)) = 0 requires at least m ≥ (1 − ε) λ(n) log_2(n/λ(n)) (1 + o(1)) tests.

Proof. Let 0 < ε < 1. Then, since λ(n) = o(n), for large enough values of n, λ(n)(1 + ε) < n.
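To give a rough sense of the gap between the bounds quoted above, a small numeric comparison (the constants and the λ(n) regime are illustrative only, and all leading constants are omitted):

```python
import numpy as np

eps = 0.1
for n in [10**4, 10**6, 10**8]:
    lam = np.log(n)                          # example regime: lambda(n) = O(log n)
    kappa = np.log(np.log(n))                # a slowly growing kappa(n)
    lower = (1 - eps) * lam * np.log2(n / lam)
    method_I = kappa * lam**2 * np.log2(n)   # Method I scaling, constants omitted
    method_II = kappa * lam * np.log2(n)     # Method II scaling, constants omitted
    print(f"n={n:.0e}  lower~{lower:.0f}  I~{method_I:.0f}  II~{method_II:.0f}")
```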
On the other hand, applying (9) to values of d in the typical range of D yields a necessary condition for lim_{n→∞} P(E) = 0, stated in (10), where the second inequality is a consequence of Equation (9). Using the Chernoff bound for a standard Poisson distribution, it may be shown that the probability mass of D concentrates below λ(n)(1 + ε); combining this with the above inequalities, we obtain (11). On the other hand, for n > 2(1 − ε)λ(n), the binomial coefficient $\binom{n}{d}$ can be bounded from below, which gives (12). Substituting (11) and (12) in (10), a necessary condition for lim_{n→∞} P(E) = 0 is obtained, which can be further simplified to the bound claimed in the theorem, where the last equality follows since λ(n) = o(n).

B. Constructive Upper Bounds on the Minimum Number of Tests

We describe next two nonadaptive methods for Poisson PGT and find the number of tests that ensures lim_{n→∞} P(E) = 0. For this purpose, we consider two separate asymptotic regimes for λ(n): one in which λ(n) = o(n) and lim_{n→∞} λ(n) = ∞, and another in which λ(n) = o(n) and 0 < lim_{n→∞} λ(n) < ∞. Note that the case of constant λ is covered by the latter scenario. We start by proving the following simple large-deviations results, which we find useful in our subsequent derivations.

Lemma 1. Let D be a random variable following the right-truncated Poisson distribution, with λ(n) = o(n) and lim_{n→∞} λ(n) = ∞. Then, for any fixed ε > 0, one has lim_{n→∞} P(D > (1 + ε)λ(n)) = 0.

Proof. Using Markov's inequality, one can bound P(D > (1 + ε)λ(n)) by a quantity that vanishes as n grows, where the last claim follows since lim_{n→∞} λ(n) = ∞.

Although this lemma is applicable when lim_{n→∞} λ(n) = ∞, for the case when 0 < lim_{n→∞} λ(n) < ∞ (including the case when λ is a constant) the above arguments do not follow through. For this case, we prove a lemma in which a slowly-growing function of n, i.e. β(n) = log^{(K)} n defined in (1), is used to provide the needed guarantees.

Lemma 2. Let D be a random variable following the right-truncated Poisson distribution, with λ(n) = o(n) and 0 < lim_{n→∞} λ(n) < ∞. Then lim_{n→∞} P(D > β(n)λ(n)) = 0.

Proof. Using Markov's inequality, one has P(D > β(n)λ(n)) ≤ E[D]/(β(n)λ(n)) ≤ 1/β(n), where the last claim follows since E[D] ≤ λ(n) for the right-truncated distribution and lim_{n→∞} β(n) = ∞.

1) Nonadaptive method I: In our first construction, we use disjunct codes to devise practical Poisson PGT schemes. We start with the following definition.

Definition 1 (Binary ∆-disjunct codes [27]). A binary ∆-disjunct code for conventional GT is a code of length m and size n such that for any set of ∆ + 1 codewords, X = {x_j}_{j=1}^{∆+1}, and for any codeword x_i ∈ X, there exists at least one coordinate k such that x_i(k) = 1 and x_j(k) = 0 for every x_j ∈ X with j ≠ i.

It is well known that binary ∆-disjunct codes are capable of identifying up to ∆ defectives in the conventional GT model. In addition, these codes are endowed with an efficient decoder with computational complexity O(mn). The decoding procedure is based on the fact that a codeword corresponds to a defective if and only if its support is a subset of the support of the vector of test results, y. Hence, given y and C, the set of defectives may be identified with zero probability of error through

D̂ = {i : supp(x_i) ⊆ supp(y)},   (13)

where x_i is the i-th column of C and supp(·) stands for the support of a vector (i.e. the set of its nonzero entries). We consider a simple probabilistic construction for the test matrix: the entries of the test matrix follow an i.i.d. Bernoulli(p) distribution, such that each entry of C is equal to 1 with probability p, and 0 with probability 1 − p. Let ∆ = ∆(n, λ(n)) be a properly chosen function of n and λ(n). The idea is to identify m, p and ∆ so that C is a ∆-disjunct matrix with high probability, while at the same time the probability that the number of defectives exceeds ∆ is small, as formally stated in the following theorem.

Proof. For any value of ∆ > 0, we may write P(E) as

P(E) ≤ P(E | D ≤ ∆) + P(D > ∆).

In order to bound P(E | D ≤ ∆), we use the following argument. The test matrix is constructed in a probabilistic, i.i.d.
manner using the Bernoulli(p) distribution. Given a fixed test matrix C and a vector of test results y, we use the decoder in (13) to find D̂. Let E′ be the event that C is not ∆-disjunct. Since a ∆-disjunct test matrix can identify up to ∆ defectives with zero error probability, conditioned on D ≤ ∆ one has E ⊆ E′. As a result, P(E | D ≤ ∆) ≤ P(E′ | D ≤ ∆) = P(E′), where the last equality follows since the events E′ and {D ≤ ∆} are independent.

The previous theorem relies on the assumption that lim_{n→∞} λ(n) = ∞. A similar approach can be used for the case 0 < lim_{n→∞} λ(n) < ∞, as described in the theorem to follow. Theorems 2 and 3 do not account for the presence of errors in the vector of test results. In order to address this issue, we invoke the following definition of an error-tolerant binary disjunct code.

Definition 2 (Error-tolerant binary ∆-disjunct codes [2]). A binary ∆-disjunct code designed for conventional GT, capable of correcting up to v errors, is a code of length m and size n such that for any set of ∆ + 1 codewords, X = {x_j}_{j=1}^{∆+1}, and for any codeword x_i ∈ X, there exists a set of coordinates R_i of size at least 2v + 1 such that for all k ∈ R_i, x_i(k) = 1 and x_j(k) = 0 for every x_j ∈ X with j ≠ i.

In order to identify the set of defectives using these codes with zero error probability, we use the decoder

D̂ = {i : |supp(x_i) \ supp(y)| ≤ v},

which declares a subject defective when at most v entries of its codeword support are missing from the support of y.

Proof. Similar to the proof of Theorem 2, we may write P(E) ≤ P(E | D ≤ ∆) + P(D > ∆), for any value of ∆ > 0. Lemma 1 can be used directly to show that the second term vanishes for ∆ = λ(n)(1 + ε). In order to bound P(E | D ≤ ∆), the approach of [8, Thm. 8.1.3] used in Theorem 2 can be generalized to show that P(E | D ≤ ∆) ≤ P(E′ | D ≤ ∆) = P(E′), where E′ is the event that C is not a v-error-correcting ∆-disjunct test matrix. To bound P(E′), we first fix a set of column indices I with |I| = ∆ + 1 and let k ∈ I be fixed. There are (∆ + 1)$\binom{n}{\Delta+1}$ ways to choose k and I. For a fixed choice of I and k, and for every j ∈ {1, 2, ..., m}, let N_j be a Bernoulli random variable that takes the value 1 if the j-th row of C has a 1 in the k-th column while having 0 in each column indexed by I \ {k}, and the value 0 otherwise. By definition, the random variables N_j are i.i.d., and for 1 ≤ j ≤ m one has

π_N := P(N_j = 1) = p(1 − p)^∆.

Using the Chernoff bound for binomial random variables, for 0 < δ < 1 one obtains

P( ∑_{j=1}^{m} N_j ≤ (1 − δ) m π_N ) ≤ exp(−δ² m π_N / 2),

which provides an upper bound on the probability that, for a fixed I and k, at most 2v rows of C satisfy the disjunctness property. As a result, a union bound over the (∆ + 1)$\binom{n}{\Delta+1}$ choices of I and k yields an upper bound on P(E′). Hence, m = (2(∆ + 1)/π_N) log n + 4v/π_N tests suffice to ensure lim_{n→∞} P(E′) = 0. Substituting π_N ≥ 1/(e(∆ + 1)), obtained for p = 1/(∆ + 1), gives

m ≤ 2e(∆ + 1)((∆ + 1) log n + 2v) = 2e(λ(n)(1 + ε))((λ(n)(1 + ε)) log n + 2v).

2) Nonadaptive method II: In [32], Cheng and Du described the construction of a probabilistic test matrix for the nonadaptive combinatorial GT model, and proved that their test matrix can identify up to ∆ defectives from n subjects with high probability. Although the underlying codes are not binary disjunct codes, the decoder in (13) can be used to identify the defectives with high probability. The construction consists of two steps: in the first step, a nonbinary test matrix with i.i.d. entries is created; in the second step, a transformation is used to convert this nonbinary matrix into a binary matrix [32, Thm. 1]. One should note that, as a consequence of this transformation, the entries of the binary test matrix are no longer i.i.d. We use this construction technique to identify the set of defectives in Poisson PGT, and achieve this with a suitable choice of ∆.
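Both constructions are decoded with the cover decoder of (13). A minimal Python sketch of the i.i.d. Bernoulli(p) construction of Method I together with this decoder follows; the parameter values are illustrative and are not the optimized choices prescribed by the theorems:

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_matrix(m: int, n: int, p: float) -> np.ndarray:
    """i.i.d. Bernoulli(p) test matrix, as used in the Method I construction."""
    return (rng.random((m, n)) < p).astype(int)

def cover_decoder(C: np.ndarray, y: np.ndarray) -> set:
    """Decoder of (13): declare subject i defective iff supp(x_i) is a subset of supp(y).
    One pass over the matrix, i.e. O(mn) complexity."""
    # Column i is covered by y iff it has no 1 in a row where y is 0.
    return {i for i in range(C.shape[1]) if not np.any(C[:, i] & (1 - y))}

# Toy run with noiseless tests.
n, m, p = 200, 60, 0.05
C = bernoulli_matrix(m, n, p)
D = {3, 57, 120}
y = np.bitwise_or.reduce(C[:, sorted(D)], axis=1)
# The true defectives are always contained in the decoder output; the output
# equals D exactly when C is sufficiently disjunct for this number of defectives.
print(D <= cover_decoder(C, y))  # True
```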
The following lemma is a restatement of the results in [32, Thm. 10], suitable for our application.

Lemma 3. The nonadaptive group testing method in [32] can identify up to ∆ defectives among n subjects, using no more than 3∆ log₂ 3 (log₂ n + log₂(1/(1 − p))) tests, with probability at least p.

Proof. See [32, Thm. 10] and its proof.

Next, we show how this pooling design can be used to identify the set of defectives in Poisson PGT, while ensuring a probability of error that diminishes asymptotically.

Theorem 6. Assume that D follows the right-truncated Poisson distribution, with λ(n) = o(n) and lim_{n→∞} λ(n) = ∞. Then one can identify the set of defectives such that lim_{n→∞} P(E) = 0, using m = 3 log₂ 3 · λ(n)(1 + ε) log₂ n (1 + o(1)) tests.

IV. SEMI-ADAPTIVE METHODS FOR POISSON PGT

An alternative to both adaptive and nonadaptive GT approaches is semi-adaptive testing. A semi-adaptive GT algorithm is an algorithm in which tests are designed in several stages. The tests in each stage are constructed in a nonadaptive manner and can therefore be performed in parallel. However, the set of subjects on which the tests are performed changes from one stage to the next; in other words, the results obtained during one stage of testing may guide the choice of test subjects and potential defectives in the next stage. One of the best known semi-adaptive algorithms is the original 2-stage algorithm proposed by Dorfman [10]. In the absence of errors, a semi-adaptive algorithm is expected to identify all defectives, even if no prior knowledge regarding the number of defectives is available. As a result, unlike the case of nonadaptive algorithms, in which one seeks a number of tests m for which lim_{n→∞} P(E) = 0, in the semi-adaptive framework one is interested in the expected number of tests m̄ that an algorithm performs in order to identify the defectives with zero probability of error, i.e., with P(E) = 0. In what follows, we first find a lower bound on m̄ for any adaptive (and hence, semi-adaptive) algorithm for Poisson PGT using Huffman coding. Then, we devise a semi-adaptive algorithm and show that for this algorithm, m̄ is only a constant factor away from the lower bound.

A. Lower Bound on the Expected Number of Tests

Suppose that the number of defectives follows the truncated Poisson distribution; in addition, assume that for any fixed 1 ≤ d ≤ n, all the sets of D = d defectives are equally likely. In what follows, we show that one can use Huffman source coding [31] to find a lower bound on the expected number of adaptive tests required to identify the defectives. Let w ∈ {0, 1}^n be a binary random vector such that w(i) = 1 if the i-th subject is defective, and w(i) = 0 otherwise. There are 2^n choices for w, contained in a set denoted by W. An adaptive GT algorithm has to identify the true realization of w, denoted by w_t, using a set of tests. Each such test can be represented as a "yes/no" query of the form "is w_t a member of the set W′?", where the set W′ ⊆ W is determined by the design of the test. For example, for n = 5, the query corresponding to a test that contains the first, the fourth and the fifth subjects asks if w_t ∈ W′, where

W′ = {w ∈ W : w(1) = w(4) = w(5) = 0}.

If the output of the test is 0, the answer to the query is "yes", since none of the three subjects in the test are defective and therefore w_t ∈ W′; otherwise, the answer to the query is "no", which implies that w_t ∈ W \ W′. On the other hand, it can be easily verified that not every possible subset query corresponds to a group test [8], [34].
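The minimum expected number of such queries is governed by the entropy of the source w, as made precise in the derivation that follows. The resulting quantity, H(w) = H(D) + E_D[log₂ binom(n, D)], is easy to evaluate numerically; a minimal sketch, assuming the right-truncated Poisson distribution is the Poisson(λ(n)) law conditioned on D ≤ n (our reading of the model definition):

```python
import math

def truncated_poisson_pmf(lam: float, n: int) -> list:
    """Poisson(lam) conditioned on the event D <= n, computed in log-space
    to avoid overflow for large n."""
    logw = [d * math.log(lam) - lam - math.lgamma(d + 1) for d in range(n + 1)]
    top = max(logw)
    w = [math.exp(x - top) for x in logw]
    z = sum(w)
    return [x / z for x in w]

def adaptive_lower_bound(lam: float, n: int) -> float:
    """H(w) = H(D) + E_D[log2 binom(n, D)]: entropy of the defective-indicator
    vector, which lower-bounds the expected number of adaptive tests."""
    pmf = truncated_poisson_pmf(lam, n)
    h_d = -sum(p * math.log2(p) for p in pmf if p > 0.0)
    h_set = sum(p * math.log2(math.comb(n, d)) for d, p in enumerate(pmf) if p > 0.0)
    return h_d + h_set

print(round(adaptive_lower_bound(lam=5.0, n=1000), 1))  # on the order of lam*log2(n/lam) bits
```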
As a result, the minimum expected number of subset queries required to identify w_t provides a lower bound on the minimum expected number of group tests required to identify w_t in an adaptive manner. One should note that the minimum expected number of queries of the form above is equal to the expected length of a Huffman code designed for a source with alphabet W and the corresponding probability distribution [31]. For a fixed 0 ≤ d ≤ n, let w_{d,j}, j = 1, 2, ..., $\binom{n}{d}$, be a realization of w with exactly d entries equal to 1. As a result, the alphabet of the source w is of the form W = {w_{d,j}}, j = 1, 2, ..., $\binom{n}{d}$, d = 0, 1, ..., n. It follows that for all 0 ≤ d ≤ n and for all 1 ≤ j ≤ $\binom{n}{d}$,

P(w = w_{d,j}) = P(D = d) / $\binom{n}{d}$.   (15)

Proof. To prove this theorem, we note that the Shannon entropy [31] of the source, H(w), provides a lower bound on the average length of the optimum Huffman code. Consequently, using (15), one has

m̄ ≥ H(w) = H(D) + E_D[log₂ $\binom{n}{D}$].

B. Constructive Upper Bound on the Expected Number of Tests Using an s-Stage Algorithm

In [33], Li proposed an s-stage algorithm to identify d defectives in a combinatorial group testing framework. In what follows, we modify his algorithm and show that the expected number of tests performed by s-stage testing allows one to find all the defectives in a Poisson PGT model, while being only a constant away from the lower bound of Theorem 8. Let s = s(n, λ(n)) denote the total number of stages. Also, let S_i, 1 ≤ i ≤ s, be the set of potential defectives at stage i on which the group tests are performed. In the first stage, we set S_1 = S, where S is the set of all subjects, |S| = n. Then, we randomly divide S_1 into disjoint sets of size k_1, where k_1 = k_1(λ(n), n). If k_1 does not divide |S_1|, one set will contain fewer than k_1 entries, equal to the remainder of dividing |S_1| by k_1. A test is performed on each of these sets independently. In the second stage, S_2 is formed by pooling all the subjects in sets with a positive test outcome in the first stage. Similarly, the set S_2 is randomly divided into disjoint sets of size k_2. Again, one set may contain fewer subjects than the others, and a test is performed on each set. The procedure continues in the same manner up to stage s − 1. In the last stage, S_s is formed by pooling all the subjects in sets with a positive test outcome at stage s − 1; then, each remaining subject is tested individually to determine whether it is defective. The following theorem shows that proper choices of s and k_i, 1 ≤ i ≤ s − 1, guarantee that the expected number of tests performed by this algorithm is upper bounded by a value only a constant away from the lower bound.

Proof. Assume that D = d is the number of defectives. In the first stage, divide the test subjects into disjoint groups of size k_1. This leads to ⌈n/k_1⌉ tests. In the i-th stage, 1 ≤ i ≤ s − 2, at most d tests are positive, with the upper bound achieved when each defective is in a different group; as a result, the number of remaining subjects and the number of tests in the (i + 1)-th stage are at most d k_i and ⌈d k_i / k_{i+1}⌉, respectively. In the last stage, the number of remaining subjects and the number of tests both equal d k_{s−1}. Hence, the total number of tests is bounded as

m ≤ ⌈n/k_1⌉ + ∑_{i=2}^{s−1} ⌈d k_{i−1}/k_i⌉ + d k_{s−1}.

Consequently, since s and k_i, 1 ≤ i ≤ s − 1, do not depend on d, one has

m̄ ≤ n/k_1 + λ(n) ( ∑_{i=2}^{s−1} k_{i−1}/k_i + k_{s−1} ) + (s − 1).

Substituting s and k_i in the previous expressions, one obtains

m̄ ≤ e(s − 1)λ(n)(1 + o(1)) + λ(n)(e² + log(n/λ(n)))(1 + o(1)).
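The stage-wise procedure is simple to simulate, which also makes the test-counting argument in the proof concrete. A Python sketch follows; the number of stages and the group sizes below are illustrative, not the optimized choices from the theorem above:

```python
import random

def s_stage_tests(n: int, defectives: set, group_sizes: list) -> int:
    """Simulate the modified s-stage splitting algorithm and count the tests used.
    group_sizes holds k_1, ..., k_{s-1}; in the last stage the surviving subjects
    are tested individually."""
    pool = list(range(n))
    tests = 0
    for k in group_sizes:
        random.shuffle(pool)
        survivors = []
        for start in range(0, len(pool), k):
            group = pool[start:start + k]
            tests += 1
            if any(s in defectives for s in group):  # positive group test
                survivors.extend(group)
        pool = survivors
    tests += len(pool)  # final stage: individual testing
    return tests

random.seed(1)
n = 10_000
D = set(random.sample(range(n), 6))
print(s_stage_tests(n, D, group_sizes=[400, 20]))  # a 3-stage run
```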
V. SUMMARY OF THE RESULTS AND DISCUSSION

In the previous sections, we introduced the Poisson probabilistic group testing framework, in which the number of defectives is modeled by a random variable following a right-truncated Poisson distribution. For the proposed model, and under the assumption that λ(n) = o(n), we considered nonadaptive and semi-adaptive methods to identify the defectives. These methods are based on generalizations of combinatorial GT schemes, which to the best of our knowledge are used in the context of probabilistic GT for the first time. In Section III-A, we used information-theoretic arguments to derive a lower bound on the number of tests (Thm. 1). In addition, we derived constructive upper bounds on the number of tests using practical testing schemes (Thms. 2-7) and information-theoretic arguments (Thms. 10 and 11). It is worth mentioning that if λ(n) grows slowly with n (i.e., λ³(n) = o(log n / β(n))), a conclusion of Thm. 10 is that 2λ(n)^{1+α} log n measurements suffice to find the defectives when lim_{n→∞} λ(n) = ∞. Similar simplifications can also be obtained for the case where 0 < lim_{n→∞} λ(n) < ∞ using Thm. 11. The results under the assumption that the vector of test results is error-free are summarized in Table I. In the table, β(n) represents the slowly-growing function defined in (1), and ε, α, δ, and γ are arbitrarily small positive constants. In Thms. 4 and 5, we considered the case in which there are at most v(n) errors in the vector of test results and showed that m = 2e(β(n)λ(n))² log n + 4e·v(n)β(n)λ(n) (1 + o(1)) tests are sufficient to identify the defectives using a decoder with computational complexity O(mn) if 0 < lim_{n→∞} λ(n) < ∞. Similarly, we showed that if lim_{n→∞} λ(n) = ∞, the same decoder requires m = 2e·λ(n)²(1 + ε)² log n + 4e·v(n)λ(n)(1 + ε) (1 + o(1)) tests. The test constructions and decoding algorithms used in Thms. 2-5 rely on designing test matrices that can identify the defectives with zero error probability as long as D ≤ ∆, for an appropriate choice of ∆. However, it is well known that ∆-disjunct matrices require a number of tests that grows quadratically in ∆ (up to logarithmic factors), and it is not difficult to show that ∆ must be larger than λ in order to have lim_{n→∞} P(D > ∆) = 0. As a result, by requiring P(E | D ≤ ∆) = 0, one cannot obtain upper bounds on the number of tests for Poisson PGT that match the lower bound in Thm. 1. In order to overcome this problem, we instead used the less stringent condition lim_{n→∞} P(E | D ≤ ∆) = 0 in Thms. 6 and 7, and employed the results of [32] to obtain matching upper bounds on m. One should note that there exist other test constructions and decoding algorithms that may be used in conjunction with lim_{n→∞} P(E | D ≤ ∆) = 0 to obtain matching upper bounds on m for Poisson PGT (see for example [36]-[39]); however, since an approach similar to the proofs of Thms. 6 and 7 can be used in these cases as well, we choose not to repeat these arguments and results. In the second part of our exposition (Sec. IV), we focused on the family of semi-adaptive algorithms. These algorithms proceed in sequential stages, allowing new tests to be designed based on the outcomes of previous tests in order to decrease the expected number of tests; in addition, within each stage the tests are designed and performed simultaneously, allowing parallel testing. In Sec. IV, we used Huffman source coding to find a lower bound on the expected number of tests; in addition, we showed how Li's stage-wise algorithm [33], developed for combinatorial GT, can be modified for the Poisson PGT model.
These lower and upper bounds are listed in Table II. Recent work in the area of group testing has almost exclusively focused on combinatorial GT. The results derived in this paper show that there exists a close connection between methods used for combinatorial GT and probabilistic GT.

APPENDIX A
AN INFORMATION-THEORETIC UPPER BOUND ON THE NUMBER OF NONADAPTIVE TESTS FOR POISSON PGT

Information-theoretic approaches have been used in the study of the combinatorial GT problem by several authors [3], [41]-[44]. In what follows, we apply these approaches to the Poisson PGT model in order to derive an upper bound on the minimum number of nonadaptive tests that satisfy (5). We assume that the test matrix is constructed probabilistically: the entries of the test matrix follow an i.i.d. Bernoulli(p) distribution, such that each entry of C is equal to 1 with probability p, and 0 with probability 1 − p. For this construction method, we consider a maximum likelihood (ML) decoding procedure, which given the vector of test results and the test matrix reduces to

D̂ = arg max_{D′} P(D′ | y, C) = arg max_{D′} P(y | C, D′) P(D′).

Here, P(y | C, D′) denotes the conditional distribution of observing y given the test matrix C and the set of defectives D′. Note that the second equality holds since the test matrix is constructed independently of the set of defectives. The goal is to find the number of tests required to satisfy (5). We define the error event E′ as the event that there exists a set of subjects D′ ≠ D such that P(y | C, D′) ≥ P(y | C, D). It can be easily verified that P(E) ≤ P(E′). As a result, a number of tests that guarantees lim_{n→∞} P(E′) = 0 also guarantees (5). Given D = d, 1 ≤ d ≤ n, let E_i, 1 ≤ i ≤ d, denote the event that there exists a set of subjects with cardinality d that differs from D in exactly i items and is at least as likely as D to the decoder. Given these definitions, one has

P(E′ | D = d) ≤ ∑_{i=1}^{d} P(E_i),   (21)

where the last inequality follows from the union bound. At first glance, it may seem that a bound on P(E′) may be obtained using an upper bound on P(E_i) for a fixed value of d (such as the bound presented in [43]) and subsequent averaging; however, there are two subtle yet important issues that prohibit us from using this approach. First, in (21) the value of d, and hence i, may be as large as n. Since we are interested in the asymptotic regime where n → ∞, a bound on P(E_i) should account for the growth of d and i with respect to n. Second, all known bounds on P(E_i) (see [43] and references therein) rely on a test matrix C with i.i.d. Bernoulli(1/d) entries. However, in Poisson PGT the true value of d is unknown (more precisely, D is a random variable) and cannot be used as a design parameter in a natural way. In order to overcome the aforementioned problems, we derive special functions that bound P(E_i) for different ranges of d, and in addition derive new bounds that do not rely on the value of d as a design parameter. We start by observing that in [43] it was shown that, for d = o(n) and for all ρ, 0 ≤ ρ ≤ 1, P(E_i) admits an error-exponent bound. Here, we diverge slightly from the previously used notation and let Y denote a random variable corresponding to the result of a single test and let y be a realization of Y. Let (D_1, D_2) be a partition of D into disjoint sets with cardinalities |D_1| = i and |D_2| = d − i, respectively. The vectors T_1 and T_2 are binary-valued row vectors of length i and d − i, indicating which subjects in D_1 and D_2 are present in a given test, respectively. Also, t_1 and t_2 are realizations of T_1 and T_2, respectively.
In order to prove the main results of this section, we need the following lemma.

Lemma 4. Let h(n) : ℕ → ℝ⁺ be an increasing function of n such that lim_{n→∞} h(n) = ∞. Assume that each entry of the binary test matrix is an i.i.d. Bernoulli(p) random variable, such that h(n)p = o(n). Then, for all i, d such that 1 ≤ i ≤ d ≤ h(n), and for all ρ such that 0 < ρ < 1, one has a bound on the error exponent of P(E_i). If E_{T_1}[u_ρ log u_ρ] is a non-increasing function of ρ, then the bound takes a simplified form, where we used the notation u_ρ and g_ρ introduced in the proof. In order to simplify the previous equations, we consider different realizations of u_ρ for different values of y, t_2 and t_1. In particular, we consider four cases based on the realizations of the pair (y, t_2). For each case, we find E_{T_1}[u_ρ log u_ρ] and show that this expectation is independent of ρ. In addition, when E_{T_1}[u_ρ log u_ρ] = 0, we find an expression for g_ρ. In order to find the number of tests that guarantees P(E′) = o(1), we consider separately two asymptotic regimes for λ(n): Theorem 10 presents the results for the asymptotic regime λ(n) = o(n) and lim_{n→∞} λ(n) = ∞; similarly, Theorem 11 presents the results for the regime where λ(n) = o(n), but 0 < lim_{n→∞} λ(n) < ∞. Note that the case of constant λ is covered by the latter scenario.

ACKNOWLEDGMENT

We would like to thank the anonymous reviewers for many useful comments and suggestions.
Computer-Aided Discovery of Small Molecules Targeting the RNA Splicing Activity of hnRNP A1 in Castration-Resistant Prostate Cancer

The heterogeneous nuclear ribonucleoprotein A1 (hnRNP A1) is a versatile RNA-binding protein playing a critical role in alternative pre-mRNA splicing regulation in cancer. Emerging data have implicated hnRNP A1 as a central player in a splicing regulatory circuit involving its direct transcriptional control by the c-Myc oncoprotein and the production of the constitutively active ligand-independent alternative splice variant of the androgen receptor, AR-V7, which promotes castration-resistant prostate cancer (CRPC). As there is an urgent need for effective CRPC drugs, targeting hnRNP A1 could, therefore, serve a dual purpose of preventing AR-V7 generation as well as reducing c-Myc transcriptional output. Herein, we report compound VPC-80051 as the first small molecule inhibitor of hnRNP A1 splicing activity discovered to date by using a computer-aided drug discovery approach. The inhibitor was developed to target the RNA-binding domain (RBD) of hnRNP A1. Further experimental evaluation demonstrated that VPC-80051 interacts directly with the hnRNP A1 RBD and reduces AR-V7 messenger levels in the 22Rv1 CRPC cell line. This study lays the groundwork for future structure-based development of more potent and selective small molecule inhibitors of hnRNP A1–RNA interactions aimed at altering the production of cancer-specific alternative splice isoforms.

Introduction

hnRNP A1 is a multifunctional RNA-binding protein that regulates alternative pre-mRNA splicing, transcription, nucleocytoplasmic shuttling, miRNA processing, and telomere elongation maintenance, as well as translation of cellular transcripts, both in physiological and pathological conditions [1,2]. Overexpression of hnRNP A1 in various cancer types, including prostate [3,4], lung [5], stomach [6], and breast [7] cancers, Burkitt lymphoma [8], multiple myeloma [9], leukemia [10] and neuroblastoma [11], has been associated with tumorigenesis, cancer progression and drug resistance. The molecular mechanisms by which hnRNP A1 supports malignant transformation, either directly or through interplay with well-established cancer drivers, include regulation of cell survival, alteration of the cell cycle, invasion and metastasis, altered metabolism and stress adaptation (i.e., to hypoxia, starvation, and response to DNA damage), all of which are recognized hallmarks of cancer [1,2,12]. In prostate cancer (PCa), the second leading cause of cancer-related death in men, alternative splicing (AS) plays a prominent role as it represents a mechanism of resistance to therapy [3]. Quercetin was previously shown to act in the PC3 AR-independent PCa cell line by a mechanism of action that involves binding to the C-terminal region of hnRNP A1, impairment of hnRNP A1's ability to shuttle between the nucleus and cytoplasm, resulting in its cytoplasmic retention and accumulation, driving cells toward apoptosis [38]. More recently and pertinent to this work, quercetin has been shown to downregulate hnRNP A1 and thus AR-V7 expression in CRPC cell lines, and to re-sensitize enzalutamide-resistant 22Rv1-injected mouse xenografts in vivo [39].

Results

In this study, we report compound VPC-80051, a novel hnRNP A1 small molecule inhibitor targeting the hnRNP A1 RBD, the first to be identified using a computer-aided drug discovery approach.
Binding Site Identification on hnRNP A1 RBD

The published 1.92 Å X-ray structure of the UP1 RNA-binding domain (RBD) of hnRNP A1 bound to its RNA 5′-AGU-3′ trinucleotide target sequence (PDB ID: 4YOE) [34] was utilized for identification of plausible pockets where small molecule inhibitors could bind and specifically disrupt hnRNP A1 binding to RNA and, therefore, interfere with the subsequent splicing activities. The 4YOE X-ray structure provided the first insights into the specificity of hnRNP A1-RNA recognition, which indicated that UP1 specifically interacts with the 5′-AG-3′ ribodinucleotide within a nucleobase cavity formed upon folding of the RRM1 motif and the inter-RRM linker. RRM2 made no contact with the RNA sequence. Both A (adenine) and G (guanine) purines engaged in stereospecific contacts within the pocket, thus explaining the preference for the 5′-AG-3′ sequence [34]. Their replacement by pyrimidines, which have a smaller ring size, was deemed unlikely to satisfy the favorable interactions of the purines with similar energetics. For instance, single or double substitutions for cytosine resulted in a ~20-fold reduction in UP1 RNA-binding affinity, rate-limiting for complex formation [30,34,40]. Based on the prior crystallographic evidence regarding the structural determinants of hnRNP A1-RNA recognition specificity, we selected the RRM1 and the inter-RRM linker regions of UP1 for binding site identification. We employed the Site Finder module of the Molecular Operating Environment (MOE) suite of programs [41], which predicted a binding site (Figure 1) significantly matching the X-ray-defined nucleobase pocket. The predicted binding site is shaped by residues Gln12, Lys15, Phe17, Met46, Arg55, Phe57, Phe59, Lys87 and Arg88 from the RRM1 motif, and Ala89, Val90, Ser91, Arg92, Ser95 and His101 from the inter-RRM linker. The functional relevance of these residues has been well described (Figure 2) [34]. Phe17 and His101 contribute the majority of the binding free energy, originating from favorable van der Waals and π-π stacking interactions with the rings of the adenine at the first position in the RNA 5′-AG-3′ sequence. Moreover, the Val90 and Arg88 backbone amides make specific H-bonding interactions with the adenine nitrogen atoms. Additional H-bonding and hydrophobic packing are contributed by the hydroxyl group of Ser95 and by Phe57, respectively. At the second 5′-AG-3′ RNA position, Gln12 and Lys15 primarily and selectively recruit the guanine through H-bonding via their amino groups. In addition, Val90 and Arg92 provide H-bonding capabilities via their backbone carbonyl and guanidinium groups. Cation-π and π-π interactions with Arg92 and Phe59, respectively, further enhance the interactions with the guanine [34].

Figure 1. In silico model of the UP1 domain of the hnRNP A1 splicing factor bound to the 5′-AG-3′ recognition sequence, constructed based on the 1.92 Å X-ray structure of RNA-bound UP1 (PDB ID: 4YOE). The RNA bases are shown as sticks and colored in green. The predicted binding site on hnRNP A1 at the RNA recognition interface is represented as a grey solid surface. Virtual atoms utilized to probe the protein surface are shown as brown spheres within the identified pocket. RRM: RNA recognition motif.

Figure 2. Hydrogen-bonding interactions formed between the nucleobases and the backbone and side chains of the Val90, Arg88 and Gln12 residues, as well as those formed by the sugars of the RNA backbone with the side chains of Arg92 and Ser95, are indicated with red dashed lines.
Hydrophobic interactions with Phe17, His101, Phe59 and aliphatic side chains are indicated with dashed green lines. Inhibitors targeting the pocket are expected to block UP1 binding to RNA and alter hnRNP A1 splicing activity.

Small molecule inhibitors targeting this site are expected to interfere with the most functionally relevant protein-RNA interactions within the site to alter hnRNP A1's splicing activity.

In Silico Identification of Hit Compounds Targeting the hnRNP A1 RBD Site

Subsets of drug-like molecules deposited in the ZINC15 open chemical repository [42][43][44], having in-stock availability from selected vendor catalogues and filtered by logP (octanol-water partition coefficient), charge and chemical reactivity, were virtually screened against the identified hnRNP A1-RNA pocket utilizing the Glide docking software [45,46] in standard precision (SP) mode as a primary structure-based screening technique. The compounds with the best docking scores were prioritized for subsequent in silico scoring based on calculated pKi [47] and RMSD (root mean square deviation) between poses obtained from various docking programs, including ICM [48] and Hybrid [49]. As a result, 139 chemicals were selected for purchase based on two or more of the abovementioned binding affinity indicators and satisfaction of at least three essential π-π or H-bonding interactions with the residues shaping the pocket, in particular contacts with Phe17, His101, Val90 and Phe59. These compounds were then subjected to experimental evaluation through the UBE2C reporter assay (see Section 4.5) for assessing their effect on AR-V7-driven, luciferase-detected UBE2C activity in 22Rv1 cells in androgen-deprived conditions, with quercetin, the previously reported hnRNP A1 binder, used as a positive control. A dozen compounds demonstrated more than 50% inhibition of UBE2C activity at a 25 µM concentration in this assay.

In Vitro Characterization of Hit Compound VPC-80051

To quantify the ability of hit compounds to interact directly with the hnRNP A1 RBD, the Bio-Layer Interferometry (BLI) technique was employed with the purified UP1 domain of the hnRNP A1 protein (N-terminal residues 1-196) immobilized on a streptavidin biosensor (see Section 4.6 for details). Compound VPC-80051, as well as the quercetin control, demonstrated direct binding to hnRNP A1 in a dose-dependent manner (Figure 3a). Quantitative RT-PCR (qRT-PCR) was further employed to assess the effect of hits on the mRNA levels of AR-V7 in 22Rv1 cells in androgen-deprived conditions. Treatment with VPC-80051 at 25 and
Furthermore, the difluorophenyl ring of VPC-80051 makes significant hydrophobic interactions with the aromatic ring of Phe59, as well as with the aliphatic side-chains of Arg92 and Lys15, overlapping to a good extent with the guanine at the second position in the 5 -AG-3 recognition sequence with RMSD of 2.82 Å (Figure 4a right). The Glide score of VPC-80051 was −8.2 kcal/mol and its binding affinity was dominated by hydrophobic interactions as per calculated pK i (4.5 out of 6.8). For comparison, the Glide score of the literature compound quercetin was −6.2 kcal/mol and its predicted pK i was 6. Quercetin's predicted binding pose is shown in Figure 4b. Quercetin forms two hydrogen bonds: one between a hydroxyl group attached to the chromen-4-one moiety at first position with the backbone carbonyl of Arg88 and a second between a hydroxyl of its catechol moiety at second position with the backbone carbonyl of Val90. Additionally, π-π interactions are formed between the chromen-4-one moiety of quercetin with Phe17 and His101. While the overlap between the trihydroxy-chromen-4-one group with the 5 adenine of RNA is significant with an RMSD of 2.7 Å (Figure 4b right), there is little overlap at the second position between the catechol (dihydroxyphenyl) ring and the guanine base (RMSD = 4.3 Å). The docked structures were further minimized in implicit solvent and the binding free energies were calculated with the MM/GBSA method (molecular mechanics with generalized Born and surface area continuum solvent model) [50]. The estimated binding free energy of VPC-80051 obtained from MM/GBSA simulations was −60.7 kcal/mol relative to −57.9 kcal/mol for quercetin control. In these simulations in which the binding site residues were allowed to relax (i.e., protein flexibility within 5.0 Å from the ligand) during the minimization procedure of the complexes, VPC-80051 and quercetin maintained the protein-ligand interactions predicted by rigid docking ( Figure 5). Further confidence in the binding pose of VPC-80051 was given by calculated RMSD values of 1.4 and 1.1 Å between the Glide pose and the poses obtained from ICM and Hybrid docking programs, respectively. Of note, the RMSD with the best pose generated by a "blind docking" Glide SP calculation, in which conformational sampling occurs within a grid large enough to cover the entire UP1 domain of the hnRNP A1 protein and not only the smaller grid centered on selected RRM1 and inter-RRM residues of the predicted binding site, was 1.7 Å. This provided good evidence on the accuracy of the binding site model as well as on the affinity of VPC-80051 for the site having a large number of equivalent interactions to those formed by 5 -AG-3 cognate RNA with the X-ray nucleobase pocket. The docked structures were further minimized in implicit solvent and the binding free energies were calculated with the MM/GBSA method (molecular mechanics with generalized Born and surface area continuum solvent model) [50]. The estimated binding free energy of VPC-80051 obtained from MM/GBSA simulations was −60.7 kcal/mol relative to −57.9 kcal/mol for quercetin control. In these simulations in which the binding site residues were allowed to relax (i.e., protein flexibility within 5.0 Å from the ligand) during the minimization procedure of the complexes, VPC-80051 and quercetin maintained the protein-ligand interactions predicted by rigid docking (Figure 5). 
Computationally expensive molecular dynamics (MD) simulations were subsequently carried out to gain further insights into induced-fit motions and solvent effects, and thus the dynamic behavior of the hnRNP A1/VPC-80051 complex when fully subjected to the force field in explicit solvent, as opposed to the MM/GBSA implicit solvent approach. Analysis of the trajectory obtained from MD simulations (see Section 4.4) showed that VPC-80051 was stabilized for more than 30% of the 100 ns simulation time at greater than 80% strength, mainly by hydrogen bonding between its indazole moiety and the Val90 backbone and by π-π interactions with Phe17 (Figure 6). Superimposition of conformations of the simulated complex taken at different time steps revealed that the side chain of Arg92 underwent larger fluctuations relative to other side chains in the pocket (Figure 6c), opening and making the binding site more exposed to the solvent. As such, with more translational and rotational freedom, the difluorophenyl ring of VPC-80051 is no longer engaged in hydrophobic interactions with the Arg92 side chain or π-π interactions with Phe59, and as such does not contribute substantially to the stabilization of the ligand. Future NMR spectroscopy or X-ray crystallography structural studies may unequivocally prove the binding mode of VPC-80051 to the hnRNP A1 RBD pocket.

Discussion

Here we presented VPC-80051, a small molecule prototype inhibitor of the hnRNP A1 splicing factor. To our knowledge, VPC-80051 is the first drug-like, non-promiscuous hnRNP A1 inhibitor discovered to date by computational approaches involving large-scale virtual screening targeting hnRNP A1 binding to RNA.
Unlike the literature-reported hnRNP A1 inhibitor quercetin, which is a well-known aggregator [51], contains the catechol_A(92) moiety, one of the worst offenders enlisted in PAINS (Pan-Assay Interference Compounds) [52], is annotated in more than 46 catalogs for promiscuous binding and various inhibitory activities against multiple targets [53], and as such is rejected by the FAF-Drugs4 ADME-Tox filtering tool [54] for inclusion in any drug discovery campaign, VPC-80051 contains no PAINS moieties and has no documented experimental activities against any protein target. Moreover, the quantitative estimate of drug-likeness (QED) score [55] of VPC-80051, as calculated by the FAF-QED web service [56], is 0.78, compared to 0.5 for quercetin. On a scale from 0 to 1, only compounds with a QED score above 0.65 (i.e., the median score for current orally bioavailable drugs) are considered as having a desirable drug-likeness profile [55]. Regarding the binding mode of VPC-80051 to the identified hnRNP A1 pocket, good agreement was obtained between various in silico techniques, including rigid docking with consensus scoring and flexible MD simulations with both implicit and explicit solvent. Furthermore, VPC-80051 showed an evidenced ability to bind directly to the UP1 RNA-binding domain of hnRNP A1 in vitro and to alter hnRNP A1 alternative splicing activity in castrate-resistant 22Rv1 cells by reducing expression of the AR-V7 splice variant that promotes drug resistance in CRPC. By all means exploratory, this study provides a foundation for future computational studies that, when combined with rigorous biological validation, may lead to the discovery of novel, more potent and selective hnRNP A1 inhibitors. Combination studies with current anti-AR drugs or c-Myc inhibitors may provide synergistic or additive responses. Targeting the c-Myc/hnRNP A1/AR-V7 axis may be a viable strategy for the treatment of CRPC. Targeting alternative splicing may offer future opportunities for the development of next-generation cancer-specific therapeutics.

Materials and Methods

Binding Site Identification on hnRNP A1 RBD

The published 1.92 Å X-ray structure of the UP1 RNA-binding domain of the hnRNP A1 alternative splicing factor bound to its RNA 5′-AGU-3′ trinucleotide target sequence (PDB ID: 4YOE) was subjected to the Site Finder module of the Molecular Operating Environment (MOE), a fully integrated drug discovery software platform [41]. The Site Finder algorithm calculates plausible sites by scanning the protein surface with virtual atoms of either hydrophobic or hydrophilic character, which are subsequently clustered, scored and ranked based on size and propensity for ligand binding, an index accounting for amino acid composition at the protein-ligand interaction interface. One site was identified at the surface formed by the RRM1 and the inter-RRM linker regions of the UP1 RNA-binding domain of hnRNP A1 that matched to a large extent the X-ray-described nucleobase pocket for the cognate 5′-AG-3′ RNA ligand [34].

Virtual Screening

Structure-based molecular docking was performed using the Glide program [45,46] (Maestro version 10.3.015, Schrödinger LLC, New York, NY, USA) [57] in SP mode, to screen ~4.3 million drug-like chemicals from the ZINC15 repository [42][43][44]. Prior to docking, each chemical was washed and energy-minimized under the MMFF94x force field and Born solvation, as per the ligand preparation protocol implemented in MOE [41].
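The drug-likeness arguments above (logP and reactivity filtering, PAINS flags, QED) can be approximated with open-source tooling. The sketch below uses RDKit's built-in descriptors and PAINS catalog in place of the FAF-Drugs4 and FAF-QED services cited in the text, and assumes the standard published SMILES for quercetin; the exact scores will differ somewhat from the FAF values, since the implementations are not identical:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

def druglikeness_report(smiles: str) -> dict:
    """QED, Crippen logP and PAINS matches for a single molecule."""
    mol = Chem.MolFromSmiles(smiles)
    params = FilterCatalogParams()
    params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
    catalog = FilterCatalog(params)
    pains_hits = [entry.GetDescription() for entry in catalog.GetMatches(mol)]
    return {"qed": round(QED.qed(mol), 2),
            "logP": round(Descriptors.MolLogP(mol), 2),
            "pains": pains_hits}

# Quercetin (standard SMILES); expected to trip a catechol-type PAINS filter.
print(druglikeness_report("C1=CC(=C(C=C1C2=C(C(=O)C3=C(C=C(C=C3O2)O)O)O)O)O"))
```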
In preparation for rigid docking, the hnRNP A1 X-ray protein structure (PDB ID: 4YOE) [34] was prepared following Maestro's standard protein preparation protocol. A docking grid was defined as a 30 Å box centered on the residues of the predicted hnRNP A1 RNA binding site for sampling and scoring of compounds. Molecules with Glide docking scores ≤ −6 kcal/mol were subjected to pKi calculations using the scoring.svl analysis tool for non-bonded intermolecular interactions [47]. The best scoring compounds were re-docked utilizing the ICM [48] and OpenEye Hybrid [49,58] docking programs. The root mean square deviation (RMSD) in atomic coordinates between docking results obtained from the various programs was calculated using the mol_rmsd.svl script [59]. RMSD was also calculated for poses obtained using the "blind docking" virtual screening technique. "Blind docking" was performed using Glide in SP mode on a large grid defined as a 76 Å box encompassing the entire UP1 domain of the hnRNP A1 protein. 139 compounds were purchased based on favorable Glide scores, RMSD or pKi indicators, as well as satisfaction of important interactions with the pocket residues.

MM/GBSA Simulations

To obtain estimates of the binding free energies of compounds to the hnRNP A1 site, the docking poses were subjected to MM/GBSA simulations with implicit solvent, as implemented in the Prime program of the Schrödinger software suite [60]. Each protein-ligand complex was MM-minimized utilizing the OPLS3 force field [61] and the variable-dielectric VSGB 2.0 solvent model [62]. During the minimization of the complexes, hnRNP A1 binding site residues within 5.0 Å of the ligand were allowed to undergo fluctuations to take into account the protein flexibility, while the rest of the protein structure was kept fixed.

Molecular Dynamics Simulations

Molecular dynamics (MD) simulations with explicit solvent of the hnRNP A1/VPC-80051 complex were carried out utilizing Maestro's Desmond MD package [63]. The MM/GBSA-minimized complex was utilized as the initial structure. The system was neutralized by the addition of 2 chloride counterions and solvated utilizing the TIP3P water model. The system, consisting of a total of 32,491 atoms with 9856 water molecules, was subjected to the OPLS3 force field under periodic boundary conditions. The system was allowed to relax prior to the 100-nanosecond MD production run. MD simulations were conducted under the isothermal-isobaric ensemble, with the temperature kept constant at 300 K using the Nosé-Hoover thermostat [64] and the pressure kept constant at 1 atm using the Martyna-Tobias-Klein barostat [65], with a relaxation time of 2 ps and isotropic coupling. The integrator used to evolve the system was RESPA (reference system propagator algorithm) [66], with a timestep scheduling of 2 fs for bonded and non-bonded van der Waals and short-range electrostatic interactions, and 6 fs for non-bonded long-range electrostatic interactions. The cutoff for the Coulombic short-range interactions was set to 9 Å. A harmonic restraint with a 1 kcal/mol force constant was imposed on the alpha carbons of the protein backbone. MD simulations were executed on the GPU-enabled Helios cluster of the Compute Canada high-performance computing platform.
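Persistence figures such as an interaction holding for more than 30% of the 100 ns simulation time are occupancy statistics over saved trajectory frames. A minimal sketch of that bookkeeping follows; the per-frame contact flags here are simulated stand-ins, since deriving them from the actual trajectory depends on the analysis package and the distance and angle cutoffs chosen:

```python
import numpy as np

def contact_occupancy(flags: np.ndarray) -> float:
    """Fraction of trajectory frames in which a given interaction is present.
    flags: boolean array with one entry per saved frame (True = contact satisfied,
    e.g. an H-bond within the chosen distance and angle cutoffs)."""
    return float(np.mean(flags))

# Illustrative: 1000 saved frames of a 100 ns run, with fake per-frame flags.
rng = np.random.default_rng(7)
hbond_val90 = rng.random(1000) < 0.35  # stand-in for a detected contact series
print(f"Val90 H-bond occupancy: {100 * contact_occupancy(hbond_val90):.0f}% of frames")
```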
AR-V7 Level Measurement

As the AR-V7 splice variant has been shown to specifically regulate the expression levels of UBE2C in androgen-deprived 22Rv1 cells [24], a UBE2C reporter screening assay was developed in house to monitor the levels of the AR-V7 isoform in 22Rv1 cells by using a plasmid containing the UBE2C promoter linked to a luciferase reporter [67]. The UBE2C reporter plasmid was purchased from GeneCopoeia (product ID #HPRM16429). The Biolux Gaussia luciferase assay kit was purchased from New England Biolabs (#E3300L). 22Rv1 cells were plated at 10,000 cells per well in 96-well plates in RPMI media supplemented with 5% charcoal-stripped serum (CSS) and treated for 1 day with 1 µM, 10 µM and 25 µM of compounds.

Bio-Layer Interferometry Assay

The direct interaction between the purified UP1 RNA-binding domain of the hnRNP A1 protein (N-terminal residues 1-196) and compounds was quantified with the bio-layer interferometry (BLI) technique using an OctetRED instrument (ForteBio). hnRNP A1 was biotinylated in situ using an AviTag™ sequence (GLNDIFEAQKIEWHE) (Avidity, LLC, Aurora, CO, USA) incorporated at the N-terminus of hnRNP A1. Escherichia coli BL21 containing both the biotin ligase and hnRNP A1 vectors were induced with 0.5 mM isopropyl-β-d-thiogalactopyranoside (IPTG) and the protein was expressed for 4 h at 25 °C in the presence of 125 µM biotin. The bacteria were then lysed by sonication, and the resulting lysate was purified by immobilized metal ion affinity chromatography (IMAC) with nickel-nitrilotriacetic acid (Ni-NTA) resin, followed by size exclusion chromatography (Superdex S75, GE Healthcare). The purified hnRNP A1 at 0.1 mg/mL was immobilized on streptavidin (SA) sensors overnight at 4 °C. The sensors were then blocked, washed and moved into wells containing various concentrations of the tested compounds in reaction buffer (20 mM Tris pH 8.5; 150 mM NaCl; 0.5 mM TCEP; 5% DMSO and 10% glycerol).

qRT-PCR

Quantitative RT-PCR was employed to quantify the mRNA levels of the AR-V7 splice variant in 22Rv1 cells upon treatment with compounds at 10 and 25 µM concentrations. 22Rv1 cells were starved in RPMI media supplemented with CSS for 48 h before 24 h treatment with DMSO (0.1%) or compounds. RNA was extracted with TRIzol (Invitrogen, Carlsbad, CA, USA), followed by cDNA synthesis (SuperScript II, Invitrogen, Carlsbad, CA, USA). RT-PCR (125 ng cDNA, 5 µM primers, SYBR Green master mix) was performed on a ViiA 7 thermal cycler. Actin RNA was used as the normalization control.

Western Blotting

Western blotting was performed using 22Rv1 cells starved in RPMI with 5% CSS media and treated with compounds for 24 h. The blot was incubated with rabbit anti-actin antibody (1:500 dilution) and mouse anti-AR-NTD 441 monoclonal antibody (1:500 dilution). Visualization of the immune complexes was done by an enhanced chemiluminescence system (Millipore, Burlington, MA, USA) followed by exposure to X-ray films.

Cell Viability Assay

The PrestoBlue assay was used to assess the effect on viability of 22Rv1 cells treated with compounds at increasing concentrations (up to 25 µM) for 3 days. High-sensitivity fluorescence was used as the detection method according to the manufacturer's protocol. The PrestoBlue cell viability reagent was purchased from Invitrogen, #A-13162 (Invitrogen Molecular Probes, Eugene, OR, USA).
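The ∆CT-based AR-V7 quantification reported in the Results section is a standard relative-expression calculation. The sketch below shows the Livak 2^(−∆∆CT) form with invented Ct values, assuming approximately 100% primer efficiency; the paper reports its own numbers as percentage ratios of ∆CT versus the DMSO control:

```python
def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Livak 2^-ddCT quantification: target (e.g. AR-V7) normalized to a reference
    gene (e.g. actin), treated versus vehicle control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# Invented example: treatment raises the AR-V7 dCT by 0.6 cycles relative to DMSO,
# i.e. expression drops to about 66% of control.
print(round(relative_expression(24.6, 18.0, 24.0, 18.0), 2))  # -> 0.66
```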
Sticking and sliding of lipid bilayers on deformable substrates †

We examine here the properties of lipid bilayers coupled to deformable substrates. We show that by changing the extent of the substrate hydrophilicity, we can control the membrane–substrate coupling and the response of the bilayer to strain deformation. Our results demonstrate that lipid bilayers coupled to flexible substrates can easily accommodate large strains, form stable protrusions and reversibly open pores. These properties, which differ significantly from those of free-standing membranes, can extend the applications of the current lipid technologies. Moreover, such systems better capture the mechanical architecture of the cell interface and can provide insights into the capacity of cells to reshape and respond to mechanical perturbations.

Introduction

Supported lipid bilayers (SLBs) have become a fundamental asset in the research on lipid and cell membranes. Among others, they have been used to assess the membrane structure and phase behavior, 1 to reconstitute proteins, glycans and other cell membrane constituents, 2 or to investigate cell membrane adhesion. 3 In parallel to their wide biophysical applications, SLBs have become a promising platform for encapsulation, bio-sensing, separation and lipid nanotechnologies, 4-9 albeit with modest technological success mainly due to their fragility. In this paper we discuss the emerging mechanical properties of lipid membranes supported on flexible substrates, and the novel biophysical and technological implications that these systems may provide. In current SLB systems, the role of the membrane support is mostly passive: it increases the mechanical stability of the membrane and facilitates the quantitative analysis of the membrane using a range of surface-sensitive techniques. 10 Its adverse effects on membrane fluidity and the structure of the reconstituted proteins are overcome by lifting the membrane from the solid support by means of polymer cushions, tethers or self-assembled monolayers. 2,10,11 In the last couple of years, more and more studies have demonstrated that the support can also be used to actively manipulate the organization of the lipid membrane. In particular, chemically heterogeneous or geometrically patterned substrates were shown to control the lateral organization of the lipid membrane. [12][13][14] The requirement of a solid support has remained unchallenged, probably due to the inability of free-standing lipid bilayers to sustain even modest stretch and compression. 15 Intriguingly, nature has chosen to support its lipid structures on elastic and actively reshaping polymeric networks, such as the actin cortex, the extracellular matrix and the basal lamina. Such architecture ensures that the cellular interface is malleable, responsive and yet robust. Moreover, as far as our current knowledge goes, mechanical stress imposed and transmitted through the cell membrane is a key regulator of cell physiology and differentiation, tissue morphogenesis, and embryogenesis. 16 Creating artificial systems that capture the elasticity and deformability of the cell interface would provide invaluable insights into the mechanisms of mechano-transduction in cells, and the principles conferring biological membranes with the ability to dynamically remodel and to sustain mechanical stresses. We further expect that such systems will open new horizons for the current lipid technologies, including flexible biosensors and lipogel capsules that can reversibly change their shape, adhesivity and permeability in response to chemical and mechanical stimuli. The experiments reported here provide the background for such developments. Previously we have demonstrated that a simple lipid bilayer coupled to an elastic polydimethylsiloxane (PDMS) substrate can follow the changes in the substrate area without losing its integrity. 17,18 Upon substrate expansion, the bilayer absorbs lipid protrusions in order to increase its in-plane area; upon compression, it expels them back. The shape of the membrane protrusions can be controlled by the substrate strain and the osmotic pressure difference across the membrane. 18 Our in vitro findings have been recently reproduced in cells coupled to elastic substrates, 19 thus indicating that biomembranes also use purely physical mechanisms to accommodate fast changes
We further expect that such systems will open new horizons for the current lipid technologies, including flexible biosensors and lipogel capsules that can reversibly change their shape, adhesivity and permeability in response to chemical and mechanical stimuli. The experiments reported here provide the background for such developments.

Previously we have demonstrated that a simple lipid bilayer coupled to an elastic polydimethylsiloxane (PDMS) substrate can follow the changes in the substrate area without losing its integrity. 17,18 Upon substrate expansion, the bilayer absorbs lipid protrusions in order to increase its in-plane area; upon compression, it expels them back. The shape of the membrane protrusions can be controlled by the substrate strain and the osmotic pressure difference across the membrane. 18 Our in vitro findings have been recently reproduced in cells coupled to elastic substrates, 19 thus indicating that biomembranes also use purely physical mechanisms to accommodate fast changes in their surface area. In addition, the potential of using stimuli-responsive gels coated with lipid shells for encapsulation and material release has shown promising results. 20,21 In this paper we present further insights into the rich functionality of deformable supported lipid bilayers. We show that a simple modification of the surface properties of the elastic substrate by plasma oxidation can induce two very different mechanisms of remodeling in the lipid bilayer, which help buffer the applied stresses and preserve its integrity.

Device

For the membrane strain experiments we use the same device as described previously by us. 17 A thin circular sheet of cured PDMS, 1-1.5 mm in diameter, is suspended above the outlet of a microfluidic channel. The channel inlet is connected to a machine-driven syringe pump (Harvard PhD apparatus). Application of a positive pressure to the syringe causes the thin sheet of PDMS to expand from a flat geometry to a hemispherical cap, the center of which is subject to a biaxial area expansion. Lipids coupled to this substrate are thus required to respond to this change in area. A typical experiment involves the inflation of the device up to a total substrate area change of between 12 and 20% and subsequent deflation, with strain rates between 0.001 and 0.8% s⁻¹. The nominal strain rate is 0.05% s⁻¹. The PDMS surface onto which the membrane is deposited is exposed to a low-pressure air plasma (VacuLAB Plasma Treater, Tantec) for a duration of 0-30 seconds. The substrate hydrophilicity is assessed by measuring the contact angle of a 10 μl aqueous droplet deposited on the surface immediately after treatment.
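The relation between the inflation geometry and the imposed area strain can be sketched in a few lines. Assuming an ideal spherical cap of apex height h over a base of radius a (an idealization of the inflated sheet; the dimensions below are illustrative, not measured values from the device), the areal strain is simply (h/a)²:

```python
def cap_area_strain(a, h):
    """Areal strain when a flat disk of radius a inflates into a
    spherical cap of apex height h over the same base:
    cap area = pi*(a**2 + h**2), flat area = pi*a**2,
    so strain = (h/a)**2."""
    return (h / a) ** 2

# Illustrative numbers: a 0.75 mm radius sheet inflated to a 0.3 mm apex
print(f"{cap_area_strain(0.75, 0.30):.1%}")  # ~16%, within the 12-20% range used
```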
Lipid patches

Immediately after exposure to plasma, the device is wetted with TRIS buffer (13 mM Trizma base, 150 mM NaCl, 2 mM CaCl2). A small amount of a solution of giant unilamellar vesicles (GUVs) is added to the incubation chamber and the device is left, covered, for 10 minutes, during which time GUVs sediment to the bottom of the chamber and eventually rupture on the substrate surface, forming a fluid and continuous lipid bilayer patch. For the experiments, the chamber is gently washed with TRIS buffer or with water to remove unfused vesicles. GUVs, composed of DOPC and Rh-DPPE in a 99.5/0.5 molar ratio, are formed by standard electro-formation techniques described elsewhere 17 and are kept for a maximum of 2 days prior to experiments.

Imaging

The imaging of the membrane response to substrate deformation is done using an inverted optical microscope Nikon Eclipse Ti-E and recorded using an ANDOR Neo 5.5 sCMOS camera (Oxford Instruments). The integrated perfect focusing system (PFS) of the microscope allows us to automatically follow the PDMS surface, which changes its focal plane during the strain deformation (inflation and deflation). FRAP experiments are carried out using an inverted Nikon confocal microscope.

Image analysis

Image analysis is performed using ImageJ. During stress-strain experiments, the bright-field is sampled at a frequency one third of the frame rate of the fluorescence images. From the sequence of bright-field images one can assess the area change of the substrate by tracking the displacement of small defects (air bubbles) in the PDMS, which become visible under bright-field illumination. The changes in bilayer area in response to this substrate stress are determined by first background-subtracting the fluorescence images and then applying an appropriate threshold to generate a binary stack. Particle analysis can then be used to track the area of the patch over time by filtering results according to size. If the bilayer opens pores during the expansion, a secondary binary image stack is generated to measure the total pore area in each frame. This is subsequently subtracted from the total membrane area. Data acquired from ImageJ are transferred to MATLAB for subsequent analysis, fits and plotting.

Contact angle measurements

To measure the static and dynamic contact angles of 10 μl aqueous droplets on plasma-oxidised PDMS substrates as a function of the plasma exposure time we use a previously described experimental setup 22 and an ImageJ plug-in.
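The area-tracking pipeline lends itself to a compact script. The sketch below reimplements the thresholding and particle-analysis steps in Python with scikit-image rather than ImageJ; the function name, the Otsu threshold choice and the size filter are illustrative assumptions, not the authors' actual macro:

```python
import numpy as np
from skimage import filters, measure

def patch_area(frame, background, min_size=500):
    """Background-subtract one fluorescence frame, binarize it with an
    Otsu threshold, and return the pixel area of the largest connected
    region -- a stand-in for ImageJ's size-filtered particle analysis."""
    img = np.asarray(frame, dtype=float) - background
    binary = img > filters.threshold_otsu(img)
    labels = measure.label(binary)
    areas = [r.area for r in measure.regionprops(labels) if r.area >= min_size]
    return max(areas, default=0)

# Pore area could be measured analogously on the inverted mask inside the
# patch region and subtracted, as described in the text.
```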
Substrate surface properties determine the mechanisms of stress relaxation in lipid bilayers

The deformable PDMS substrates are exposed to plasma oxidation for various time durations. As shown previously, plasma oxidation hydroxylates the surface by increasing the number of silanol groups (-Si-OH), and increases its hydrophilicity. 23 This is confirmed by the larger contact angle that aqueous droplets form on shortly oxidized PDMS substrates. For plasma exposures longer than 4-5 seconds we observe complete wetting of the PDMS substrate by the droplet (Fig. 1a). To characterise the effects of plasma exposure on the PDMS surface properties on the microscale we have measured the dynamic contact angles of advancing and receding droplets on the PDMS substrate upon increasing and decreasing their volumes, respectively (Fig. S1, ESI†). The relatively large contact angle hysteresis, which decreases from 50° to 35° as the plasma exposure time increases, indicates that there are chemical and/or mechanical surface heterogeneities 24 that are more pronounced for short exposure times. In our experiments, we use two types of substrates to support the membrane: (1) partly hydrophilic PDMS (3 seconds of plasma exposure), on which the droplet contact angle is between 35° and 60°, and (2) hydrophilic PDMS (30 seconds of plasma oxidation), which appears completely wetted by an aqueous droplet. The membrane patches are formed by GUV fusion on freshly oxidized substrates and left to equilibrate for 20-30 min before the strain application (Fig. 1b). All membrane patches appear unilamellar, as indicated by our AFM measurements (Fig. S2, ESI†), as well as by their homogeneous fluorescence appearance, except for the brighter lipid protrusions on top of the membranes.

Note that PDMS substrates that have been exposed for less than 2 seconds to plasma do not support the formation of lipid bilayer patches. In line with previous observations, 23 we observe the formation of lipid monolayers with half of the fluorescence intensity of the bilayer membrane (data not shown). For the membrane strain experiments we usually apply a cycle of substrate biaxial expansion, followed by compression, and image the response of the fluorescent bilayer patch by optical microscopy. By using discontinuous patches, where we can image both the planar part of the bilayer and the patch perimeter, we can quantify changes in the shape and the surface area of the membrane as a function of the substrate strain.

Our results show that, depending on the substrate hydrophilicity, lipid bilayers exhibit different mechanisms to sustain the substrate deformation without rupturing (Fig. 1b). On partly hydrophilic substrates, the membrane is pinned to the substrate and follows closely the changes in the substrate area ("sticky" membranes). Consequently, the substrate remains completely covered by the membrane throughout the whole strain cycle (Fig. 1b-i). To sustain the in-plane area changes uncompromised, the membrane releases or acquires extra lipids through out-of-plane lipid protrusions, similar to what we have previously shown. 17 On hydrophilic surfaces, the behaviour of the lipid membrane differs substantially. When altering the area of the substrate, the membrane decouples from it and avoids the imposed dilation by sliding over the deforming substrate (hence "sliding" membranes). As a result the membrane surface coverage decreases upon substrate expansion, and increases back upon subsequent compression (Fig. 1b-ii). We will next discuss the mechanisms of the sticky and sliding membrane behaviour separately, and under what conditions they cannot be sustained anymore and the membrane forms pores upon stretching.

Sticky membranes

PDMS substrates exposed to plasma for 3 seconds cannot induce fusion of small unilamellar vesicles, in accordance with previous studies. 23 However, larger GUVs readily fuse due to their large size and form continuous bilayer patches, as confirmed by FRAP experiments (Fig. S3, ESI†). The patches usually exhibit numerous multilamellar spherical protrusions on top, which get absorbed into the planar membrane as it matches the substrate expansion (Video S1, ESI†). When the membrane is compressed back to its initial dimensions, the excess lipids are expelled as tubular protrusions, which remain stable for at least 30 minutes after the end of the compression (Fig. S4 and Video S2, ESI†). The change of the shape of the protrusions from spherical to tubular is associated with the loss of volume during the initial fusion process and presumably with the reorganization from multilamellar structures into unilamellar tubules. 18 Upon large substrate expansion, usually ε_sub of about 10%, the available reservoir of lipid protrusions becomes insufficient to sustain the membrane area increase and the membrane patch ruptures (Fig. S5, ESI†). Note that in our previous experiments, continuous supported bilayers were easily expanded by up to 30-40% before rupture due to their large lipid reservoir. 17,18 Interestingly, the ruptured patches are also able to form tubes upon compression, suggesting that even in this state lipids are able to flow (Fig. S5, ESI†).
Indeed, we observe fluorescence recovery after photobleaching in all sticky membrane patches, albeit slower in the ruptured patches, where the lipids need to travel along a network of pores (Fig. S3, ESI†).

Sliding membranes

Our experiments show that on hydrophilic PDMS substrates (oxidized for 30 seconds) the membrane patch does not follow the deformation of the substrate but instead slides over it (Videos S3 and S4, ESI†). In an ideal sliding case, i.e. if the membrane were completely decoupled from the substrate, its surface area would remain constant upon substrate deformation (Fig. 3, inset). Our measurements however show that the membrane exhibits a complex area relaxation behaviour. Up to approximately 2% strain, the patch expands simultaneously with the substrate (Fig. 3). When a critical applied strain is reached, the membrane starts sliding, by retracting lipids from the periphery towards the center of the patch. During these lipid flows, the patch leaves disconnected lipid islands behind. Because we define strain through the area of the continuously connected patch, the detachment of these islands accounts for the small step-wise area decrease in the sliding patch (Fig. 3ii and iii). Similarly, upon substrate compression the membrane initially follows the substrate before it sets into outward sliding. This stick-slip behaviour upon extension and compression suggests the existence of a yield interfacial traction between the bilayer and the substrate, below which the membrane is pinned and above which it slides. As lipids flow against the shrinking substrate they follow the same path along which they retracted. The patch gains lipid area by merging with the previously disconnected lipid islands (Fig. 3iv). During the strain cycle the patch significantly changes its shape upon expansion (Fig. 3i and iii) and restores it upon compression (Fig. 3iv).

Pore formation

The sliding mechanism cannot indefinitely accommodate substrate deformation, and beyond a certain strain (about 10% in the experiment reported in Fig. 4) most of our patches rapidly form pores. While the pores grow as the substrate is expanded, they also change their shape, implying that there is a lipid flow away from the pore to the continuous part of the bilayer (Video S5, ESI†). At the same time, the sliding of the patch perimeter ceases. Pores in supported bilayers are stable as long as the substrate underneath is kept at constant strain (Fig. 4), suggesting that the line tension acting on the periphery of the pores is not sufficient to overcome the yield traction required for sliding. Upon substrate compression the pores reseal first, before the outward sliding of the contour begins. Interestingly, pore opening and resealing during extension-compression cycles exhibit some amount of hysteresis (Fig. S6, ESI†), consistent with the notion of a yield interfacial traction required for sliding.

(Fig. 4 caption: the scale bar is 20 μm; the inset plot of normalized pore area versus time demonstrates the stability of the pores when the substrate is held at its maximum expansion; the strain rate is 0.065% s⁻¹.)

The opening of pores in these membranes resting on a hydrophilic substrate suggests that the sliding mechanism cannot fully relax the dilational stress imposed on the membrane. One possible explanation would be the existence of frictional tractions at the membrane-substrate interface dynamically opposing sliding, which would build up tension in the bilayer.
To test this hypothesis, we subject patches to a rapid expansion or compression and image the area of the patch after the end of the strain pulse. Fig. 5a shows that the membrane area continues to increase for some time after the end of the fast compression. The area change can be fitted to an exponential with a time constant of about τ = 40 ± 3 seconds. Similar values are obtained from the rapid expansion experiments on the same patch. Experiments on different samples show relaxation times varying between 15 and 50 seconds (Table S1, ESI†). The exponential evolution towards equilibrium suggests that the nature of the interaction with the substrate dynamically opposing sliding is a viscous friction, by which the force per unit area (traction) opposing sliding is proportional to the sliding velocity. To rationalize these measurements, we assume that the dominant effect driving area changes is bilayer compressibility, characterized by the compressibility modulus K ≈ 0.1 N m⁻¹, 25 and that the dominant dissipative effect is given by the viscous friction coefficient β_s. From elementary dimensional analysis considerations, the relaxation time is given by τ ≈ β_s A/K, where A is the area of the patch. In our example, A ≈ 12,850 μm², resulting in β_s ≈ 3 × 10⁸ N s m⁻³, which agrees with the values reported in the literature for membranes on hydrophilic substrates probed using hydrodynamic shear forces. 26,27

If membrane sliding is opposed by an interfacial viscous friction, then it should be possible to prevent the opening of pores by reducing the rate of substrate expansion. In agreement with this prediction, large patches (where the relaxation time should be longer) stretched at a strain rate of 0.11% s⁻¹ show pores after 10% expansion, whereas the same samples expanded to the same strain amplitude at 0.01% s⁻¹ remain continuous (Fig. 5b and Fig. S7, ESI†).
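As a quick check of this dimensional estimate, the friction coefficient follows directly from the fitted relaxation time and the quoted modulus and patch area; a minimal sketch using only the values given in the text:

```python
# Order-of-magnitude check of tau ~ beta_s * A / K using the values in the text
tau = 40.0            # s, fitted relaxation time
K = 0.1               # N/m, bilayer compressibility modulus
A = 12_850e-12        # m^2 (12,850 um^2 patch area)

beta_s = tau * K / A
print(f"beta_s ~ {beta_s:.1e} N s m^-3")  # ~3.1e8, matching the reported ~3e8
```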
Discussion

Our results demonstrate that the response of lipid membranes to substrate strain deformation depends strongly on the hydrophilicity of the substrate. On partly hydrophilic surfaces the membrane-substrate coupling is so strong that the bilayer appears to be glued to the substrate underneath and is forced to change its area simultaneously with it. A deformation of the substrate area results in changes of the membrane lipid density, which are compensated by the absorption and expulsion of out-of-plane lipid protrusions (Fig. 2). These findings are very similar to our previous results on continuous supported bilayers upon strain deformation, which we have justified by the lateral confinement of the lipids and the interstitial liquid (between the substrate and the bilayer). 17,18 The present study demonstrates that in the absence of such lateral constraints the strong coupling of the membrane to the substrate deformation is sufficient to trigger out-of-plane lipid remodeling.

The response of the sticky membrane to substrate area changes portrays a seeming contradiction. On the one hand, the formation of lipid tubes upon compression implies that lipids can move freely over the substrate and flow into the extending out-of-plane protrusions. On the other hand, the long-range collective motion of lipids relative to the substrate appears completely inhibited. This is experimentally evident from (1) the absence of changes in the patch contour during the strain cycle (Fig. 2); (2) the longevity of the projected lipid tubes (Fig. S4, ESI†); and (3) the fact that ruptured patches prefer to form tubes upon subsequent compression rather than to close their pores (Fig. S5, ESI†). These observations can be reconciled by our contact angle measurements, which suggest that PDMS substrates following short plasma exposures are chemically and/or mechanically heterogeneous (Fig. S1, ESI†). Such heterogeneities will likely influence the organization of the lipids and their interaction with the substrate, and may function as pinning sites for the membrane. Discretely pinned membranes would allow the molecular motion of lipids and maybe the collective flow on small scales (necessary for the formation of protrusions) but not a coherent large-scale lipid motion past the substrate.

Lipid membranes supported on hydrophilic substrates exhibit a very different behaviour. The lipids slide over large scales to allow the membrane to preserve its area upon area changes in the underlying substrate (Fig. 3). Furthermore, we have never observed the absorption of lipid protrusions upon stretch, even though they are readily available in these patches as well. Such behaviour suggests that the coupling of the membrane to hydrophilic substrates is weaker than the cohesive interaction between the lipids, and sliding is preferable to membrane deformation. Our relaxation experiments indicate that the membrane-substrate coupling exhibits a complex rheology. On hydrophilic surfaces, bilayers need to overcome a critical yield interfacial traction to begin sliding, and then sliding is dynamically opposed by a viscous friction. Previous studies suggest that this behaviour could be related to the lubrication properties of the water hydration layer between the membrane and the substrate. 28,29 Substrates treated longer with plasma exhibit a higher density of uniformly distributed -Si-OH groups that favor the formation of a stable hydration layer, where not all water molecules are ordered and can be sheared upon stress application. 28,30,31 In contrast, on shortly oxidized substrates, the low and heterogeneous density of hydroxyl groups leads to a much thinner and highly structured water layer, 30 which strongly hinders the relative motion of lipids past the substrate.

Finally, we should mention that since the friction force scales with the surface area, the observed membrane behaviour on deformable substrates is very likely to depend on the size of the membrane patch. Thus, both the relaxation time of the membrane and the critical strain at which sliding occurs are expected to increase for larger patches. This may explain why in our previous experiments with continuous supported bilayers we were able to observe only the stick behaviour. 17,18 However, our current experiments with discrete membrane patches failed to verify the size dependence (Tables S1 and S2, ESI†), likely due to the small variability in the patch sizes and our inability to precisely control the PDMS surface properties using our plasma device.

Conclusions

In summary, we have shown in this paper that supported lipid membranes can adapt to changes in the substrate area by recruiting out-of-plane lipid reservoirs or by sliding. The nature of the response is determined by the extent of hydrophilicity of the supporting surface. The possibility of (1) opening and closing membrane pores on demand, (2) making planar membranes expel diverse protrusions, and (3) controlling the area of lipid coverage of the substrate can lay the foundations of exciting new technological developments.
In the future, we will aim to develop more precise methods to control the substrate interfacial properties, and to translate our 2D findings into the design of mechano-responsive capsules for controlled delivery. Moreover, in order to obtain further insights into the links between the mechanical architecture and the functionality of the cell interface, we will develop systems that use biologically relevant substrates and include different modes of membrane-substrate binding, including molecular linkers.
Antimicrobial Resistance and Molecular Characteristics of Methicillin-resistant Staphylococcus aureus Isolates from Children Patients in Iran

Introduction

Methicillin-resistant Staphylococcus aureus (MRSA) causes high rates of mortality and a substantial burden to health systems worldwide. Here, we investigated the antimicrobial susceptibility and molecular characteristics of MRSA isolated from children referred to Children's Medical Center in Tehran.

Materials and methods

A total of 98 MRSA isolates were collected from children. Antimicrobial resistance patterns were determined using the disk diffusion and E-test methods. The presence of biofilm-encoding genes and the pvl gene was determined by PCR. We used the microtiter plate method to assess the ability of biofilm formation. The MRSA isolates were further analyzed using PFGE and SCCmec typing.

Results

Antibiotic susceptibility testing showed that the highest and the lowest antibiotic resistance percentages were related to erythromycin (62%) and minocycline (10%), respectively. Overall, 63% of MRSA isolates were biofilm producers. Resistance to two antibiotics, erythromycin (72% vs 28%, P=0.01) and clindamycin (71% vs 29%, P=0.04), was higher among biofilm producers than non-biofilm producers. All strains had biofilm-forming genes and the prevalence of the pvl gene was 41%. Most MRSA isolates belonged to SCCmec IVa (75%) and SCCmec III (18%). In the PFGE analysis, 5 common types and 2 single types were identified; common type 1, with 37 isolates, was the dominant clone.

Conclusion

We thus report preliminary data on the prevalence and distribution of MRSA genotypes in Tehran Children's Hospital. These findings characterize the MRSA colonization dynamics in child patients in Iran and may aid the design of strategies to prevent MRSA infection and dissemination.

Introduction

Methicillin-resistant Staphylococcus aureus (MRSA) is a common pathogen causing various forms of infectious disease in humans. 1 Children colonized with MRSA are potential reservoirs for the spread of MRSA in the community. 2 Furthermore, immunologically immature infants and newborns, especially those born prematurely or requiring specialized care, are most susceptible to MRSA infections. 3 MRSA biofilm formation is regulated by the expression of polysaccharide intercellular adhesin (PIA), which mediates cell-to-cell adhesion and is encoded by the icaADBC operon. 4 Moreover, surface-associated proteinaceous adhesins can contribute to the adherence, colonization and biofilm formation of MRSA. This pathogen can express a variety of microbial surface components recognizing adhesive matrix molecules (MSCRAMMs), such as fibronectin-binding proteins A and B (FnbA, FnbB), clumping factors A and B (ClfA, ClfB), collagen-binding protein (Cna) and enolase protein (eno). Biofilm formation interferes with bacterial recognition and killing mechanisms of the innate immune system. 5,6

A number of methods have been developed for the detection of biofilm formation ability. Currently, several different methods are used, such as the tube test, microtiter plate test, radiolabeling, microscopy and the Congo red agar plate test (CRA). 7,8 However, the microtiter plate method (Mtp) is a quantitative and reliable method to detect biofilm-forming bacteria. Compared to the tube and CRA methods, it can be recommended as a general screening method for the detection of biofilm-producing bacteria in laboratories.
9,10 Molecular typing methods have been applied to help researchers map the spread and evolution of MRSA clones, including pulsed-field gel electrophoresis (PFGE) and staphylococcal cassette chromosome mec typing (SCCmec typing). 11,12 PFGE is still considered a standard reference molecular technique for analyzing the dissemination of hospital- and community-acquired MRSA and has proved to be one of the most discriminatory methods available for typing MRSA strains. 13 It has been an excellent laboratory tool for the rapid identification of new clones. 14 Staphylococcal cassette chromosome mec (SCCmec) typing, together with overall genotyping, has already provided strong evidence for the independent origins of healthcare-associated MRSA (HA-MRSA) and community-acquired MRSA (CA-MRSA). 15 To date, eight different types of SCCmec (I-VIII) have been defined on the basis of the combination of ccr and mec complexes, but only types I-V are globally distributed, while the others appear to exist as local strains in the country of origin. 16-18

PVL is a two-component S. aureus pore-forming protein encoded by the lukF-PV and lukS-PV genes. 19 The PVL toxin is responsible for the increased virulence of CA-MRSA, since the gene is responsible for many of the severe clinical syndromes of MRSA such as severe necrotizing pneumonia. 20,21 However, epidemiological analysis of clinical MRSA isolates from children has rarely been performed. The aim of this study was to investigate the antimicrobial resistance pattern, biofilm formation and molecular characteristics of MRSA strains in children.

Bacterial Strains

In this cross-sectional study, 98 suspected staphylococcal infection samples were routinely collected from patients referred to the pediatric medical center, and specimens infected with methicillin-resistant S. aureus were included in our study for a specified period (from September 2016 to October 2017). S. aureus isolates were confirmed using conventional microbiological methods (Gram stain, catalase, coagulase and DNase tests, and mannitol fermentation on mannitol salt agar (Merck, Germany)). To definitively identify positive S. aureus isolates, they were subjected to polymerase chain reaction (PCR) for the nucA gene. MRSA strains were identified phenotypically using the cefoxitin disk-diffusion method (30 μg; MAST, UK). This method was performed according to the Clinical and Laboratory Standards Institute (CLSI) guidelines. 22 Resistance to methicillin in S. aureus isolates was confirmed by the amplification of the mec gene by PCR.

Antimicrobial Susceptibility Testing of MRSA Isolates

The antibiotic susceptibility patterns of MRSA isolates were determined by the Kirby-Bauer disk-diffusion method, and the results were interpreted according to CLSI guidelines. 22 The antimicrobial agents (Rosco, Denmark) tested in this study included clindamycin (2 μg), linezolid (30 μg), penicillin (10 μg), gentamicin (10 μg), trimethoprim-sulfamethoxazole (25 μg), minocycline (30 μg) and erythromycin (15 μg). S. aureus ATCC 25923 was used as a standard strain. The minimum inhibitory concentration (MIC) for vancomycin was determined with E-test strips (Liofilchem, Italy) according to the manufacturer's instructions. The standard reference strain S. aureus ATCC 25923 was used as a quality control strain in every test run.
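The cefoxitin disk-diffusion screen amounts to a single zone-diameter comparison. A minimal sketch is shown below; the breakpoints (≥22 mm susceptible, ≤21 mm resistant) follow commonly cited CLSI criteria for S. aureus, but the exact values should be verified against the current CLSI M100 edition, and the function name is ours:

```python
def cefoxitin_phenotype(zone_mm):
    """Screen S. aureus for methicillin resistance from the cefoxitin
    (30 ug) disk-diffusion inhibition zone: <=21 mm is reported as
    resistant (MRSA), >=22 mm as susceptible (assumed CLSI breakpoints)."""
    return "MRSA (cefoxitin-resistant)" if zone_mm <= 21 else "MSSA (susceptible)"

print(cefoxitin_phenotype(18))  # MRSA (cefoxitin-resistant)
```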
Detection of Biofilm Formation by Microtiter Plate Assay (Mtp)

Biofilm production was determined quantitatively using the microtiter plate method as described previously. 23 Briefly, bacterial isolates were grown in Brain Heart Infusion (BHI) broth with 1% glucose (Merck, Germany) and incubated overnight at 37 °C. 24 Cultures were diluted 1:20 in fresh BHI-0.1% glucose. Then, 200 μL of the diluted suspension was added to the wells of a flat-bottomed polystyrene microtiter plate and incubated for 48 hrs at 37 °C. The negative control wells contained 200 μL of BHI-0.1% glucose. Wells were gently washed 3 times with phosphate-buffered saline (PBS; pH 7.2), fixed with sodium acetate (2%) for 10 mins, dried at room temperature and then stained with 0.1% crystal violet. After removing the crystal violet solution, wells were washed with PBS to remove unbound dye. The optical densities (ODs) of the plates were read at 630 nm using a microtiter plate reader. Each assay was performed in duplicate. As a negative control, brain heart infusion broth with 1% glucose medium was used to determine the background OD. The OD cut-off was then determined as the average OD of the negative control + 3× the standard deviation of the negative control. The OD cut-off value was calculated separately for each microtiter plate. Biofilm formation by isolates was calculated and categorized according to the absorbance of the crystal violet-stained attached cells (Table 1). Staphylococcus epidermidis ATCC 35984 was used as the biofilm-producer control strain. 25,26

Extraction of Genomic DNA

Genomic DNA was extracted from pure cultures using the High Pure PCR Template Preparation Kit (Roche, Germany), according to the manufacturer's guidelines. The concentration of DNA was assessed using a spectrophotometer.

Detection of Biofilm Encoding Genes and pvl Gene

All 98 MRSA isolates were tested for the presence of the pvl gene and biofilm-encoding genes (icaA, icaD, fnbA, fnbB, clfA, clfB, cna, eno) with the degenerate primers listed in Table 2.

Pulsed-Field Gel Electrophoresis

PFGE based on SmaI macrorestriction analysis was performed using the CDC laboratory protocol for S. aureus. 28 The PFGE was run on a CHEF DR III system (Bio-Rad, CA, USA) with optimum settings as follows: initial switch 5 s, final switch 40 s, run time 21 hrs, voltage 6 V/cm and a SeaKem Gold agarose (Lonza, Rockland, USA) gel concentration of 1%. Analysis of PFGE clusters was performed using the BioNumerics software package (Applied Maths, Sint-Martens-Latem, Belgium), using the Dice coefficient, and visualized as a dendrogram by the unweighted pair group method.

Statistical Analysis

The relationship between biofilm formation and antibiotic resistance among MRSA isolates was evaluated by the Pearson Chi-Square test using SPSS version 21. P-values less than 0.05 were considered to be significant.

Results

A total of 98 MRSA isolates were collected from children referred to the pediatric hospital during 2014-2015. Of these patients, 51 (52%) were girls and 47 (48%) were boys. The median age of the patients was 45 ± 5 months (1 month to 14 years). The MRSA isolates were recovered from respiratory secretions (57%), blood (15%), wounds (10%), the ear (8%), skin abscesses (5%) and the eye (5%). All isolates were susceptible to linezolid and vancomycin and resistant to penicillin and cefoxitin. The rates of resistance to the majority of antibiotics tested varied from 10% to 62% (Figure 1). Linezolid and vancomycin showed good activity against MRSA isolates. The vancomycin susceptibility rates are shown in Table 3.
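Before turning to the discussion, the plate-specific cut-off rule described in the Methods above is easy to express in a few lines. The sketch below assumes the widely used Stepanović category bounds (1×, 2× and 4× the cut-off), since Table 1 is not reproduced here, and the function name is illustrative:

```python
import numpy as np

def classify_biofilm(od_sample, od_negative):
    """Classify a crystal-violet OD reading against the plate-specific
    cut-off ODc = mean(negative control) + 3*SD(negative control)."""
    odc = np.mean(od_negative) + 3 * np.std(od_negative)
    od = np.mean(od_sample)
    if od <= odc:
        return "non-adherent"
    if od <= 2 * odc:
        return "weakly adherent"
    if od <= 4 * odc:
        return "moderately adherent"
    return "strongly adherent"

print(classify_biofilm([0.41, 0.39], [0.09, 0.10, 0.11]))  # moderately adherent
```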
Discussion

S. aureus is included in the group of "ESKAPE" bacteria, which comprise the MDR pathogens that are currently considered the biggest concern for humanity. 34,35 There is a relative abundance of different antibiotic groups for the treatment of MRSA. 36,37 This is underlined by the recent WHO report urging drug companies to invest in and target various drug-resistant bacteria during antibiotics research, which also includes MRSA. 38 The pathogenicity of S. aureus is related to its ability to produce toxins and extracellular factors such as biofilms that enable bacterial adhesion and resistance to phagocytosis. 39,40 It is now estimated that biofilms are responsible for more than 65% of nosocomial infections and 80% of all microbial infections. 41 During biofilm formation by different bacterial species, the transmission of antimicrobial resistance markers occurs more frequently, and the transfer of antibiotic resistance from Enterococcus to more pathogenic bacteria such as S. aureus is a major threat. 41

In this study, 63% of isolates were capable of biofilm formation by the microtiter plate method: 1% of isolates were strongly adherent, 8% moderately adherent, 54% weakly adherent and 37% non-adherent, which matched the results of the studies conducted by Lotfi et al and Yousefi et al. 25,42 Studies show that the microtiter plate method is more sensitive and specific than other methods and has been introduced as a gold standard in biofilm identification. 10,23,24 In this study, all isolates were susceptible to vancomycin and linezolid, while 62% of isolates showed resistance to erythromycin, 57% to clindamycin, 24% to trimethoprim-sulfamethoxazole, 24% to gentamicin, 12% to rifampin and 10% to minocycline. Antibiotic resistance was generally higher in biofilm-producing strains than in other strains, but the association with biofilm formation was statistically significant only for erythromycin and clindamycin resistance. All genes involved in biofilm formation, including clfA, clfB, fnbA, fnbB, cna, eno, icaD and icaA, were identified in all S. aureus isolates. In the study by Yousefi et al in Iran, the prevalence of biofilm-related genes was 100%, 42 while in the study by Mohamed et al in Iraq, the prevalence of the fnbA, clfA and cna genes was, respectively, 56%, 56% and 81%. 43 The results of this study and other studies indicate that biofilm formation in Staphylococcus strains is dependent on environmental conditions and is influenced by environmental signals that can respond to external stress and inhibitory concentrations of antibiotics. 24 Failure in biofilm formation despite the presence of ica genes can be due to the inactivation of ica operons by activation of the icaR repressor. 44

In the present study, the frequencies of SCCmec types were SCCmec I (1%), SCCmec III (18%), SCCmec IVa (75%), SCCmec IVb (4%) and SCCmec IVc (2%). SCCmec II, SCCmec IVd and SCCmec V types were not detected.

(Figure 1: antimicrobial resistance patterns of the MRSA isolates.)

Results of previous studies indicated that strains carrying large chromosomal cassettes, such as SCCmec I-III, are often resistant to non-β-lactam antibiotic classes and rarely carry the pvl gene. In contrast, strains carrying smaller chromosomal cassettes, such as SCCmec IV and SCCmec V, are less resistant to non-β-lactam antibiotic classes and often carry the pvl gene. 45,46
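This cassette-size generalization can be summarized as a simple lookup. The sketch below encodes only the rule of thumb stated above; it is a heuristic for orientation, not a validated classifier, and the function name is ours:

```python
def likely_origin(sccmec_type):
    """Rule-of-thumb mapping from SCCmec type to presumed origin:
    large cassettes (I-III) -> healthcare-associated (HA-MRSA),
    small cassettes (IV, V) -> community-acquired (CA-MRSA)."""
    large = {"I", "II", "III"}
    base = sccmec_type.rstrip("abcd")  # strip subtype letters, e.g. "IVa" -> "IV"
    return "HA-MRSA" if base in large else "CA-MRSA"

print(likely_origin("IVa"))  # CA-MRSA, the dominant type (75%) in this study
```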
In this study, the strains showed less resistance to non-β-lactam drugs such as gentamicin, minocycline and rifampin, and 41% of the strains carried the pvl gene. The PFGE technique was used here as a powerful discriminative tool to investigate the epidemiological characteristics of MRSA strains. This method has high resolution and reproducibility and is used as the gold standard method for typing this genus. Genotyping techniques such as PFGE are helpful in finding a transmissible clone and in infection control and prevention measures. In this study, 5 common types, comprising 4 to 37 isolates each, and 2 single types were identified. Common type 1, with 37 isolates, was the dominant clone; all of its strains carried SCCmec IV, and most were isolated from outpatients with respiratory infections, whereas common type 2, with 27 isolates, was mostly obtained from inpatients in different departments of the hospital. Based on these results, it is likely that common type 1 colonizes the respiratory tract of children and circulates in the community, whereas common type 2 circulates in the hospital and its different wards. Common type 3, with 14 isolates, also had the same antibiotic resistance pattern and was isolated only from the emergency and surgical departments.

A similar study by Ohadian Moghadam et al, in 2017, classified MRSA strains using the PFGE technique in Iran. In that study, 43 MRSA strains were isolated from wound swabs of patients referred to Shahid Motahhari Hospital (specializing in the treatment of burns). After performing PFGE, 5 common types and 31 single types were identified. The investigation indicated that each common type represented an outbreak, because the isolates were collected over an identical time interval, and the diversity of strains was explained by the acquisition of MRSA from various sources. 47 Another study, by Hussein et al, typed 114 strains of S. aureus isolated from healthcare workers using the PFGE technique in Iraq. In that study, 8 common types were identified, and more than 50% of isolates belonged to types A and B, indicating infection from the same source. 48 In the present study, since no sampling of hospital personnel and equipment was carried out, it is impossible to trace the source of infection and its transmission to patients in different departments of the hospital; this would require a large-scale study. The important point to note is that the infection is transmitted from the community to the hospital, which must be prevented by appropriate infection control measures.

Conclusion

In this study, most of the strains belonged to CA-MRSA, because they mostly carried the SCCmec IVa cassette and were highly susceptible to non-beta-lactam drugs such as minocycline and rifampin. According to the PFGE results, cross-sectional circulation of clones was observed in the hospital, which requires careful control of infection in different parts of the hospital.
Humoral autoimmune response heterogeneity in the spectrum of primary biliary cirrhosis

Objective

To compare autoantibody features in patients with primary biliary cirrhosis (PBC) and individuals presenting antimitochondria antibodies (AMAs) but no clinical or biochemical evidence of disease.

Methods

A total of 212 AMA-positive serum samples were classified into four groups: PBC (definite PBC, n = 93); PBC/autoimmune disease (AID; PBC plus other AID, n = 37); biochemically normal (BN) individuals (n = 61); and BN/AID (BN plus other AID, n = 21). Samples were tested by indirect immunofluorescence (IIF) on rat kidney (IIF-AMA) and ELISA [antibodies to the pyruvate dehydrogenase E2 complex (PDC-E2), gp-210, Sp-100, and CENP-A/B]. AMA isotype was determined by IIF-AMA. The avidity of anti-PDC-E2 IgG was determined by 8 M urea-modified ELISA.

Results

High-titer IIF-AMA was more frequent in PBC and PBC/AID (57 and 70 %) than in BN and BN/AID samples (23 and 19 %) (p < 0.001). Triple-isotype IIF-AMA (IgA/IgM/IgG) was more frequent in PBC and PBC/AID samples (35 and 43 %) than in BN samples (18 %; p = 0.008; p = 0.013, respectively). Anti-PDC-E2 levels were higher in PBC (mean 3.82; 95 % CI 3.36–4.29) and PBC/AID samples (3.89; 3.15–4.63) than in BN (2.43; 1.92–2.94) and BN/AID samples (2.52; 1.54–3.50) (p < 0.001). Anti-PDC-E2 avidity was higher in PBC (mean 64.5 %; 95 % CI 57.5–71.5 %) and PBC/AID samples (66.1 %; 54.4–77.8 %) than in BN samples (39.2 %; 30.9–37.5 %) (p < 0.001). PBC and PBC/AID samples recognized more cell domains (mitochondria, nuclear envelope, PML/Sp-100 bodies, centromere) than BN (p = 0.008) and BN/AID samples (p = 0.002). Three variables were independently associated with established PBC: high-avidity anti-PDC-E2 (OR 4.121; 95 % CI 2.118–8.019); high-titer IIF-AMA (OR 4.890; 2.319–10.314); antibodies to three or more antigenic cell domains (OR 9.414; 1.924–46.060).

Conclusion

The autoantibody profile was quantitatively and qualitatively more robust in definite PBC as compared with AMA-positive biochemically normal individuals.

Introduction

Autoantibodies are hallmarks of autoimmune diseases (AIDs), and disease-specific autoantibodies are valuable diagnostic tools [1]. Primary biliary cirrhosis (PBC) is an autoimmune liver disease involving predominantly intrahepatic biliary duct epithelial cells [2]. Antimitochondria antibodies (AMAs) are detected in roughly 95 % of PBC patients, with a specificity of at least 98 % [3,4]. AMAs are directed to components of the inner mitochondrial multienzyme 2-oxoacid dehydrogenase complex (2-OADC) implicated in the mitochondrial respiratory chain [5]. The E2 subunit of pyruvate DC (PDC-E2) is the main antigenic moiety in 2-OADC [6]. One-third of PBC patients present a positive result in the antinuclear antibody (ANA) indirect immunofluorescence (IIF) assay on HEp-2 cells (ANA-HEp-2). Two autoantibodies detected in the ANA-HEp-2 assay appear to be specific for PBC and are observed in approximately 25 % of the patients: anti-gp210 antibodies recognize a nuclear pore glycoprotein [7], and anti-Sp-100 antibodies recognize a 53-kDa protein restricted to the promyelocytic leukemia (PML) nuclear bodies [8]. Anticentromere antibodies have also been observed in PBC, with an apparent association with portal hypertension [9,10]. The ANA-HEp-2 test is the standard screening assay for autoantibodies [11], and the immunofluorescence pattern in a positive ANA-HEp-2 test may indicate the possible autoantibody specificities present in a given sample [11-13].
AMA yields a peculiar cytoplasmic pattern in the ANA-HEp-2 assay (Fig. 1). By processing roughly 323,000 samples in the ANA-HEp-2 assay over a time frame of 8 years, we identified a considerable number of samples with the characteristic AMA-like cytoplasmic pattern (Fig. 1). Processing such samples in AMA-specific assays eventually resulted in a sizable number of AMA-positive asymptomatic individuals with normal liver enzyme serum levels (Fig. 1). The significance of such an AMA response in apparently normal individuals is unknown. Neither is it known how this immune response compares with the autoimmune response observed in clinically established PBC. The intrinsic features of any given humoral response are quite heterogeneous with respect to titer, avidity, and immunoglobulin isotype of the antibodies, as well as the spectrum of targeted antigens and epitopes. In the present study we investigated AMA serum levels, AMA isotypes, and the avidity and serum levels of anti-PDC-E2 IgG antibodies in PBC patients and in AMA-positive asymptomatic and biochemically normal individuals. Reactivity against other mitochondrial antigens and the serum levels of antibodies to gp210, Sp-100, and the centromere proteins CENP-A and CENP-B were also investigated.

Materials and methods

The samples were suspected to be AMA-positive because of the AMA-like pattern in the ANA-HEp-2 assay (Fig. 1). Those confirmed to be AMA-positive in specific assays (IIF on rodent tissue, ELISA, or Western blot) were selected for the present study. Clinical data were obtained by chart review and interviews with the physicians who ordered the tests. PBC diagnosis was established according to the American Association for the Study of Liver Diseases criteria [14]. For most patients, consistently increased alkaline phosphatase serum levels and AMA were enough to establish the PBC diagnosis. Some patients required liver biopsy for confirmation. Whenever appropriate, imaging studies were undertaken to rule out the possibility of biliary tract obstruction. Serum liver enzymes were determined in all samples of patients without a definite PBC diagnosis.

Samples were classified into four groups: PBC group (n = 93) – definite PBC according to established diagnostic criteria [14]; PBC/AID group (n = 37) – definite PBC plus any nonhepatic AID; biochemically normal (BN) group (n = 61) – individuals with no apparent disease and normal alkaline phosphatase serum levels; and BN/AID group (n = 21) – BN plus any nonhepatic AID. BN and BN/AID individuals had at least two samples separated by at least 6 months with normal alkaline phosphatase levels. Samples with slightly elevated alkaline phosphatase serum levels (less than twofold the upper normal limit) and no further evidence of PBC were excluded from the present study. Samples were processed without knowledge of their identity or the group to which they belonged. The study was approved by the National Committee for Ethics in Research (CONEP).

AMA was determined by IIF (IIF-AMA) and ELISA. IIF-AMA on in-house rodent tissue preparations was performed as described elsewhere [15]. Samples were screened at 1:40 and serially diluted up to end-point fluorescence or to 1:2,560. Positive samples were tested for AMA isotype with fluorescein isothiocyanate (FITC)-conjugated goat antibodies against human IgG (BioMérieux, Marcy l'Etoile, France), IgM and IgA (Dako, Bucks, UK) at 1:200 (anti-IgG and anti-IgM) and 1:20 (anti-IgA).
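The two-fold dilution scheme maps directly onto a reciprocal end-point titer; a minimal sketch of the series used here (the helper name is ours):

```python
def titer(last_positive_index, start=40):
    """Reciprocal end-point titer for a two-fold series starting at 1:40,
    i.e. 1:40, 1:80, ..., 1:2560 for indices 0..6."""
    return start * 2 ** last_positive_index

print([40 * 2 ** i for i in range(7)])  # [40, 80, 160, 320, 640, 1280, 2560]
print(titer(4))                         # 640 -> falls in the high stratum below
```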
ANA-HEp-2 was performed at 1:160 on HEp-2 cell slides (Bion, Des Plaines, IL, USA) following the manufacturer's instructions. In addition to the characteristic cytoplasmic mitochondria-like pattern, we registered the occurrence of reactivity to the nuclear envelope, multiple nuclear dots, and centromere. Slides were independently analyzed by two blinded readers (AD and LECA) using an Olympus B50 fluorescence microscope (Center Valley, PA, USA) at ×400 magnification. Reactivity against purified gp210 and Sp-100, and recombinant CENP-A/B, was determined by ELISA (INOVA Diagnostics), according to the manufacturer's instructions. Antibodies against PDC-E2 (M2 fraction) were determined by ELISA (Orgentec, Mainz, Germany) according to the manufacturer's instructions.

The avidity of anti-PDC-E2 IgG was determined under chaotropic conditions [16]. Samples were incubated in quadruplicate in the standard anti-PDC-E2 ELISA for 1 h. Next, for each quadruplicate set, two wells were incubated with regular washing solution and two wells were incubated with 8 M urea in PBS-T for 15 min at room temperature. Plates were then washed and further processed as per the regular ELISA. Avidity was estimated by dividing the optical density observed in the wells submitted to urea treatment by the optical density of the wells without urea treatment.

Categorical variables were analyzed by the Chi-square test. Kruskal-Wallis and Mann-Whitney tests were used to compare quantitative and semiquantitative nonparametric variables. Multiple regression analysis was used to identify variables independently associated with the classification as definite PBC. A nomogram model was used to calculate the interaction of the independent variables. A p value of less than 0.05 was considered significant.
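The avidity index from the urea-modified ELISA is a simple ratio, which the sketch below makes explicit; the function name and example ODs are illustrative, not study data:

```python
def avidity_index(od_urea_wells, od_plain_wells):
    """Avidity index of the urea-modified ELISA: mean OD of the
    8 M urea-treated wells divided by mean OD of the untreated wells,
    expressed as a percentage."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100 * mean(od_urea_wells) / mean(od_plain_wells)

# One quadruplicate set: two urea-treated and two untreated wells
print(f"{avidity_index([0.62, 0.58], [0.95, 0.91]):.0f}%")  # ~65%
```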
Results

The majority of individuals were females (90.6 %), and there was no significant difference in gender and age distribution among the four groups (Table 1). Information on current liver enzyme serum levels was available for all BN and BN/AID subjects, but not for all PBC and PBC/AID patients. Increased alkaline phosphatase serum levels were observed at the time of the study in 37 % of PBC patients and in 25 % of PBC/AID patients. Normal alkaline phosphatase levels in the PBC and PBC/AID groups were due to ursodeoxycholic acid therapy, which was used in the majority of these patients (Table 1). Liver biopsy information was available for one-quarter of PBC patients and half of PBC/AID patients, most of whom exhibited lesions compatible with stages II and III (Table 1).

IIF-AMA was positive in 94.3 % (200/212) of the samples (Table 2) and the remaining 12 samples were reactive in the PDC-E2 ELISA or WB-AMA. IIF-AMA frequency was slightly lower in the BN/AID group (p = 0.021). IIF-AMA titers were arbitrarily divided into low (1:40-1:80), medium (1:160-1:320), and high (1:640-1:2,560) strata. High-titer IIF-AMA was associated with the PBC and PBC/AID groups, whereas low-titer IIF-AMA was associated with the BN and BN/AID groups (Fig. 2a). In addition, there were some differences in IIF-AMA isotype among the groups (Table 2). All IIF-AMA-positive samples had the IgG isotype. IgM IIF-AMA was more frequent in PBC and PBC/AID than in the BN group (p = 0.003 and p = 0.004, respectively). IgA IIF-AMA was also more frequent in PBC and PBC/AID than in the BN group (p = 0.023 and p = 0.007, respectively). Finally, PBC and PBC/AID had a higher frequency of samples with triple-isotype IIF-AMA than the BN group (p = 0.013 and p = 0.008, respectively) (Table 2; Fig. 2b).

The four groups were equivalent in the frequency of positive samples for anti-PDC-E2 (Table 2). However, the PBC and PBC/AID groups presented higher serum levels of anti-PDC-E2 antibodies than the BN and BN/AID groups (Fig. 3a). There was no difference in serum levels of anti-PDC-E2 antibodies between PBC and PBC/AID, or between BN and BN/AID groups. Receiver operating characteristic (ROC) curve analysis indicated that the best anti-PDC-E2 serum level to discriminate PBC and PBC/AID from the BN and BN/AID groups was 3.0 IU/mL (Fig. 3c). The avidity of anti-PDC-E2 IgG was determined in all anti-PDC-E2-reactive samples (170/212) and was found to be significantly higher in PBC and PBC/AID than in the BN group (Fig. 3b). The best anti-PDC-E2 IgG avidity level to discriminate PBC and PBC/AID from the BN and BN/AID groups was 64 % (Fig. 3d).

WB also showed some differences among the four groups (Table 2). The BN group had a lower frequency of reactive samples than the PBC (p = 0.019) and PBC/AID (p = 0.048) groups. In addition, the PBC group presented a higher frequency of 74-kDa-reactive samples than the BN group (p = 0.004) (Table 2). The frequency of anti-gp210 antibodies was equivalent among the four groups, but anti-gp210 serum levels were higher in the PBC and PBC/AID groups (Fig. 4b). In fact, anti-gp210 levels above 100 AU/mL were observed in 16 (12.9 %) of the PBC and PBC/AID samples but in only 1 of the BN and BN/AID samples (Fig. 4c).

Several serological parameters presented a significant association with the classification of samples as belonging to patients with definite PBC (Table 3), including high-titer IIF-AMA, triple-isotype IIF-AMA, high-titer and high-avidity anti-PDC-E2 antibodies, and three or more cellular domains recognized by autoantibodies. Multiple regression analysis identified three independent variables presenting a significant association with definite PBC: high-titer IIF-AMA, high-avidity anti-PDC-E2 antibodies, and antibodies to three or more antigenic cell domains (Table 4).

Discussion

The present study disclosed several differences in the intrinsic features of the autoantibody profile in individuals with AMA reactivity and normal levels of alkaline phosphatase as opposed to patients with definite PBC. Patients with definite PBC displayed a more vigorous autoantibody profile, represented by higher serum levels of IIF-AMA, a higher frequency of triple-isotype IIF-AMA, higher serum levels and higher avidity of anti-PDC-E2 IgG, and higher titers of anti-gp210 antibodies. In addition, the autoantibody profile in patients with definite PBC addressed a broader set of antigenic targets, recognizing a higher number of cell domains than in individuals with no biochemical or clinical evidence of PBC. These differences held regardless of the presence of an associated extrahepatic AID. Multiple regression analysis identified three independent risk factors for the classification of a sample as belonging to biochemically normal individuals or to patients with definite PBC, namely high-titer IIF-AMA, high-avidity anti-PDC-E2 antibodies, and widespread reactivity against multiple cell domains. This observation might be clinically useful in the instance of an unexpected positive AMA result in an individual with no clinical and biochemical evidence for PBC.
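To illustrate how such independent risk factors might be combined in practice (in the spirit of the nomogram model mentioned in the Methods), the sketch below multiplies the reported odds ratios onto a baseline odds. The baseline value is an assumption, since the regression intercept is not reported here, and the function name is ours:

```python
# Reported odds ratios for definite PBC (Table 4)
ODDS_RATIOS = {
    "high-avidity anti-PDC-E2": 4.121,
    "high-titer IIF-AMA": 4.890,
    ">=3 antigenic cell domains": 9.414,
}

def pbc_odds(findings, baseline_odds=1.0):
    """Multiply an assumed baseline odds by the OR of each positive
    finding, as a logistic-regression-style score (illustrative only)."""
    odds = baseline_odds
    for name in findings:
        odds *= ODDS_RATIOS[name]
    return odds

odds = pbc_odds(["high-titer IIF-AMA", ">=3 antigenic cell domains"])
print(f"odds ~ {odds:.1f}, probability ~ {odds / (1 + odds):.0%}")
```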
Obviously, these findings must be confirmed by similar studies in independent series of AMA-reactive biochemically normal samples and by longitudinal studies comparing AMA-positive samples before and after the development of liver involvement. What is the exact clinical situation of AMA-positive asymptomatic individuals with normal alkaline phosphatase levels? Could they represent preclinical stages of PBC? Could they represent normal individuals with no relationship to the PBC disease spectrum? Because of the design of the present study and the setting in which samples were obtained, one can determine only that they had no clinical or biochemical evidence of PBC at the moment of the study. However, we cannot rule out the possibility that some of them had varied degrees of histological biliary tract involvement typical of PBC and therefore represented preclinical stages of histopathologically established disease. Regardless of the histological status, it is reasonable to admit that some of these individuals will eventually develop definite PBC. In fact, previous follow-up studies of AMA-positive asymptomatic cohorts have shown that a significant proportion of individuals will develop overt disease within a variable time interval [14,19]. In this context, it is relevant to consider that the 82 AMA-positive and biochemically normal individuals represent 0.02 % of the 323,000 individuals screened. This frequency is not far from the estimated prevalence of PBC in the general population [14]. However, because of the cross-sectional design of the present study we cannot determine the fraction of these individuals who will eventually develop overt PBC. With these restrictions in mind, we may consider that these individuals might represent a heterogeneous group, comprising potential patients at preclinical stages of PBC and normal subjects with no relationship to PBC.

Overall, the obtained data are quite provocative because they shed some light onto the nature of the autoimmune response at very early stages of PBC. It is well established that disease-specific autoantibodies frequently precede the onset of symptoms and the diagnosis of the cognate diseases by months or years. Examples include anti-native DNA antibodies and systemic lupus erythematosus [20], anti-citrullinated peptide antibodies or rheumatoid factor and rheumatoid arthritis [21], anti-thyroid peroxidase antibodies and Hashimoto thyroiditis [22], anti-insulin antibodies and type I diabetes mellitus [23], and AMAs and PBC [24]. The consistent demonstration of disease-specific autoantibodies preceding the clinical onset of diverse AIDs seems to indicate that immunological disturbances regularly precede the establishment of overt disease for a variably long preclinical period in which no or very low inflammatory activity is present in the target tissues. Understanding the immunology of such "pre/low-inflammatory" stages of AIDs may allow the development of effective immunomodulatory therapy to successfully prevent the full development of such illnesses. An initial approach to studying the immunology of the "pre/low-inflammatory" stages of AIDs is to analyze the intrinsic features of autoantibodies before and after their clinical onset. PBC appears to be an appropriate model for this objective because its natural history comprises a long period in which circulating autoantibodies are detected in the absence of clinical and biochemical events [25].
The relationship between this particular autoimmune response and the etiology of PBC, as well as the precise mechanisms of bile duct destruction, remains unclear, and some authors believe that autoantibodies represent an epiphenomenon not directly related to the disease pathophysiology. The findings of the present study suggest that along the transition from early to later stages of PBC there are marked qualitative and quantitative changes in the autoantibody profile, represented by a higher rate of production and higher avidity of AMA, as well as the spreading of antigenic targets to several cell domains. Although the methodological restrictions of the present study preclude definite conclusions, these preliminary findings encourage follow-up studies to determine the longitudinal behavior of the humoral autoimmune response along the transition from early to later stages of PBC.

(Fig. 4 legend: b anti-gp210 antibody serum levels were higher in PBC and PBC/AID groups as compared with the BN and BN/AID groups (p = 0.0032); c anti-Sp-100 antibody serum levels showed no difference among the groups of samples (p = 0.808). BN, AMA-reactive asymptomatic individuals with normal levels of alkaline phosphatase; BN/AID, same as BN but with any associated extrahepatic autoimmune disease; PBC, definite primary biliary cirrhosis; PBC/AID, definite primary biliary cirrhosis and any associated extrahepatic autoimmune disease.)

In summary, the present study has provided evidence for marked qualitative and quantitative differences in the autoantibody profile of AMA-positive individuals with normal alkaline phosphatase levels and patients with established PBC. The present data indicate that high-titer AMA, high-avidity anti-PDC-E2 antibodies, and a widespread response to multiple cell domains represent risk factors for a given AMA-positive sample to be associated with definite PBC. This is an original finding that may shed some light onto the understanding of the obscure immunological abnormalities preceding the instatement of the full-blown inflammatory stage of PBC and other AIDs. Studies on patients with different PBC histological stages and on AMA-positive individuals with no biochemical or histological evidence of liver disease are required to validate these preliminary results.

Acknowledgments: the technical assistance of Cristiane Gallindo in processing the Western blot experiments, Marcia Pereto in preparing the rodent tissue slides, and José de Sá in the digital survey of patients' records is gratefully acknowledged. This study was supported by grant 2009/51887-0 from the Sao Paulo State Research Foundation.
Anomalous Method of Cement-Sand Ratio Evaluation in Hardened Cement-Mortar or Cement-Concrete

Construction, or creation, is a genetic human tendency, and palaces, pyramids, houses, villas, bridges, fly-overs, etc. are living examples of this inborn tendency. Civil construction is possible through proper binding materials which can bind all the construction components together. Lime was used for centuries for this purpose, but in the modern era Portland cement has taken its place. Portland cement is widely used to bind bricks, stones, or other construction materials in the form of cement-mortar or cement-concrete. Cement-mortar or cement-concrete is made by mixing cement in a certain ratio with fine aggregates (sand), or with fine and coarse aggregates together, and this ratio is quite important for the durability, strength, and sustainability of any civil construction. Sometimes a civil construction unfortunately collapses, and the cement-sand ratio becomes the crucial point of investigation to ascertain the cause of failure. The most used method for this purpose is the acid digestion method, which is based on the insoluble percentage of silica in the cement-mortar or cement-concrete, but this method is quite lengthy, having multiple stages of filtering, drying, and weighing, causing multiple sources of error. For precision, the silica percentage is nowadays calculated through EDX/XRF. The EDX/XRF method is fast and precise, but it requires a quite costly setup and is also quite time-consuming in respect of proper sampling and sample preparation. The present method has emerged as almost as precise as the acid digestion method but gives quicker results with fewer error sources.

The properties of pure cement paste are meant to be improved by the addition of finer nanoparticles [6]. Fine nanocrystals of Ca(OH)2 and AFm are considered favorable for the strength of cement paste [7][8][9]. One of the laws that discusses the strength of mortar is Abrams' generalization law [10], which states that mortar strength varies inversely with the water/cement ratio. The law developed to describe the strength of cement paste is stated as:

Strength = K1 / K2^(W/C)

where K1 and K2 are constants, W is the mass of water, and C is the mass of cement. Abrams' law holds for any mortar age between 3 and 365 days [11] (a short numeric sketch is given below).

Current Study

The current study examined an alternative method of cement-sand ratio evaluation in hardened cement-mortar or cement-concrete. Sand used in civil construction work has to qualify certain conditions, as per IS (Indian Standards), to be used as construction sand, of which the most important is grain size. Construction sand should have a grain size between 0.075 mm and 4.75 mm and should not pass through a 100 BSS sieve having a mesh size of 150 microns. If some part of the sand passes through the 100 BSS test sieve, it should be less than 15%. Pit sand does not have grains much smaller than 150 microns, so it passes in almost negligible amounts through the 100 BSS test sieve. A similar story holds for river sand, but manufactured sand does contain such fines, so its grading has to be ascertained by proper sieving. Cement, both OPC and PPC, has grain sizes smaller than 150 microns, so both types easily pass through the 100 BSS test sieve. For precision, the percentage of silica in the cement-mortar and cement-concrete is nowadays calculated through EDX/XRF.
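Returning to Abrams' law from the introduction, a quick numeric illustration may help: the sketch below evaluates the predicted relative strength for a few water/cement ratios. The constants K1 and K2 are placeholder values of the order commonly quoted in textbooks, not values fitted in this paper.

```python
# Minimal sketch of Abrams' law, Strength = K1 / K2**(W/C).
# K1 and K2 below are placeholder constants for illustration only;
# real values must be fitted to strength tests of the actual mix.

def abrams_strength(w_over_c, k1=96.5, k2=8.2):
    """Predicted compressive strength (MPa) for a given water/cement ratio."""
    return k1 / k2 ** w_over_c

for wc in (0.40, 0.50, 0.60):
    print(f"w/c = {wc:.2f} -> predicted strength = {abrams_strength(wc):.1f} MPa")
```

As expected from the law, increasing the water/cement ratio lowers the predicted strength monotonically.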
The EDX/XRF method is fast and precise, but it requires a quite costly setup and is also quite time-consuming in respect of proper sampling and sample preparation, whereas the present method has emerged as almost as precise as the acid digestion method but gives quicker results with fewer error sources.

Procedure

A mixture of cement and pit sand in a ratio of 1:4 is made by proper mixing. Water is added to the mixture in the proper amount to achieve the proper consistency of the mortar. The mortar is placed into moulds to make blocks. The settled and hardened blocks were cured in water for seven days and then dried for the next 28 days. A block was broken after drying, and a small piece of almost 100 gram weight was taken and kept in an oven at 100 degrees Celsius for 5-6 hours for complete drying. The piece is weighed and the weight noted down. This piece is placed in a beaker as such, without crushing, and concentrated HCl (hydrochloric acid) is added to fully submerge the piece. The reaction starts abruptly. The next day, the beaker containing the concentrated HCl and the piece is tested for completion of the reaction by adding a small amount of concentrated HCl: if the reaction starts again (bubble formation), more HCl is required; if not, the HCl is sufficient. The beaker is then placed on a hot plate set at 80 degrees Celsius for 2-3 hours to make sure all the soluble components of the cement get dissolved in the HCl. The piece is meanwhile stroked lightly and stirred with a glass rod; by stroking and stirring, the piece breaks into its finer components. After the complete fissuring of the piece, the whole material in the beaker is filtered. During filtration, washing with hot distilled water is done multiple times to remove excess acid. The residue, together with the filter paper (Whatman 40), is placed in an oven at 100 degrees Celsius for 7-8 hours for complete drying. After complete drying, the material is weighed carefully and noted down, and then transferred to a 100 BSS test sieve for careful sieving. The material remaining on the test sieve is weighed carefully again, as is the material that passed through the test sieve, and both weights are noted down. The full road map of the experiment is shown in Figure 2.

Percentage of insoluble additives of cement = 2.694/11.822 × 100 = 22.788%

As we have taken PPC cement for our experiment, the values of insoluble additives are in good agreement with the theoretical value, as shown in Table 1 (observation table). For cement-concrete, the main difference will remain in removing the coarse aggregates before weighing, through proper sieving. The IS states that sand should not contain a grain size finer than 0.075 mm and also should not contain organic material, soluble salts, or similar materials; in the light of the IS guidelines, this method can be utilized for quick and almost fair results without EDX/XRF.

Methodology

Summing over 21 papers and 5 different types of calculations in the current work, the current study has initiated possible areas for further studies on methods of cement-sand ratio evaluation in hardened cement-mortar or cement-concrete. The findings from this paper are therefore useful because the present method has emerged as almost as precise as the acid digestion method but gives quicker results with fewer error sources. With likeness and dissimilarity in mind, future studies should focus more on how to cope with the lengthy methods that have multiple stages of filtering, drying, and weighing, causing multiple sources of error in the acid digestion method.
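The arithmetic of the method reduces to a few mass fractions. The sketch below is one possible reading of the mass balance: it assumes the sieve-retained acid-insoluble residue is sand, the fines passing the sieve are the insoluble additives of the cement, and all dissolved mass came from cement. The 2.694 g figure is quoted in the text; the dry-piece and retained-sand weights are hypothetical values chosen so that the cement content equals the 11.822 g used in the paper's own calculation.

```python
# Hedged sketch of the mass balance implied by the sieving method; the
# attribution of masses below is an interpretation, not the paper's wording.

def evaluate(m_dry, m_retained, m_passed):
    """Return (cement g, sand g, additive %) from the three weighings."""
    m_sand = m_retained                 # grams retained on the 100 BSS sieve
    m_cement = m_dry - m_retained       # dissolved fraction plus passing fines
    additive_pct = 100.0 * m_passed / m_cement
    return m_cement, m_sand, additive_pct

# m_passed = 2.694 g is from the text; m_dry and m_retained are hypothetical.
m_cement, m_sand, additive_pct = evaluate(m_dry=59.11, m_retained=47.288,
                                          m_passed=2.694)
print(f"insoluble additives of cement = {additive_pct:.3f} %")   # ~22.79 %
print(f"cement : sand ~ 1 : {m_sand / m_cement:.2f}")            # ~1 : 4
```

With these inputs the computation reproduces both the 22.788 % insoluble-additive figure and the 1:4 mix ratio stated in the procedure.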
Figure 1: Elements present in cement.
Figure 2: Full road map of the experiment.
Cerebrospinal fluid Presenilin-1 increases at asymptomatic stage in genetically determined Alzheimer's disease

Background: Presenilin-1 (PS1), the active component of the intramembrane γ-secretase complex, can be detected as soluble heteromeric aggregates in cerebrospinal fluid (CSF). The aim of this study was to examine the different soluble PS1 complexes in the lumbar CSF (CSF-PS1) of individuals with Alzheimer's disease (AD), particularly in both symptomatic and asymptomatic genetically determined AD, in order to evaluate their potential as early biomarkers. Methods: Western blotting, differential centrifugation and co-immunoprecipitation served to determine and characterize CSF-PS1 complexes. We also monitored the assembly of soluble PS1 into complexes in a cell model, and the participation of Aβ in the dynamics and robustness of the stable PS1 complexes. Results: There was an age-dependent increase in CSF-PS1 levels in cognitively normal controls, with the different complexes represented in similar proportions. The total levels of CSF-PS1, and in particular the proportion of the stable 100-150 kDa complexes, increased in subjects with autosomal dominant AD who carried PSEN1 mutations (eight symptomatic and six asymptomatic ADAD) and in Down syndrome individuals (ten demented and ten non-demented DS), compared with age-matched controls (n = 23), even prior to the appearance of symptoms of dementia. The proportion of stable CSF-PS1 complexes also increased in sporadic AD (n = 13) and mild-cognitive impaired subjects (n = 12), relative to age-matched controls (n = 17). Co-immunoprecipitation demonstrated the association of Aβ oligomers with soluble PS1 complexes, particularly the stable complexes. Conclusions: Our data suggest that CSF-PS1 complexes may be useful as an early biomarker for AD, reflecting the pathology at the asymptomatic state. Electronic supplementary material: The online version of this article (doi:10.1186/s13024-016-0131-2) contains supplementary material, which is available to authorized users.

Background

Alzheimer's disease (AD) is a progressive neurodegenerative disorder that involves a gradual decline in memory and other cognitive functions, representing the most common cause of dementia in the elderly. Apart from the common late-onset forms of sporadic AD (sAD), rare mutations in the genes encoding the β-amyloid precursor protein (APP; chromosome 21q21), presenilin-1 (PSEN1; chromosome 14q24.3) and presenilin-2 (PSEN2; chromosome 1q31-q42) cause autosomal dominant AD (ADAD; also known as familial AD or FAD) [1]. ADAD exhibits a similar phenotype to sAD but with an earlier clinical onset. The APP gene encodes a large type I transmembrane protein that upon proteolytic processing [2] can generate the β-amyloid peptide (Aβ), the major constituent of senile plaques and the triggering effector of AD. In the amyloidogenic pathway the Aβ peptide is generated by sequential cleavage of APP, starting with the cleavage of the large extracellular domain by the β-secretase cleaving enzyme (BACE1), which is followed by the successive action of γ-secretase at the membrane-spanning domain [3]. This γ-secretase is an intramembrane protease complex composed of presenilin-1 (PS1), nicastrin, APH1 (anterior pharynx-defective 1) and PEN2 (presenilin enhancer 2) [4]. PS1 is the catalytic subunit of the γ-secretase complex [5]. Duplications of APP and neighboring sequences are also linked to an early age of AD onset [6].
As such, Down's syndrome (DS) is also associated with the development of AD, since the APP gene lies on chromosome 21 and the extra copy leads to Aβ over-expression. Accordingly, most DS patients who live beyond the age of 40 years develop typical AD brain neuropathology, and a significant proportion develop additional cognitive decline [7][8][9]. Thus, both these disease conditions, ADAD and DS, can be considered early-onset forms of genetically determined AD [10]. The classic biomarkers, total and phospho-tau, as well as Aβ42, have shown diagnostic accuracy for incipient AD [11]. However, total and phospho-tau also increase as a result of other neurological processes, while the levels of the pathological Aβ42 species, which increase in the AD brain, are decreased in CSF due to increasing deposition, hindering the interpretation of changes in their soluble levels at early stages. Thus, there is still a need to identify additional early biomarkers. We recently demonstrated the presence of heteromeric PS1 complexes in human CSF (CSF-PS1) and serum, and that increases in the proportion of stable CSF-PS1 complexes served to discriminate sAD from non-disease controls [12]. PS1 is known to undergo endoproteolytic cleavage as part of its maturation, generating N- and C-terminal fragments (NTF and CTF) of about 29 and 20 kDa, respectively [13]. Both the NTF and CTF of PS1 contain several transmembrane domains [14], and our earlier data indicated that PS1 fragments might be highly unstable in CSF and serum, and that they spontaneously form complexes due to the large number of hydrophobic regions. Indeed, we demonstrated the presence of stable 100-150 kDa heteromeric complexes in CSF that contained the NTF and CTF of PS1 (possibly also involving other γ-secretase components), as well as other large complexes. Some of these complexes were unstable under denaturing conditions and resolved as ~50 kDa heterodimers upon electrophoresis [12]. Moreover, an increase in the proportion of stable 100-150 kDa complexes appears to be a good marker to discriminate pathological AD samples from controls. As such, we set out to further characterize these soluble PS1 complexes and the involvement of oligomeric Aβ in their formation. We also evaluated the possibility that the proportions and nature of the CSF-PS1 complexes may vary during aging. The main interest was to investigate the levels of CSF-PS1 complexes in ADAD, sAD and DS, particularly in AD and DS subjects who had not yet developed dementia, also including mild-cognitive impaired (MCI) subjects. Thereby, we attempted to determine whether alterations in the levels of these complexes might reflect the pathological state at early, asymptomatic stages. Using a collection of well-characterized CSF samples from sAD, PS1 complexes were also analyzed. Genetically determined AD offers unique opportunities to analyze diagnostic biomarkers at asymptomatic stages, particularly given that only in this group is a diagnosis guaranteed for the early comparison of biomarkers.

Patients

Lumbar CSF samples were obtained from ADAD subjects, all carriers of PSEN1 mutations, who were part of the Genetic Counseling Program (PICOGEN) at the Hospital Clínic, Barcelona [15]. This group included 14 subjects carrying PSEN1 mutations (including six asymptomatic mutation carriers), and eight age-matched non-mutation carriers from the same families (younger non-disease controls: yNC).
The clinical and CSF data of some of these patients have been reported previously [16,17]. We also included lumbar CSF samples from 10 DS subjects with Alzheimer's-type dementia (dDS) and 10 DS subjects without signs of memory decline (ndDS), obtained at the Hospital Sant Pau, Barcelona, along with 15 additional age-matched yNC obtained from both hospitals. In addition, 15 patients with dementia due to sAD, 12 subjects with MCI and 17 age-matched elderly controls (eNC) were also obtained from the Hospital Sant Pau, Barcelona. See Table 1 for details of the clinical and demographic data. All AD patients fulfilled the 2011 NIA-AA criteria for dementia or MCI due to AD [18,19], while the discrimination between the dDS subjects and those without dementia was assessed using the modified Cued Recall Test and the CAMDEX-DS battery [20,21]. All the control subjects had no history or symptoms of neurological or psychiatric disorders, or memory complaints. This study was approved by the ethics committee of the Miguel Hernandez University, and it was carried out in accordance with the Declaration of Helsinki.

Western blotting and immunoprecipitation

Although the denaturation temperature prior to electrophoresis has not been standardized, we found that high-temperature sample preparation for electrophoresis (98°C) produced an overall loss of CSF-PS1 immunoreactivity [24]. Hence, all analyses of PS1 in this study avoided freeze-thaw cycles (samples were aliquoted), and denaturation prior to electrophoresis was conducted at 50°C. Samples (30 μL for CSF) were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) under reducing conditions. The proteins were then transferred to nitrocellulose membranes (Schleicher and Schuell Bioscience GmbH) that were probed with PS1 antibodies directed against the N-terminal amino acids 1-20 (antibody 98/1) [24]. GAPDH (Abcam) served as a loading control for the cellular extracts. Membranes were incubated with the corresponding horseradish peroxidase-conjugated secondary antibody, and the immunoreactive signal was detected in a Luminescent Image Analyzer LAS-1000 Plus (FUJIFILM) using SuperSignal West Dura Extended Duration Substrate (Thermo Scientific). A control CSF sample was used to normalize the immunoreactive signal, and for semi-quantitative studies the intensity of the immunoreactive bands was measured by densitometry using the Science Lab Image Gauge v 4.0 software provided by FUJIFILM. Aβ peptides in CSF immunoprecipitates (see below) were resolved by 16 % Tris-tricine SDS-PAGE and detected with the 6E10 antibody (Covance Research). For immunoprecipitation, samples were precleared for 2 h at 4°C by incubation with protein A-Sepharose (Sigma-Aldrich). Immunoprecipitations were performed at 4°C by incubating 150 μL of CSF or cell media overnight with the primary PS1 C-terminal antibody 00/2 (raised against residues 301-317) [23], previously coupled to protein A-Sepharose using dimethyl pimelimidate dihydrochloride (Sigma-Aldrich Co). The precipitated proteins were washed with PBS and eluted with 0.1 M glycine buffer at pH 2.5. After pH neutralization, the supernatants were denatured in Laemmli sample buffer at 50°C for 15 min and subjected to SDS-PAGE. The membranes were then probed with anti-PS1 (98/1) and anti-Aβ (6E10) antibodies.

Sucrose gradients

PS1 complexes were analyzed by ultracentrifugation for 4 h at 4°C on a continuous sucrose density gradient (5-20 %) at 250,000 × g.
CSF aliquots (65 μL) were carefully loaded onto the top of the gradient containing 2 mL of 0.15 M NaCl, 50 mM MgCl2 and 0.5 % Brij 97 in 50 mM Tris-HCl (pH 7.4). After centrifugation, approximately 14 fractions were collected gently from the top of the tubes. Enzyme markers of known sedimentation coefficient, β-galactosidase, catalase and alkaline phosphatase, were used in the gradients to determine the approximate sedimentation coefficients. The sucrose fractions containing highly stable and unstable PS1 complexes were pooled separately, dialyzed against Tris buffer and concentrated by ultrafiltration (Amicon Ultra 10,000 MWCO, Millipore Corporation, Bedford, MA). The PS1 complexes were then immunoprecipitated with anti-PS1 00/2 as described.

Table 1 legend: In the yNC group (younger controls), the values for the control subgroup of non-mutation carriers from the same families as the carriers of PSEN1 mutations are also indicated; the rest of the cases correspond to subjects without a family history of ADAD. The PSEN1 mutations included in this study from syADAD cases ("symptomatic" autosomal dominant AD subjects) corresponded to three carriers of L286P, and one each of I439S, S169P, L173F, L235R and L282R. The psADAD subjects (pre-symptomatic subjects carrying mutations in PSEN1) were three carriers of M139T, and one each of I439S, R220G and K239N. Patients with (dDS) or without (ndDS) signs of clinical dementia were also compared with yNC; sporadic AD (sAD) and mild-cognitive impaired (MCI) subjects were compared with elderly controls (eNC). Levels of Aβ42, T-tau and P-tau were determined by ELISA; the intra-assay coefficient of variability (CV) was below 5 % and the inter-assay CV below 15 % for all the classical AD biomarkers, in agreement with previous reports [36]. The number of samples "n" for female (F) and male (M) subjects is indicated. The data represent the means ± SEM, and for age and MMSE (Mini-Mental State Examination) the range of values is also indicated. *Significantly different (p <0.05) from the yNC group; a significantly different (p <0.05) from the ndDS group; **significantly different (p <0.05) from the eNC group.

Statistical analysis

All data were analyzed using SigmaStat (Version 3.5; Systat Software Inc.), applying a one-way analysis of variance, or a Kruskal-Wallis test when the hypothesis of equality of sample variances was rejected. Pairwise group comparisons were then performed using the Student t test (two-tailed) or the Mann-Whitney U test, and the exact p values determined. The results are presented as the means ± SEM, and correlations between the variables were assessed by linear regression analyses, with p values <0.05 considered statistically significant (a code sketch of this pipeline is given below).

Results

The increase in CSF-PS1 with age

Since the main aim of the present study was to determine the changes in CSF-PS1 associated with ADAD and DS, and given that both ADAD and DS exhibit an earlier clinical onset, we first assessed whether the amount and nature of the soluble PS1 complexes vary with age. The PS1 complexes in samples from control subjects (NC) from 25 to 80 years of age were detected with the 98/1 antibody, which predominantly recognized complexes of approximately 100 and 150 kDa, together with a less abundant 50 kDa band (Fig. 1a). The identity of these bands as complexes involving NTF- and CTF-PS1 was demonstrated in a previous study [12]. This soluble 50 kDa PS1 band may represent an NTF and CTF-PS1 aggregate, as the holoprotein had a mass of ~43 kDa and differs in its electrophoretic migration [12]. PS1-NTF monomers are not detectable in human CSF samples.
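As an illustration of the statistical pipeline described in the Methods above, the sketch below reproduces the decision logic with SciPy instead of SigmaStat (a substitution; the original work used SigmaStat 3.5). All band-intensity and age values are synthetic placeholders, not study data.

```python
# Sketch of the described statistics using SciPy in place of SigmaStat.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "yNC": rng.normal(1.0, 0.2, 23),      # synthetic densitometry values
    "syADAD": rng.normal(2.2, 0.4, 8),
    "psADAD": rng.normal(1.9, 0.4, 6),
}

# Levene's test as a stand-in check for equality of variances.
if stats.levene(*groups.values()).pvalue > 0.05:
    omnibus = stats.f_oneway(*groups.values())        # one-way ANOVA
    pairwise = lambda a, b: stats.ttest_ind(a, b)     # Student t (two-tailed)
else:
    omnibus = stats.kruskal(*groups.values())         # Kruskal-Wallis
    pairwise = lambda a, b: stats.mannwhitneyu(a, b)  # Mann-Whitney U

print(f"omnibus p = {omnibus.pvalue:.4f}")
for name in ("syADAD", "psADAD"):
    p = pairwise(groups[name], groups["yNC"]).pvalue
    print(f"{name} vs yNC: p = {p:.4f}")

# Age correlation by linear regression, as for the 100 + 150 kDa complexes.
age = rng.uniform(25, 80, 40)
signal = 0.02 * age + rng.normal(0, 0.3, 40)
reg = stats.linregress(age, signal)
print(f"r = {reg.rvalue:.2f}, p = {reg.pvalue:.4g}")
```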
Since ADAD starts prior to 60 years of age [1], we sub-grouped the young and elderly NC below and above this threshold. The sum of the immunoreactivity for the major 100 and 150 kDa PS1 complexes was significantly higher (~58 %) in the elderly NC (eNC; n = 18) than in the young NC samples (yNC; n = 19; p <0.001: Fig. 1b). No differences were found between the values obtained from the two centers of sample collection. In all the NC samples, the major 100 and 150 kDa PS1 complexes were positively correlated with age (r = 0.54; p <0.001: Fig. 1c). Therefore, this age-dependent increase in PS1 complexes must be taken into account when comparing the different pathological groups with non-disease subjects, defining appropriate age-matched controls. We also attempted to assess potential differences in the class of the PS1 complexes in the NC sub-groups based on the direct analysis of the Western blots. As such, we defined the (100 + 150 kDa)/50 kDa quotient for each sample. No change was observed in the (100 + 150 kDa)/50 kDa quotient evaluated in CSF from yNC and eNC subjects (Fig. 1b).

Higher PS1 levels in symptomatic and asymptomatic ADAD

To assess whether the amount of CSF-PS1 is altered in ADAD, the levels in the age-matched yNC group were compared with those in the CSF from symptomatic (syADAD) and asymptomatic (pre-symptomatic: psADAD) subjects carrying mutations in PSEN1 in Western blots (see Table 1 and Fig. 2a). Stronger immunoreactivity for the 100 and 150 kDa complexes was evident in syADAD (~119 %; p <0.001) and in psADAD (~87 %; p <0.001) subjects compared to the yNC, with no differences between the two pathological groups (Fig. 2b). Indeed, the levels in these AD subjects were significantly higher than in the yNC sub-group composed of non-mutation carriers from the same ADAD families (p <0.001). The previously defined quotient of CSF-PS1 complexes (see above) also discriminated between the yNC and the two ADAD groups, both individually (p = 0.007 for syADAD; p = 0.027 for psADAD) and when considered as a single pathological group (p = 0.007). Thus, a higher proportion of 100 + 150 kDa CSF-PS1 complexes appears to be associated with ADAD even at pre-symptomatic stages (Fig. 2b). PS1 complexes can also be characterized by gradient ultracentrifugation [24], followed by Western blotting under denaturing conditions, which served to illustrate the existence of different CSF-PS1 complexes [12]. When CSF-PS1 complexes from yNC and syADAD subjects were characterized by sedimentation analysis on sucrose density gradients (Fig. 2c), 100-150 kDa PS1 complexes were identified close to the alkaline phosphatase marker (~140-160 kDa), along with larger complexes that sedimented in regions closer to the catalase marker (~232 kDa). These latter complexes were unstable and resolved as 50 kDa peptides by SDS-PAGE/Western blot analysis (Fig. 2c). In good agreement with the results of the CSF-PS1 complex quotient obtained from direct Western blot analysis, the samples separated by ultracentrifugation revealed a higher abundance of the highly stable 100-150 kDa PS1 complexes in the syADAD samples than in the yNC samples, more so than the complexes of the 50 kDa fragments that sedimented in the denser fractions.
This difference was clearly evident with the determination of a refined quotient, the "stability" quotient, reflecting the differences between the highly stable complexes (the 100-150 kDa heterodimers that sediment close to the internal marker of similar molecular mass) and the unstable complexes (the 50 kDa complexes that sediment closer to catalase); this quotient allowed us to discriminate syADAD (p = 0.004) from yNC samples (Fig. 2d). A short computational sketch of both quotients is given below.

Highly stable CSF-PS1 complexes are elevated in sAD and MCI

In sAD, no notable differences in total PS1 were observed between patients with dementia due to sAD, MCI due to AD, or age-matched eNC subjects (Fig. 3a, b). However, the highly stable PS1 complexes were again more abundant in probable sAD cases compared to elderly eNCs when the CSF-PS1 complexes quotient was calculated (p = 0.006; Fig. 3b). Sucrose density centrifugation profiles (Fig. 3c) and the subsequent estimation of the "stability" quotient confirmed the greater abundance of highly stable PS1 complexes in sAD compared to eNC (p = 0.02; Fig. 3d), as well as indicating that the highly stable complexes were particularly increased in MCI subjects (p = 0.008; Fig. 3d).

Fig. 1 Characterization of the CSF-PS1 complexes in younger and elderly NC subjects, and their correlation with age. a Representative Western blots of human CSF samples from non-demented control (NC) subjects arbitrarily categorized as young (yNC; ≤60 years; n = 23) and elderly (eNC; >60 years; n = 17), and probed with an anti-NTF-PS1 antibody. b Densitometric quantification of the major 100 and 150 kDa CSF-PS1 complexes (the sum of the 100 + 150 kDa CSF-PS1 bands) and the quotient derived from the immunoreactivity for the 100 and 150 kDa bands relative to that for the minor 50 kDa band in each sample [(100 + 150 kDa)/50 kDa]. The data represent the means ± SEM and were compared using a paired Student's t test: *p <0.001. c Correlation of the levels of the 100 + 150 kDa CSF-PS1 complexes with age.

Higher PS1 levels in demented and non-demented DS

DS is considered a pre-symptomatic form of AD [10]. To assess whether an increase in the CSF-PS1 complexes is also associated with DS, we analyzed CSF samples from DS patients with (dDS) or without (ndDS) signs of clinical dementia, comparing these to age-matched yNC (Fig. 4a). The cumulative immunoreactivity of the major 100 and 150 kDa bands was significantly higher in both dDS (p <0.001) and ndDS (p = 0.007) CSF than in that from yNC subjects (Fig. 4b). Remarkably, the CSF-PS1 complexes quotient also revealed consistent changes in the proportion of the different complexes for both dDS (p <0.001) and ndDS subjects (p = 0.04) relative to yNC (Fig. 4b).

The formation of stable CSF-PS1 complexes is favored by β-amyloid

Although PS1 clearly forms native complexes in CSF, little is known about the dynamics of soluble PS1 fragment assembly into heteromeric complexes. Thus, we monitored the assembly of soluble PS1 into complexes in a cell model, CHO cells over-expressing wild-type human PS1. An increase in the 29 kDa NTF of PS1 in extracts from CHO cells transfected with human PS1 corroborated that these cells over-expressed the protein (Additional file 1: Figure S1A). Immunoblotting of the cell-conditioned medium revealed predominant bands of approximately 100 and 150 kDa, and a weaker ~70 kDa band. The amounts of these soluble PS1 complexes increased in conditioned media from CHO cells transfected with PS1 (Additional file 1: Figure S1A).
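To make the two read-outs used throughout these Results concrete, here is a small sketch of how the Western blot quotient and the gradient-based "stability" quotient could be computed; all intensity numbers are invented placeholders standing in for densitometric measurements.

```python
# Sketch of the two quotients defined in the text; values are placeholders.

def blot_quotient(i100, i150, i50):
    """(100 + 150 kDa) / 50 kDa quotient from a single Western blot lane."""
    return (i100 + i150) / i50

def stability_quotient(fractions):
    """Stable (fractions 2-7) vs unstable (fractions 8-12) immunoreactivity
    after sucrose-gradient fractionation; `fractions` maps index -> intensity."""
    stable = sum(fractions[i] for i in range(2, 8))
    unstable = sum(fractions[i] for i in range(8, 13))
    return stable / unstable

lane = {"i100": 1.8, "i150": 1.2, "i50": 0.9}
grad = dict(enumerate([0.10, 0.30, 0.90, 1.20, 1.10, 0.80, 0.60, 0.50,
                       0.40, 0.35, 0.30, 0.20, 0.15, 0.10]))
print(f"blot quotient      = {blot_quotient(**lane):.2f}")
print(f"stability quotient = {stability_quotient(grad):.2f}")
```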
CHO cells stably transfected with PS1 and APP showed similar soluble PS1 complexes, with an additional 50 kDa band and monomeric NTF (Additional file 1: Figure S1A). To ascertain the identity of the soluble PS1 complexes in the cellular model, we reduced PS1 expression in CHO cells stably over-expressing wild-type human PS1 with PS1 siRNA. Cells transfected with PS1 siRNA displayed a decrease in cellular PS1-NTF, but also in the soluble PS1 complexes identified in the cell media (Additional file 1: Figure S1A).

Fig. 2 The increase in the CSF-PS1 complexes in ADAD. a Representative blot of the PS1 complexes in the CSF samples from eight symptomatic ADAD (syADAD), six presymptomatic mutation carriers (psADAD) and 23 younger NC controls (yNC), eight of which were from the same families as the ADAD subjects but did not carry mutations (black symbol; see also Table 1). b Densitometric quantification of the accumulative immunoreactivity from the sum of the higher molecular mass PS1 complexes (100 + 150 kDa). A quotient was calculated for each sample, defined as the sum of the (100 + 150 kDa) immunoreactivity relative to the 50 kDa immunoreactivity: (100 + 150 kDa/50 kDa). c Six syADAD and five yNC samples were fractionated on 5-20 % sucrose density gradients to further characterize the PS1 complexes. The fractions (collected from the top of each tube) were immunoblotted under denaturing conditions and probed for PS1, as in (a). β-Galactosidase (G, 16.0S; ~540 kDa), catalase (C, 11.4S; ~232 kDa) and alkaline phosphatase (P, 6.1S; ~140-160 kDa) were used as internal markers. Representative blots are shown. d The "stability" quotient was defined as the sum of the stable immunoreactive bands that sediment close to alkaline phosphatase (~140-160 kDa; fractions 2-7), mainly the 100 and 150 kDa bands, relative to the large unstable complexes that sediment closer to catalase (~232 kDa; fractions 8-12) and resolve mainly as 50 kDa immunoreactive bands in Western blots. The data are the means ± SEM: *significantly different (p <0.005) from the yNC group, as assessed by the Student t or Mann-Whitney U tests.

We also analyzed the soluble PS1 complexes in the conditioned medium of PS1-transfected CHO cells and of CHO cells over-expressing PS1 and APP, using sucrose-density gradient fractionation followed by Western blotting under denaturing conditions (Additional file 1: Figure S1B). The majority of the soluble PS1 in the CHO cell-conditioned medium accumulated close to the alkaline phosphatase marker (~140-160 kDa) and resolved as 70 kDa complexes after denaturation, with only faint bands at 100 kDa. However, some 29 kDa monomeric PS1 was also evident, probably released from the complexes (Additional file 1: Figure S1B). By contrast, in the medium of CHO cells over-expressing PS1 and APP there was virtually no 29 kDa NTF immunoreactivity, indicating that in the context of β-amyloid over-expression most of the soluble PS1 is stably incorporated into complexes (Additional file 1: Figure S1B).

Fig. 3 Increase in the stable PS1 complexes in AD and MCI CSF. a Representative blot and b densitometric quantification of the accumulative immunoreactivity from the sum of the stable higher molecular mass PS1 complexes (100 + 150 kDa) in CSF samples from 13 sAD, 12 MCI and 17 age-matched eNC subjects. A quotient calculated as the sum of the (100 + 150 kDa) immunoreactivity relative to the 50 kDa immunoreactivity (100 + 150 kDa/50 kDa) is also shown.
c Six AD and MCI individuals, and eight eNC subjects, were fractionated on 5-20 % sucrose density gradients and probed with the PS1 antibody under denaturing conditions. The internal markers were β-galactosidase (G), catalase (C) and alkaline phosphatase (P), as in Fig. 2. d The values for the "stability" quotient, reflecting the highly stable complexes (100 + 150 kDa immunoreactive bands sedimenting in fractions 2-7) relative to the unstable complexes (50 kDa immunoreactive bands sedimenting in fractions 8-12), are also shown. *p <0.05, **p <0.01, as assessed by the Student t or Mann-Whitney U tests.

Fig. 4 An increase in PS1 stable complexes in DS CSF. a Representative blot of PS1 complexes in CSF from 10 DS subjects with dementia of the Alzheimer's type (dDS), 10 DS without any sign of memory decline (ndDS) and 23 yNC. b Densitometric quantification of the accumulated immunoreactivity from the sum of the higher molecular mass PS1 complexes (100 + 150 kDa), and the quotient of the (100 + 150 kDa) immunoreactivity relative to the 50 kDa immunoreactivity (100 + 150 kDa/50 kDa). The means ± SEM are shown: *p <0.005, **p <0.005, Student t test.

We further tested the possible interaction between soluble PS1 complexes and Aβ. PS1 was immunoprecipitated from the medium of CHO cells over-expressing PS1 and APP with the 00/2 antibody that recognizes the PS1 CTF. Immunoprecipitation of heteromeric PS1 complexes was confirmed in Western blots probed with the anti-N-terminal 98/1 antibody (Additional file 1: Figure S1C). Considerable amounts of Aβ oligomers were also detected in these immunoprecipitates by the 6E10 antibody (Additional file 1: Figure S1C), while no immunoreactivity was resolved by a C-terminal APP antibody (not shown), indicating that oligomers of Aβ, but not C-terminal fragments, interact with the soluble PS1 complexes. To confirm that Aβ oligomers favor the formation of stable PS1 complexes in human CSF, we examined the Aβ peptides in PS1 complexes immunoprecipitated from CSF samples from sAD and eNC subjects. Again, CSF samples immunoprecipitated with the 00/2 antibody were probed in immunoblots with the 98/1 and 6E10 antibodies (Fig. 5a), demonstrating that Aβ oligomers co-immunoprecipitated with heteromeric PS1 complexes from both eNC and sAD CSF samples. We further tested the involvement of Aβ in the formation of the highly stable PS1 complexes. After the CSF-PS1 complexes were fractionated on sucrose density gradients and the peak fractions of the highly stable and unstable complexes were isolated, they were immunoprecipitated with the 00/2 antibody (Fig. 5b). Aβ oligomers were clearly present in the fractions rich in stable 100-150 kDa complexes from both eNC and sAD samples, whereas virtually no Aβ immunoreactivity was detected in the pooled fractions of 50 kDa PS1 complexes (Fig. 5b).

Fig. 5 Aβ oligomers are present in highly stable CSF-PS1 complexes. a CSF samples from eNC and sAD subjects were precleared with protein A-Sepharose (T: total), and then immunoprecipitated with the anti-CTF-PS1 00/2 antibody. The precipitated proteins (IP) were probed in immunoblots with the antibody indicated (98/1 for NTF-PS1 and 6E10 for Aβ). Note that oligomeric Aβ species co-immunoprecipitate and interact with CSF-PS1 complexes in both eNC and sAD. No immunoreactivity was resolved in negative controls incubated with beads in the absence of antibody (not shown). b CSF-PS1 complexes were fractionated by sucrose gradient centrifugation, and the fractions containing highly stable (Hs) or unstable (Us) PS1 complexes were pooled, dialyzed and concentrated by ultrafiltration. Representative sedimentation profiles illustrate the fractions selected for peak isolation. The enriched CSF-PS1 complexes were then immunoprecipitated with the 00/2 antibody and assayed in immunoblots probed with the 6E10 antibody against Aβ (inset). Representative blots reveal that Aβ oligomers are mainly present in the peak fractions containing highly stable PS1 complexes (illustrative examples from two different experiments).
Hence, oligomers of Aβ appear to associate mainly with the highly stable PS1 complexes.

Discussion

The detection of soluble PS1 in CSF and serum [12] was a somewhat unexpected finding, particularly since PS1 is a multi-pass transmembrane protein with several hydrophobic regions [14]. Indeed, the presence of soluble PS1 has been reported in the medium of primary neurons [25] and confirmed in human serum [26]. Here, we corroborated the existence of different PS1 complexes in human CSF and revealed their potential utility as a biomarker for AD. Like many membrane proteins, PS1 has a tendency to aggregate under non-native conditions [27,28]. Thus, CSF-PS1 complexes probably represent non-specific aggregates of PS1 NTF and CTF, distinct from the active γ-secretase membrane complexes [12]. How PS1 complexes become soluble and appear in the CSF is yet to be determined. However, it appears that Aβ oligomers probably contribute to the formation of the stable CSF-PS1 complexes that are particularly abundant in AD. Indeed, it is remarkable that when we followed the formation of PS1 complexes in the cell-conditioned media, the co-expression of APP and PS1 favored the accumulation of complexes, with no soluble monomeric PS1 present. We were able to pull down oligomeric Aβ species by PS1 immunoprecipitation from the medium, as well as from human CSF, in which Aβ oligomers are mainly associated with the highly stable PS1 complexes. Aβ peptides are chemically "sticky", gradually building up into fibrils and aggregates, although the mechanism by which Aβ can stabilize CSF-PS1 is yet to be determined. Also in this context, the levels of soluble Aβ peptide assessed by ELISA determinations appear consistently decreased in AD CSF [11]. The possibility that some amounts of Aβ participate in stable protein complexes in CSF, and are thereby underestimated by conventional ELISA protocols, may deserve consideration. In CSF samples from NC subjects we observed an age-related increase in the total amount of PS1, while the relative proportion of the different complexes remained unaltered. No changes were observed comparing NC samples from the different centers of sample collection or between genders. However, the relative proportion of stable PS1 complexes does appear to increase in the AD condition. We propose that the most significant phenomenon related to the potential use of CSF-PS1 to discriminate the pathological state is the change in the proportion of PS1 complexes, rather than the estimates of the total PS1 levels. Accordingly, we focused our analysis on the highly stable 100-150 kDa PS1 complexes in CSF. The highly stable CSF-PS1 complexes co-exist with unstable complexes, which sediment after differential centrifugation in regions closer to 200-250 kDa but are mainly resolved as 50 kDa components by reducing SDS-PAGE. We found that a quotient of PS1 complexes can discriminate all pathological groups from age-matched controls.
We suggest that these quotients reflect differences in the properties of the PS1 complexes formed under pathological conditions. Screening large numbers of samples by sucrose gradient ultracentrifugation is difficult. As a reliable alternative, we addressed the discrimination of samples using a complementary parameter, a quotient of CSF-PS1 complexes calculated directly from the Western blot analysis [(100 + 150 kDa)/50 kDa], thereby simplifying the analysis. This alternative quotient is useful to discriminate ADAD and DS subjects from age-matched yNC, as well as sAD from eNC. In our analysis, this quotient of PS1 complexes only failed to adequately discriminate MCI subjects, perhaps indicating a lack of sensitivity with respect to the evaluation of the complexes after separation by ultracentrifugation in sucrose density gradients. The inherent uncertainty in clinical diagnosis may also account for these differences, particularly for the MCI group, in which some subjects may have remained stable MCI or progressed to other dementias. In any case, a large overlap is observed between groups when the relative amount of CSF-PS1 complexes is estimated by a quotient obtained directly from Western blot analysis, without fractionation by ultracentrifugation. It will be necessary to replicate these findings using other techniques, such as an ELISA specific for stable CSF-PS1 complexes, to evaluate their true potential as biomarkers. Interestingly, altered levels of CSF-PS1 are detectable in both symptomatic and asymptomatic ADAD subjects. Similarly, alterations in CSF-PS1 levels occur in DS subjects with and without dementia. The analysis of CSF samples from DS subjects is of particular interest since it is well known that almost all adults with DS over 40 years of age display AD neuropathology [29,30], although the prevalence of dementia in these individuals varies considerably [31][32][33][34]. Thus, there is no association between the age of onset of AD neuropathology in DS subjects and the appearance of clinical dementia [35], and we cannot predict the number of ndDS subjects who will develop future cognitive impairment. In view of the consistent changes in CSF-PS1 in ndDS, we assume that this biomarker is more related to the brain pathological status than to the occurrence of dementia and cognitive decline.

Conclusions

In conclusion, our present findings demonstrate that CSF-PS1 complexes are altered in genetically determined AD, as well as in sAD. Together, our results indicate that the increase in stable PS1 complexes in CSF is an early phenomenon associated with AD pathology and may constitute an asymptomatic biomarker.
Light Curve Analysis of GSC 2750-0054 and GSC 03208-02644

We present the first photometric analysis for the newly discovered Algol-type eclipsing binary systems GSC 2750-0054 and GSC 03208-02644. Our analysis was carried out by means of the most recent version of the Wilson-Devinney (WD) code, which applies the model atmospheres of Kurucz (1993), with a pass-band prescription for the radiative treatment. The accepted light curve solutions yield the absolute physical parameters, and spectral classifications for the components are adopted. The distance to each system was calculated based on the parameters of the accepted photometric solutions. Comparisons with evolutionary models are presented.

Introduction

Modelling the light curves of eclipsing binaries can provide precise results for the physical parameters and thus evaluate their evolutionary state. This is because the observed quantities, such as brightness, colour and radial velocity, give strong constraints on the geometric configuration of a given system. A characteristic of detached and semi-detached binaries is that the components have not yet filled their Roche lobes, and hence one could treat their evolution as that of single stars. The binaries GSC 2750-0054 and GSC 03208-02644 are newly discovered short-period eclipsing systems of Algol type (EA). The present paper is a continuation of a series concerning the photometric analysis of newly discovered eclipsing binaries. The paper consists of five sections, as follows: Section 2 is devoted to the new times of minima; Section 3 deals with the light curve modelling and the resulting parameters; in Section 4 we investigate the evolutionary status of both systems; the conclusions are outlined in Section 5.

GSC 2750-0054

The system GSC 2750-0054 (RA (2000) = 23h 00m 07s, Dec (2000) = +30° 39' 18'') was discovered as a variable star in 2013 by Nelson and Buchheim (2014); it was classified as a detached system (EA) with a period of 0.47187 d. The observations by Nelson were carried out in the period from 3 to 26 August 2013, using a 33 cm f/4 Newtonian telescope on a Paramount ME mount at the Sylvester Robotic Observatory, with an SBIG ST-10 XME CCD in the clear filter only. The star TYC 2750 1818 was used as a comparison star. Buchheim observed the system in the period from 19 to 28 August 2013 in the B, V, and R pass bands, by means of the 11-inch (28 cm) f/6.3 Schmidt-Cassegrain telescope of the Altimira Observatory (Minor Planet Center Observatory Code G76) with an ST-8XE CCD (plate scale = 1.1 arcsec/pixel). Complete light curves were obtained in the V and R pass bands. A total of six minima were estimated from the Nelson and Buchheim observations using the Minima V2.3 package (Nelson 2006), which is based on the Kwee and van Woerden (1956) fitting method (Table 1).

GSC 03208-02644

The system GSC 03208-02644 was discovered by Liakos & Niarchos (2011) in the frames of their observations of V407 Lac. The star TYC 3208 2737-1 was used as a comparison star. They carried out the first observations of the system during the period from 21 July to 1 August 2010 in the B and I (Bessell) pass bands, using the 0.2 m f/5 reflector telescope of the University of Athens Observatory equipped with an SBIG ST-8XMEi CCD camera.
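For reference, the Kwee and van Woerden (1956) method estimates a time of minimum by finding the trial epoch about which the light curve is most symmetric, and the next section compares such timings against a linear ephemeris. The sketch below is a minimal NumPy illustration of both steps; it is not the Minima V2.3 implementation, and the epoch T0 is a hypothetical placeholder (only the 0.47187 d period is taken from the text).

```python
# Illustrative sketch of (i) the Kwee & van Woerden (1956) symmetry method for
# timing an eclipse minimum and (ii) an O-C residual against a linear ephemeris.
import numpy as np

def kvw_minimum(t, mag, n_trials=21, n_pairs=30):
    """Estimate the time of minimum as the epoch of best mirror symmetry."""
    f = lambda x: np.interp(x, t, mag)                  # resampled light curve
    half = 0.25 * (t[-1] - t[0])                        # half-width for pairs
    trials = np.linspace(t.mean() - 0.1 * half, t.mean() + 0.1 * half, n_trials)
    dt = np.linspace(half / n_pairs, half, n_pairs)
    s = np.array([np.sum((f(t0 + dt) - f(t0 - dt)) ** 2) for t0 in trials])
    a, b, _ = np.polyfit(trials, s, 2)                  # parabola through S(t0)
    return -b / (2.0 * a)                               # vertex = symmetry epoch

# Synthetic eclipse (a dip in brightness = a bump in magnitude) centred at 0.500.
t = np.linspace(0.40, 0.60, 201)
mag = 12.0 + 0.8 * np.exp(-((t - 0.500) / 0.02) ** 2)
mag += np.random.default_rng(1).normal(0.0, 0.01, t.size)
t_min = kvw_minimum(t, mag)

# O-C residual against the linear ephemeris T_min = T0 + P * E (T0 hypothetical).
T0, P = 0.02855, 0.47187                                # epoch and period (days)
E = round((t_min - T0) / P)                             # nearest integer cycle
print(f"t_min = {t_min:.5f}, E = {E}, O-C = {t_min - (T0 + E * P):+.5f} d")
```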
A total of six new minima (two primary and four secondary) and the corresponding (O-C) residuals were calculated using the first linear ephemeris of Liakos and Niarchos (2011).

Light Curve Analysis

The analysis of the observed light curves of the systems GSC 2750-0054 and GSC 03208-02644 was performed as follows. We used the (J-H) colour index-temperature relation of Tokunaga (2000) to estimate the corresponding temperature for each colour index, and we analysed all individual observations of the observed light curves in each band. The bolometric albedo and gravity darkening were assumed for convective envelopes (T_eff < 7500 K); hence, we adopted g1 = g2 = 0.32 (Lucy 1967) and A1 = A2 = 0.5 (Rucinski 1969). The bolometric limb darkening values were adopted from the tables of Van Hamme (1993), based on the logarithmic law for the limb darkening coefficients. Throughout the light curve solutions, the commonly adjusted parameters employed were the orbital inclination (i), the mass ratio (q), the temperature of the secondary component (T2), the surface potentials Ω1 (only for the system GSC 03208-02644) and Ω2 (for both systems), and the monochromatic luminosity of the primary star (L1). The relative brightness of the secondary component was calculated from the stellar models.

GSC 2750-0054

The light curve analysis for the system GSC 2750-0054 was performed using the Buchheim observations (Nelson and Buchheim 2014) in the V and R passbands with the W-D code (Nelson 2009) in Mode 4 (semi-detached). A set of parameters representing the observed light curves was estimated after some trials, leading to the best photometric fit, listed in Table 2. According to the accepted model, the primary component is more massive and hotter than the secondary one, with a temperature difference of about 500 K (Fig. 1).

GSC 03208-02644

We carried out the photometric solution for the system GSC 03208-02644 in the B and I pass bands using Mode 2 (detached) of the W-D code (Nelson 2009). The parameters of the accepted model are listed in Table 2. The best light curve solution reveals that the primary component is hotter than the secondary one by about 500 K. The observed points in the B and I pass bands are displayed in Fig. 5 together with the corresponding theoretical light curves. The corresponding three-dimensional structure of the system GSC 03208-02644 (according to the parameters of the accepted model) is displayed in Fig. 6, with two hot spots on both components.

Results and Conclusion

To determine the physical parameters of the two systems, and owing to the non-availability of radial velocity curves, we used the empirical T_eff-mass relation of Harmanec (1988). The estimated physical parameters reveal that the primary components in both systems are more massive than the secondary ones. Distances to each system were calculated based on the parameters of the accepted photometric solutions.

[Figure caption, evolutionary diagram of Girardi et al. (2000)]: Open symbols for GSC 2750-0054 and closed symbols for GSC 03208-02644; circles for the primary and triangles for the secondary.
Figure 6: Positions of the two systems GSC 2750-0054 and GSC 03208-02644 on the mass-luminosity diagram of Girardi et al. (2000). Open symbols for GSC 2750-0054 and closed symbols for GSC 03208-02644; circles for the primary and triangles for the secondary.

In concluding the paper, we have performed a light curve analysis for the newly discovered eclipsing binary systems GSC 2750-0054 and GSC 03208-02644, discovered in 2013 and 2010, respectively. Complete light curves were obtained for both systems and new times of minima were calculated.
The first photometric analysis for both systems reveals that the primary components are hotter and more massive than the secondary ones. Spectral classifications were adopted for each component based on the absolute parameters resulting from the accepted photometric solutions. The distance to each system was calculated, and a three-dimensional structure is displayed. As the two systems are detached and semi-detached, we expect that they can be modelled by the evolutionary models of single stars. The locations of the individual components on the M-R, M-L and M-T_eff diagrams give a preliminary result and need to be confirmed by further photometric and spectroscopic observations.
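The paper does not spell out its distance computation; a common route from the absolute parameters is the distance modulus, sketched below with hypothetical values for the apparent magnitude, absolute magnitude, and interstellar extinction.

```python
# Distance from the distance modulus m - M = 5*log10(d / 10 pc) + A_V.
# All numeric inputs are hypothetical placeholders for illustration.

def distance_pc(m_app, M_abs, a_v=0.0):
    """Distance (pc) from apparent magnitude, absolute magnitude, extinction."""
    return 10 ** ((m_app - M_abs - a_v + 5) / 5)

print(f"d ~ {distance_pc(m_app=11.8, M_abs=4.2, a_v=0.15):.0f} pc")
```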
MR Imaging–Pathologic Correlation of Uveal Melanomas Undergoing Secondary Enucleation after Proton Beam Radiotherapy

Background: Currently, radiotherapy represents the most widely employed therapeutic option in patients with uveal melanoma. Although the effects of proton beam radiotherapy on uveal melanoma and ocular tissues have been histologically documented, their appearance at MR imaging is still poorly understood. The purpose of our study was to elucidate the magnetic resonance (MR) semeiotics of radiotherapy-induced changes to neoplastic tissues and ocular structures in patients with uveal melanoma undergoing secondary enucleation after proton beam radiotherapy. Methods: Nine patients with uveal melanoma who had undergone proton beam radiotherapy, MR imaging, and subsequent secondary enucleation were retrospectively selected. The histopathologic findings evaluated for irradiated tumors were necrosis, fibrosis, and viable tumor, while the histopathologic findings evaluated for extratumoral ocular/periocular tissues were radiation-related intraocular inflammation, vitreous hemorrhage, optic nerve degeneration, iris neovascularization, and periocular fibrotic adhesions. On MR images, the appearance of the abovementioned histologic features was assessed on conventional and diffusion-weighted sequences. Results: T2-weighted sequences performed better in detecting radiation-induced necrosis, fibrosis, optic nerve degeneration, and periocular fibrotic adhesions. T1-weighted sequences were preferable for identifying cataracts, vitreous hemorrhage, and inflammatory complications. Contrast-enhanced T1-weighted sequences were irreplaceable in assessing iris neovascularization and in confirming inflammatory complications. Conclusions: In the light of their increasing role in the multidisciplinary management of patients with uveal melanoma, radiologists should be aware of the MR appearance of the effects of radiotherapy on neoplastic and ocular tissue, in order to improve the accuracy of follow-up MR examinations.

Today, three different histologic types of uveal melanoma are recognized: spindle cell, epithelioid cell, and mixed cell type [5][6][7]. Various prognostic factors may influence the patient's outcome. In particular, ciliary body involvement, extraocular extension, a diffuse growth pattern, some histopathological features (epithelioid cell type), and genetic factors (monosomy 3, 6p gain, or loss of the BAP-1 gene) are associated with a more severe prognosis and an increased risk of metastases [5]. The diagnosis is essentially entrusted to clinical evaluation and ophthalmological imaging methods; nevertheless, radiological imaging techniques currently play a critical role in the clinical management of patients with uveal melanoma. In particular, magnetic resonance imaging (MRI) has multifaceted utility for a variety of purposes: confirmation of the clinical diagnosis, assessment of disease extent, evaluation and prediction of tumor response to radiotherapy treatment, follow-up, and detection of local radiotherapy-related complications [8][9][10][11]. Enucleation and, more recently, radiotherapy (plaque brachytherapy and proton beam radiotherapy) represent the two pillars underpinning the local treatment of uveal melanoma. Since the initial observations of Zimmerman et al.
[12], different studies and trials have proven no significant difference in terms of mortality between eye-sparing radiation therapies and surgery [13][14][15][16][17]; therefore, over the last few decades, radiotherapy techniques have gained in importance and currently represent the most widely used therapeutic option [1]. Enucleation is currently reserved for a few indications: large tumors (basal diameter >20 mm, thickness >12 mm) and uveal melanomas with optic nerve or orbital involvement. Enucleation can be divided into the following types: (1) primary, in patients who do not undergo any other type of therapy, and (2) secondary, in patients already treated with eye-conserving radiotherapy. Indications for secondary enucleation include tumor progression after radiotherapy (local recurrence or extrascleral extension) and treatment-related complications (e.g., ocular pain, vision loss, vitreous hemorrhage, neovascular glaucoma, or chronic inflammation) [18][19][20]. Over the years, considerable progress has been made in radiotherapy techniques and in the management of radiation-related complications; hence, both primary and secondary enucleation have become less frequent. In different series, the rate of secondary enucleation after radiotherapy is up to 12.5% [17,21]. Although proton beam radiotherapy has been used in the treatment of uveal melanomas since 1975 [22], only a few authors have reported the histopathologic findings of irradiated eyes undergoing secondary enucleation; owing to the rarity of the neoplasm and of enucleation, the latter becoming increasingly rare, the scientific literature dealing with this topic is scant. The first articles dealing with the histopathologic and ultrastructural features of uveal melanomas enucleated after prior proton beam radiotherapy date back to the 1980s [22][23][24][25]. In these studies, patients underwent enucleation because of treatment failure [23,25] or clinical treatment-related complications [22,24,25]. A- and B-scan ultrasonography (US) was used as the imaging method to study uveal melanomas before enucleation [22][23][24]; nevertheless, a real radiologic-pathologic correlation was not performed. Lemke et al. [26] were the first to carry out a correlation between MR and histologic findings in enucleated eyes with uveal melanoma; however, their patients underwent primary enucleation and, therefore, did not incur any kind of radiotherapy. The histologic appearance of eyes with uveal melanoma undergoing primary enucleation is very different from that of eyes undergoing secondary enucleation; this is due to the radiation-related changes observable in the latter [19,27]. At present, the effects of proton beam radiotherapy on both melanoma cells and healthy ocular/periocular structures have been well documented from a histological point of view [22][23][24][25][27][28][29]; nevertheless, much less is known about their appearance on MR imaging. In our study, we focused on the MR imaging-pathologic correlation of uveal melanomas undergoing secondary enucleation after proton beam radiotherapy. In particular, our aim was to elucidate the MR semeiotics of radiotherapy-induced changes on both neoplastic tissues and healthy ocular structures.

Patients

Our single-institution retrospective cohort study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans, and in accordance with the recommendations of our local ethics committee.
Informed consent was waived because of the retrospective design of the study. Employing the search software of our picture archiving and communication system (PACS) and the Anatomic Pathology Section database, we identified patients affected by uveal melanoma who had undergone MR examination and subsequent enucleation between September 2016 and December 2020. Among these patients, those who underwent secondary enucleation were considered for eligibility in our study. The patient inclusion criteria were as follows: proton beam radiotherapy performed as the primary treatment for uveal melanoma at the INFN LNS Nuclear Physics Laboratory, Catania (Istituto Nazionale di Fisica Nucleare INFN-Laboratori Nazionali del Sud, Catania); MR examination (brain and orbit) performed at our department before enucleation; secondary enucleation performed within 2 weeks of the MR examination; and a histopathologic diagnosis of uveal melanoma. The patient exclusion criteria were as follows: incomplete MR protocol; poor quality of the MR images; an interval between MR examination and enucleation greater than 2 weeks; incomplete radiotherapeutic treatment; and radiotherapy treatment other than proton beam radiotherapy. Demographic information and the reason for enucleation were recorded.

Proton Beam Therapy Protocol

All patients underwent proton beam radiotherapy at the INFN LNS Nuclear Physics Laboratory (Istituto Nazionale di Fisica Nucleare INFN-Laboratori Nazionali del Sud, Catania), using a superconducting cyclotron that delivers a 62-MeV proton beam [30]. A total dose of 60 GyRBE (Gray relative biological effectiveness, taking into account a constant RBE of 1.1 over the modulated Bragg peak) was split into four consecutive daily fractions of 15 GyRBE. EYEPLAN, a dedicated software developed at Massachusetts General Hospital by Goitein and Miller [31], was used for treatment planning.

MR Protocol

All MR examinations were performed using a closed-configuration superconducting 1.5-T MRI unit (Signa HDxT; GE Healthcare, Milwaukee, WI, USA) with 57.2 mT/m gradient strength and a 120 T/m/s slew rate, by means of an 8-channel high-resolution neurovascular phased-array coil with array spatial sensitivity technique (ASSET) parallel acquisition. All patients underwent an MR examination of the brain and orbit. The MR pulse sequences and the corresponding imaging parameters of our orbital MR protocol are summarized in Table 1.

Table 1. Orbital MRI protocol. The synoptic table summarizes the imaging parameters of the MR sequences.

T1-weighted FSE spectral presaturation sequences were performed before and after the intravenous administration of 0.2 mL of gadoteric acid (gadoterate dimeglumine, Dotarem, 0.5 mol/L; Guerbet, Roissy, Charles-de-Gaulle Cedex, France) per kilogram of body weight. The DW imaging sequence was performed before contrast medium administration. All sequences had a field of view that included the orbital structures, lids, and optic chiasm.

Histopathology

Enucleated specimens were fixed in formaldehyde, paraffin processed, cut, and stained with hematoxylin-eosin following the routine protocol. Specifically, specimens were cut at 90° to the plane of the lesion, including the median portion of the eye, encompassing the tumor, cornea, and optic disk. Histological slides were available for diagnostic review in all cases. On the enucleated eyes, two pathologists jointly recorded and evaluated determined histopathologic features on both the irradiated tumors and the extratumoral ocular tissues.
Histopathology

Enucleated specimens were fixed in formaldehyde, paraffin processed, cut, and stained with hematoxylin-eosin following the routine protocol. Specifically, specimens were cut at 90° to the plane of the lesion, including the median portion of the eye, encompassing the tumor, cornea, and optic disk. Histological slides were available for diagnostic review in all cases. On enucleated eyes, two pathologists jointly recorded and evaluated predetermined histopathologic features on both irradiated tumors and extratumoral ocular tissues. On irradiated tumors the following histopathologic findings were evaluated: necrosis, fibrosis, and viable tumor tissue. On irradiated extratumoral ocular/periocular tissues the following histopathologic findings were evaluated: radiation-related intraocular inflammation (uveitis, endophthalmitis, and chronic conjunctivitis), vitreous hemorrhage, optic nerve degeneration, iris neovascularization, and periocular fibrotic adhesions.

Image Analysis

MR images were matched with corresponding histological sections (histopathology slides), taking into account specific anatomic landmarks such as the optic disc and ciliary body. On MR images the appearance of the abovementioned histologic features was recorded and assessed on both conventional sequences (T1-weighted and T2-weighted) and diffusion-weighted imaging (DWI) sequences, by two radiologists. In keeping with the purposes of the study, the radiologists were not blinded to the clinical history of the patients. Moreover, on the MR images, the radiologists evaluated the appearance of the lens and, in particular, radiation-related cataract. These findings were not assessed in terms of histopathology; therefore, they were correlated with the most recent preoperative clinical examination.

Patients

On the basis of the abovementioned criteria, 11 patients were identified for potential inclusion in the study. Of these patients, two had to be excluded: one because of inadequate quality of MR images, the other because of plaque brachytherapy treatment. Therefore, the final enrolled population included nine patients (six men, three women). The patients' mean age at irradiation was 55.5 years (range 29-78 years). The time interval between irradiation and enucleation ranged from 12 months to 46 months. The reasons for secondary enucleation were as follows: tumor progression after radiation therapy (local recurrence or extrascleral extension) (n = 4), treatment-related complications (ocular pain, vision loss, vitreous hemorrhage, neovascular glaucoma, or chronic inflammation) (n = 2), or both (n = 3). The demographic data of our case series are summarized in Table 2. Radiation-induced necrosis was found in 5/9 patients. It was characterized by intratumoral coagulation and liquefaction necrosis, with dispersion of melanin pigment and enrichment with melanophages. Fibrosis occurred in 1/9 patients, and was characterized by scar-like collagen bundles that replaced previously viable tumor tissue. Viable tumor tissue was found in 7/9 patients. It was characterized by the presence of visible neoplastic cells with a distinct nucleus. The histopathologic features of the enucleated specimens are summarized in Table 3.

Extratumoral Histopathologic Findings

The histologic appearance of irradiated extratumoral ocular tissues was as follows:
• Radiation-related intraocular inflammation (uveitis, endophthalmitis, and chronic conjunctivitis) (3/9 patients): presence of a conspicuous inflammatory infiltrate (mainly composed of lymphocytes, plasma cells, and granulocytes) populating the extratumoral ocular tissues.
• Vitreous hemorrhage (2/9 patients): extravasation of red blood cells within and around the vitreous body.
Extratumoral histopathologic findings are summarized in Table 4 (Histopathologic findings with respect to irradiated extratumoral ocular tissues).

MR Findings with Respect to Irradiated Tumors

The MR findings with respect to irradiated tumors are summarized in Table 3. On MR imaging, radiation-induced necrosis was appreciable in 4/9 patients.
Radiation-induced necrosis showed high signal intensity on T1-weighted images, low signal intensity on T2-weighted images, no restriction on DW images, and no enhancement on contrast-enhanced fat-suppressed T1-weighted images (Figure 1). In the one remaining patient with histopathologic evidence of necrosis (Table 3), the irradiated tumor was characterized by microscopic foci of necrosis, only appreciable under pathologic examination and, therefore, below the resolving power (spatial resolution) of MR imaging. On MR imaging, fibrosis was appreciable in 1/9 patients. Fibrosis showed intermediate signal intensity on T1-weighted images, low signal intensity on T2-weighted images, no restriction on DW images, and moderate enhancement on contrast-enhanced fat-suppressed T1-weighted images. On MR imaging, viable tumor tissue was appreciable in 7/9 patients. The MR appearance of viable tumor tissue resembled that of primary melanoma, largely depending on the melanin content. Typical pigmented melanomas showed high signal intensity on T1-weighted images and low signal intensity on T2-weighted images. Poorly pigmented lesions demonstrated intermediate signal intensity on both T1- and T2-weighted sequences. In any case, viable tumor tissue displayed restricted diffusion on DW sequences (high signal intensity on DW images, low signal intensity on the ADC map) due to high cellularity. Table 5 summarizes the MR imaging appearance of irradiated uveal melanomas.

Extratumoral MR Findings

The MR appearance of irradiated extratumoral ocular tissues was as follows: Radiation-related intraocular inflammation (uveitis, endophthalmitis, and chronic conjunctivitis). In the case of panuveitis (1/9 patients), MRI showed diffuse thickening of the globe wall, which displayed noticeable enhancement on contrast-enhanced fat-suppressed T1-weighted images. This sequence also showed a coexisting diffuse choroidal detachment. In endophthalmitis (3/9 patients), the anterior chamber and vitreous body showed increased signal intensity on precontrast T1-weighted images. Another typical finding was the high signal intensity of the vitreous body on T2-weighted fluid-attenuated inversion recovery (FLAIR) sequences performed for the brain examination executed concomitantly with the MR of the orbit (Figure 2). Vitreous hemorrhage (2/9 patients) was better visible on T1-weighted sequences, in which both the anterior chamber and the vitreous body displayed high signal intensity. In 1/2 patients, on T2-weighted images, an intraocular fluid-fluid level, with relative hypointensity of the dependent portion, was identifiable within the vitreous body. On DW images vitreous hemorrhage showed restricted diffusion (Figure 3).

[Figure caption fragment: the image shows the close proximity between the tumor and the optic nerve, which appears slightly compressed, but not infiltrated, with initial signs of degeneration (fibrosis and microcystic changes).]

Iris neovascularization (1/9 patients). Conspicuous enhancement of the ciliary body and of the anterior part of the choroid was seen on contrast-enhanced fat-suppressed T1-weighted images.

Discussion

The aim of our study was to elucidate the MR semeiotics of radiotherapy-induced changes on both neoplastic tissues and healthy ocular structures in patients with uveal melanoma undergoing secondary enucleation after proton beam radiotherapy. When performing this MR imaging-pathologic correlation we took into account certain histopathologic features and compared them with individually matched MR images.
The histopathologic appearance of eyes with uveal melanoma undergoing secondary enucleation after radiotherapy is different from that of eyes undergoing primary enucleation. Saornil et al. [27] and Avery et al. [19] described the histopathologic changes in eyes with uveal melanoma that had undergone secondary enucleation after proton beam radiotherapy and plaque brachytherapy, respectively, comparing them with those of primary enucleation. Irradiated lesions demonstrated more inflammation, necrosis, fibrosis, blood vessel damage, and hemorrhage, as well as fewer mitoses, than nonirradiated tumors [19,27]. As for extratumoral findings, eyes undergoing secondary enucleation had more vitreous hemorrhage and iris neovascularization [19]. Previous authors have postulated a direct relationship between the time elapsed from radiotherapy to enucleation and the extent of histopathological alterations [24]. According to Seddon et al., in particular, areas of necrosis were more numerous and extensive several months after irradiation [24]. Necrosis in irradiated tumors is the result of a dual effect: (1) a direct cytotoxic effect of radiation on neoplastic cells, and (2) an indirect action through damage to the neoplastic vasculature with resultant ischemia. The immune response against degenerated and necrotic neoplastic cells, in the form of inflammatory infiltrates (often with a perivascular distribution), would also play a role in tumor regression [24,27]. From a histological point of view, radiation-induced necrosis is characterized by dispersion of the melanin pigment with accumulation of pigment-laden macrophages [32]. In our study, at the histopathologic examination, we found necrosis in 5/9 irradiated tumors; in 4 of these patients necrosis was seen under MR imaging. Radiation-induced necrosis had a peculiar appearance under MR imaging. It showed low signal intensity on T2-weighted images because of melanin pigment dispersion. Moreover, when proton beam radiotherapy-induced necrosis and viable neoplastic tissue were both present in the treated tumor, the border between these two areas was well defined under MRI. This finding, particularly evident on T2-weighted sequences, had a strong overlap with histopathology, showing a stark transition between the two distinct portions of the tumor. Such an appearance is related to the physical properties of protons and, in particular, to the possibility of obtaining a highly collimated beam with minimal lateral scatter [22,23]. Therefore, the appearance of radiation-induced necrosis is different from that of spontaneous necrosis, observable in nonirradiated melanomas, especially in those of considerable size. Our observation is in contrast with that of Ferry et al., who, in a previous article, held that the pathologist cannot distinguish radiation-induced necrosis from spontaneous necrosis in uveal melanomas [22]. On the other hand, Saornil et al., taking into account different histologic findings, managed to distinguish irradiated from nonirradiated eyes in 85% of cases in their series [27]. Gragoudas et al. suggested that the time course of necrotic alterations after radiotherapy could be related to the baseline dimensions of the neoplasm; in particular, small lesions would exhibit necrotic alterations earlier than large tumors [29]. We agree with this hypothesis, not only as regards necrosis, but also with regard to another degenerative alteration, namely fibrosis.
Indeed, in our series, the choroidal melanoma that showed fibrotic alterations was a relatively small lesion (basal diameters 4 × 4 mm, prominence 6 mm), and the time course from radiotherapy to enucleation was rather short. In this patient, exhibiting complete tumor regression in the form of fibrotic remnants, the low signal intensity on T2-weighted images was related to fibroblasts, collagen deposition, and the abundant presence of melanophages. Our data are consistent with those of Kincaid et al. who, in two uveal melanomas enucleated after proton beam radiotherapy, found areas of fibrosis characterized by deposition of collagen within the stroma [25]. In our opinion, however, factors other than tumor size should also be taken into account, such as tumor response to radiotherapy, which may be heterogeneous and related to other determinants, including histologic type, cell kinetics, and host immune response [33]. Inflammatory complications are rather common after proton beam radiotherapy, being reported in 28% of patients during the first 5 years after treatment. Tumor size is considered the main risk factor [34]. Inflammatory complications encompass a wide spectrum of clinical manifestations (uveitis, endophthalmitis, conjunctivitis, etc.). Early detection and treatment of intraocular inflammation after radiotherapy is crucial since, if misdiagnosed, such a complication may result in secondary enucleation, even in the absence of tumor recurrence [34]. Uveitis may affect the anterior segment (iritis), the ciliary body (cyclitis), the posterior segment (choroiditis), or the whole uveal layer (panuveitis). In this last case, MRI may reveal a thickening of the anterior segment and the posterior aspect of the globe, associated with conspicuous enhancement on fat-suppressed T1-weighted images acquired after contrast agent administration [35]. In endophthalmitis the inflammatory process causes an increase in the protein content of the vitreous body and anterior chamber, due to leakage from retinal and choroidal vessels [35]. These vitreous changes are easily detectable in the form of increased signal intensity on unenhanced T1-weighted sequences, and also on the T2-weighted FLAIR sequences commonly used in the brain MR protocol. On the other hand, vitreous alterations are less visible on contrast-enhanced T1-weighted sequences (because of modifications of the MR dynamic range induced by gadolinium) and hardly detectable on T2-weighted images. Uveitis and endophthalmitis can coexist, and may also be associated with choroidal and retinal detachment [35]. It has been hypothesized that in the pathogenesis of radiation-related inflammation, tumor necrosis plays a relevant role through the release of proinflammatory cytokines [34]. This assumption is consistent with our series; in the patient showing the most severe inflammatory complications, with concomitant panuveitis and endophthalmitis, the choroidal melanoma was wholly replaced by necrosis without appreciable neoplastic tissue. This patient underwent secondary enucleation four years after proton beam radiotherapy. In this case, at gross examination, the eyeball was surrounded by pus, and the posterior chamber showed necrotic-hemorrhagic content. Under MRI the edematous thickening of the periocular tissues displayed high signal intensity on fat-suppressed T2-weighted (STIR) images, restricted diffusion on DW images, and marked enhancement on contrast-enhanced fat-suppressed T1-weighted images.
Moreover, the latter sequence excellently demonstrated the diffuse thickening and enhancement of the detached choroid. The alterations to the anterior chamber and vitreous body, typically seen in endophthalmitis, were better demonstrated on unenhanced T1-weighted sequences. Chronic conjunctivitis is a rather common complication following proton beam radiotherapy, and is related both to the direct action of radiation on the conjunctival epithelium and to changes in the tear film caused by the treatment [36]. In one enucleated eye of our series, at the histologic examination, we observed an evident (sub)conjunctival inflammatory infiltrate consistent with chronic conjunctivitis. On MR imaging, this alteration was clearly detectable only on fat-suppressed T1-weighted sequences acquired after contrast medium administration, which showed conspicuous enhancement and thickening of the conjunctiva of the treated eye. On the other hand, these findings were hardly or not at all perceptible on T2-weighted and plain T1-weighted sequences. Vitreous hemorrhage is observable in about 8% of patients treated with proton beam radiotherapy. It is less common after proton beam radiotherapy than after brachytherapy. In cases of recurrent vitreous hemorrhage, enucleation can be indicated because a correct fundoscopic evaluation during follow-up becomes impossible [37]. The composition of the vitreous body (acellular fluid with 99% water content) explains its physiological water-like appearance under MRI [38]. In our series, vitreous hemorrhage occurred in 2/9 patients; in identifying this alteration, T1-weighted sequences performed better than T2-weighted ones, owing to the T1 shortening caused by subacute blood products (methemoglobin). The anterior chamber and the vitreous body lost their physiological water-like appearance and became hyperintense on T1-weighted images. The appearance on T2-weighted images may vary depending on the severity and the age of the bleeding, and sometimes a fluid-fluid level with a relatively low signal intensity of the dependent portion can be seen. The T2-weighted FLAIR sequence of the brain study may also be of help in detecting vitreous hemorrhage, easily demonstrating the loss of the physiological water-like hypointensity of the vitreous body. Seddon et al. reported a series of three patients in which secondary enucleation was performed because of proton beam radiotherapy-related complications. In the patient with the longest time interval between irradiation and enucleation (11 months), the authors found optic nerve atrophy under light microscopy, although the tumor involved the ciliary body and choroid without covering the optic disc [24]. Kincaid et al. reported a series of five patients with uveal melanoma treated with proton beam radiotherapy, in which secondary enucleation was performed because of clinical complications and tumor growth. In two patients the authors described optic nerve atrophy; moreover, in these cases the tumor was away from the optic disc [25]. In our series, three patients showed optic nerve degeneration at the histologic examination and corresponding optic nerve thinning with T2 hyperintensity under MRI; in these cases, the tumor involved the optic disc. Whether optic nerve degeneration represents the result of the location of the lesion covering the optic nerve head, the consequence of the radiotherapy, or both, is debatable.
The slightly hyperintense signal of the optic nerve we observed on T2-weighted images resembles the appearance of the subacute and chronic phases of Wallerian degeneration, a progressive anterograde disintegration of axons associated with demyelination following an injury to the cell body or the proximal portion of the axon [39]. In our case the optic nerve, formed by the axons of the ganglion cells of the retina, represents the first neuron of the visual pathway, extending from the ganglion cell layer of the retina to the lateral geniculate body of the thalamus. Ganglion cell injury (caused by tumor invasion, by radiation, or by both) determined, in turn, the degeneration of the axons constituting the optic nerve. These changes were displayed under MRI in the form of hyperintensity on T2-weighted images, volume loss (nerve thinning), and atrophy. Neovascular glaucoma is one of the most serious complications of radiotherapy, as well as a leading indication for secondary enucleation [25]. Avery et al. found iris neovascularization in 39% of the irradiated eyes undergoing secondary enucleation after plaque brachytherapy, and postulated that this alteration could be the result of radiation-induced ischemia [19]. In the case reported by Ferry et al. of a ciliary body melanoma enucleated after proton beam radiotherapy, the patient exhibited rubeosis iridis (iris neovascularization) involving both the treated and the untreated areas [22]. Boyd et al. investigated the pathogenesis of iris neovascularization, and its link with irradiation, in 11 patients who underwent secondary enucleation after proton beam radiotherapy. In contrast to the previous authors, they found that neovascularization more often affected the nonirradiated side of the iris, and when it was observable on both sides of the iris, it was more conspicuous on the untreated side [21]. The pathogenesis of iris neovascularization and neovascular glaucoma has been clarified over the last few decades. It has been postulated that the detached retina, the tumor, and even the residual scar resulting from irradiation of the neoplasm synthesize angiogenic factors that, in turn, would be responsible for iris neovascularization [21,37]. This theory is consistent with our own case series, in which the patient with neovascular glaucoma showed only a residual fibrotic remnant of the choroidal melanoma after radiotherapy. Neovascular glaucoma was characterized by a noticeable neovascularization of the iris and the ciliary body, which at the histological examination appeared as copious congested and ectatic vessels. Under MRI the angiogenic process manifested in the form of a considerable enhancement of the ciliary body (and the anterior portion of the choroid) on fat-suppressed T1-weighted images obtained after contrast agent administration. Cataracts are another complication ascribable to radiotherapy, affecting about 33% of patients treated with proton beam radiotherapy. The physical properties of protons allow for the sparing of healthy tissues located both on the sides of and behind the lesion; however, the amount of radiation at the entrance of the beam can be comparatively high, thereby promoting the development of a cataract [37]. In their series, Kincaid et al. described a dense cataract in one patient. In this case, at the histological examination, the lens cortex displayed vacuolation, fragmentation of fibers, and development of globules [25].
In our series one patient experienced a cataract; in this case, under MR imaging, the involved lens demonstrated a peripheral hyperintense rim on T1-weighted sequences. This particular appearance somewhat resembles cortical laminar necrosis, a pathological phenomenon associated with hypoxia, status epilepticus, infections, metabolic diseases, and drug intoxication. Cortical laminar necrosis is characterized by high signal intensity on T1-weighted sequences, with a gyral distribution at the level of cortical brain lesions; the gyriform T1 hyperintensity is related to the accumulation of necrotic debris and denatured proteins [40][41][42]. Similarly, it is conceivable that the hyperintense T1 signal we observed in the peripheral portion of the lens may be the result of radiation-induced damage to the proteins of the cortical fibers and the accumulation of debris in the subcapsular region of the lens [28]. In previous articles, some authors reported a noticeable impediment during enucleation because of the dense fibrotic adhesions resulting from radiotherapy [22,43]. In our series, we observed similar findings in one patient who underwent secondary enucleation three years after proton beam radiotherapy because of local recurrence of a choroidal melanoma. The histologic examination showed thickening of the sclera in close proximity to the insertion of the superior rectus muscle, at the site of the tantalum clips used to delimit the tumor borders before proton beam treatment. Under MR imaging the soft tissue thickening along the outer edge of the sclera was better visible on T2-weighted images, in which the fibrotic alteration appeared hypointense because of collagen fiber deposition. To our knowledge, ours is the first study to make a radiologic-pathologic correlation of MR findings and histopathologic data of uveal melanomas treated with proton beam radiotherapy undergoing secondary enucleation. Elucidating the MR appearance of alterations induced by radiotherapy on neoplastic and healthy ocular tissues has twofold relevance, since it helps us to understand the pathological basis of the effects of radiation, and also enhances the radiologist's diagnostic confidence in the evaluation of follow-up MR examinations of patients with uveal melanoma. Indeed, during the last few years radiologists have acquired an increasingly important role in the multidisciplinary approach to patients with uveal melanoma. Although the detection of radiotherapy-related complications is not the primary aim of MR imaging, the thorough assessment of the effects of radiotherapy on both neoplastic and ocular tissue, and the confirmation of the clinical findings on the MR examination, may be crucial in patients' multidisciplinary management and, in particular, in choosing whether or not to perform enucleation. Admittedly, our study has various limitations. The small sample size and retrospective design were the main limitations of our study. Nevertheless, it should be noted that uveal melanoma is a rare neoplasm and, fortunately, enucleation has become increasingly uncommon in the last decade. MR examinations were performed at only one time point, just before enucleation; no MRI was performed at the time of diagnosis, before radiotherapy.
The potentially different orientation between MR images and histopathology slides could have made the radiologic-pathologic correlation difficult in certain cases; nevertheless, the use of specific anatomic landmarks (optic disc, ciliary body) made it possible to overcome this obstacle. We deliberately did not take into account some extratumoral ocular findings, such as retinal detachment and choroidal detachment, because these findings can be related to the presence of the tumor and, therefore, may already be present before radiotherapy. Lastly, we exclusively evaluated histopathological alterations with a counterpart in MR imaging; therefore, we did not take into account some purely histologic alterations observable in irradiated tumors, such as mitotic figures and balloon cells.

Conclusions

The radiologic-pathologic correlation between MR images and histopathological data after secondary enucleation enabled us to elucidate the MR semeiotics of radiotherapy-induced changes on neoplastic tissues and healthy ocular structures. Because of their relevant role in the multidisciplinary approach to uveal melanoma, and the responsibility that results from this, radiologists should be aware of the effects of radiotherapy on tissues and of their appearance at MR imaging. Enhancing accuracy in evaluating radiation-induced changes can help the multidisciplinary team make crucial clinical decisions and, ultimately, improve the therapeutic management of patients with uveal melanoma. Moreover, although the MR protocol in the study of uveal melanoma is somewhat standardized, radiologists need to know the role and the diagnostic contribution of each MR pulse sequence in performing different specific tasks, in order to fully exploit the multiparametric capabilities of MRI. Overall, T2-weighted sequences perform better in detecting radiation-induced necrosis and fibrosis, as well as optic nerve degeneration and periocular fibrotic adhesions, whereas T1-weighted sequences are preferable for identifying alterations to the lens, vitreous hemorrhage, and inflammatory complications. Contrast-enhanced fat-suppressed T1-weighted sequences are irreplaceable in assessing iris neovascularization and confirming the presence of inflammatory complications.
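To make the sequence-selection guidance in these conclusions easy to scan, here is a toy lookup distilled from the paragraph above; it is an illustrative summary of this series' findings, not a validated clinical decision tool, and all names are hypothetical.

```python
# Illustrative mapping of radiotherapy-related findings to the MR sequence
# reported as most informative in this series (summary of the text above).
BEST_SEQUENCE = {
    "radiation-induced necrosis": "T2-weighted",
    "fibrosis": "T2-weighted",
    "optic nerve degeneration": "T2-weighted",
    "periocular fibrotic adhesions": "T2-weighted",
    "lens alterations (cataract)": "T1-weighted",
    "vitreous hemorrhage": "T1-weighted",
    "inflammatory complications": "T1-weighted (plain and contrast-enhanced fat-suppressed)",
    "iris neovascularization": "contrast-enhanced fat-suppressed T1-weighted",
}

for finding, sequence in BEST_SEQUENCE.items():
    print(f"{finding:45s} -> {sequence}")
```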
A Systematic Review and Meta-Analysis of the Relationship Between the Radiation Absorbed Dose to the Thyroid and Response in Patients Treated with Radioiodine for Graves' Disease

Background: Patients with Graves' disease are commonly treated with radioiodine. There remains controversy over whether the aim of treatment should be to achieve euthyroidism or hypothyroidism, and whether treatments should be administered with standard levels of radioactivity or personalized according to the radiation absorbed doses delivered to the thyroid. The aim of this review was to investigate whether a relationship exists between radiation absorbed dose and treatment outcome. Methods: A systematic review and meta-analysis of all reports published before February 13, 2020, were performed using PubMed, Web of Science, OVID MEDLINE, and Embase. Proportion of patients achieving nonhyperthyroid status was the primary outcome. Secondary outcomes were proportion of patients who were specifically euthyroid or hypothyroid. A random-effects meta-analysis of proportions was performed for primary and secondary outcomes, and the impact of the radiation absorbed dose on treatment outcome was assessed through meta-regression. The study is registered with PROSPERO (CRD42020175010). Results: A total of 1122 studies were identified, of which 15, comprising 2303 Graves' disease patients, were eligible for the meta-analysis. A strong association was found between radiation absorbed dose and nonhyperthyroid and hypothyroid outcomes (odds ratio [OR] = 1.11 [95% confidence interval {CI} 1.08–1.14] and OR = 1.09 [CI 1.06–1.12] per 10 Gy increase). Higher rates of euthyroid outcome were found for radiation absorbed doses within the range 120–180 Gy when compared with outside this range (n = 1172, OR = 2.50 [CI 1.17–5.35], p = 0.018). A maximum euthyroid response of 38% was identified at a radiation absorbed dose of 128 Gy. Conclusions: The presented radiation absorbed dose–response relationships can facilitate personalized treatment planning for radioiodine treatment of patients with Graves' disease. Further studies are required to determine how patient-specific covariates can inform personalized treatments.

Introduction

Hyperthyroidism has been widely treated with [131I]NaI (radioiodine) since 1941 (1). However, debate continues as to whether the aim of treatment should be to achieve hypothyroidism or euthyroidism (2)(3)(4)(5)(6). Additionally, there is a lack of consensus on the optimal strategy to achieve either outcome. The most common approach is based on the administration of standard levels of radioactivity. However, a personalized approach based on calculated activities to deliver a specified radiation absorbed dose to the thyroid may deliver a euthyroid outcome where required (3). Recent guidelines from the National Institute for Health and Care Excellence highlighted the lack of randomized controlled trials (RCTs) in the use of radioiodine for the treatment of benign thyroid disease (6). The aim of treatment of hyperthyroidism remains controversial. The American Thyroid Association (4) and the European Thyroid Association (5) recommend a single administration of radioactivity sufficient to render the patient hypothyroid (typically between 370 and 555 MBq). However, the European Association of Nuclear Medicine (EANM) guidelines (3,7) consider hypothyroidism a side effect of the treatment (8,9), which requires life-long thyroid hormone replacement and regular thyrotropin monitoring.
An audit of local general practitioners in the United Kingdom found that 21% of patients were overtreated with the thyroid replacement drug levothyroxine, while undertreatment was observed in 9% of patients (10). Both outcomes potentially have negative health impacts for patients. A patient survey conducted by the British Thyroid Foundation found that approximately 80% of patients were dissatisfied with their medication (11). The EANM guidelines state that treatment according to disease-specific prescribed radiation doses may achieve a euthyroid state, whereby the patient would not require thyroid hormone replacement (3). Treatment protocols are currently based on evidence from single-center studies and vary widely. In performing this review, we aimed to consolidate the current literature regarding radiation absorbed doses to the thyroid for radioiodine treatment of hyperthyroidism and to investigate whether a relationship exists between these radiation absorbed doses and treatment outcome.

Search strategy and selection criteria

A comprehensive systematic review and meta-analysis of published studies were performed to evaluate the clinical outcomes of radioiodine therapy for hyperthyroidism with respect to the radiation absorbed doses to the thyroid. Articles published before February 13, 2020, were included. No restrictions were applied on language or type of study design. Only studies that reported radiation absorbed dose to the thyroid, follow-up time, and treatment outcomes for adult patients were included. Only full-text articles published in peer-reviewed journals were assessed. PubMed, Web of Science, OVID MEDLINE, and Embase were searched following the principles and checklist provided by PRISMA (preferred reporting items for systematic reviews and meta-analyses) (12). The databases were searched for the following terms: ("iodine" OR "radioiodine" OR "I131" OR "I-131" OR "131I") AND ("graves' disease" OR "hyperthyroidism") AND ("dosimetry" OR "absorbed dose"). Study authors were not contacted and trial registries were not searched. Details of the protocol for this systematic review were registered on PROSPERO (CRD42020175010). Ethical approval was not relevant for this study, since it is solely based on the literature. Two reviewers (J.T. and G.D.F.) performed the initial search and screened results for duplicates. Two blinded reviewers (J.T. and G.D.F.) screened the remaining studies based on title and abstract for inclusion. Discrepancies between the selected studies were resolved as a joint decision by the two reviewers. Four reviewers (J.T., G.D.F., L.C.P., and P.M.D.G.) extracted data independently and collated the results in MS Excel spreadsheets. Data were extracted on a subpopulation level for each treatment arm, corresponding to different radiation doses to the thyroid, where available. Data were extracted for the full study population in cases where data for different treatment arms were not reported.

Data analysis

For each study, the following variables were extracted: number of subjects, disease type, discontinuation of antithyroid medication before treatment (yes-all/yes-some/none), presence of ophthalmopathy (yes-all/yes-some/none), follow-up period (months), median or mean age (years), proportion of male patients (percentage), median or mean amount of radioactivity (MBq), radiation absorbed dose to the thyroid (Gy), and proportion of patients euthyroid/hypothyroid/hyperthyroid at all follow-up times (percentage).
The aim of treatment was recorded as either nonhyperthyroid (encompassing both euthyroid and hypothyroid), specifically euthyroid, or specifically hypothyroid. Dosimetry methodology was also extracted. The main summary measures used were proportions of patients (with 95% confidence intervals [CIs]) reaching specific endpoints after radioiodine treatment, relative to the size of the treatment arm subpopulation. The primary outcome used was the proportion of patients who were nonhyperthyroid. Secondary outcomes were the proportions of patients who were specifically euthyroid or hypothyroid. These were taken to be mutually exclusive and were individually defined in each study. Where the proportion of patients with a euthyroid outcome was not reported, the proportion was determined as the difference between the patients rendered nonhyperthyroid and hypothyroid. Patients who required further radioiodine treatment were classed as hyperthyroid at follow-up. Two reviewers (J.T. and L.C.P.) assessed risk of bias on a study level using the critical appraisal checklist developed by the Joanna Briggs Institute (13). Studies were classed as having a low, intermediate, or high risk of bias, and only studies classed as having low or intermediate risk of bias were included in the further data synthesis. The meta-analysis was performed separately for Graves' disease and for any other hyperthyroid conditions. Only the response at last follow-up was included in the meta-analysis. The majority of included studies were uncontrolled and retrospective. Therefore, a random-effects meta-analysis of proportions was performed for nonhyperthyroid, euthyroid, and hypothyroid outcomes. DerSimonian and Laird's method was employed with a logit transformation (14,15). The I² test was used to assess heterogeneity between studies. Meta-regression was performed to assess the impact of the extracted variables on the odds of achieving the respective outcomes. For the euthyroid outcome, where a nonmonotonic relationship is expected (16), a categorical variable was included to represent whether the radiation absorbed dose was within or outside a particular range. Dose-response relationships were fitted based on a two-parameter log-logistic model (17) using the maximum likelihood principle for the nonhyperthyroid and hypothyroid outcomes. A sensitivity analysis was performed to identify whether results remained significant if only studies classed as having low risk of bias were included. All statistical analyses were performed using R Statistical Software (version 3.5.2; R Foundation for Statistical Computing, Vienna, Austria) and the add-on package drc (18). The value p < 0.05 was considered statistically significant.
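The random-effects pooling described above can be made concrete with a minimal sketch: it implements DerSimonian-Laird pooling of logit-transformed proportions together with the I² heterogeneity statistic. The study-arm counts below are invented for illustration and are not the extracted data.

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis of
# proportions on the logit scale, as described in the Methods above.
import numpy as np

def dl_pool_proportions(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    y = np.log(events / (totals - events))        # logit-transformed proportions
    v = 1.0 / events + 1.0 / (totals - events)    # approximate within-study variances
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    expit = lambda z: 1.0 / (1.0 + np.exp(-z))    # back-transform to proportions
    ci = (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))
    return expit(y_re), ci, tau2, i2

# Illustrative arms: patients nonhyperthyroid at last follow-up / arm size
p, ci, tau2, i2 = dl_pool_proportions([30, 55, 12, 80], [45, 70, 20, 100])
print(f"pooled proportion = {p:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```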
Results

A total of 1122 studies were identified in the systematic review, of which 419 were excluded due to presentation of duplicate data. A further 668 studies were excluded for not satisfying the eligibility criteria based on title and abstract. Of the remaining 35 studies, a total of 20 full-text articles (16,(19)(20)(21)(22)(23)(24)(25)(26)(27)(28)(29)(30)(31)(32)(33)(34)(35)(36)(37) were deemed eligible for the systematic review following independent analysis (Fig. 1). A summary of the study characteristics is presented in Table 1. Thirteen studies reported a patient cohort with Graves' disease, 5 reported a mixture of hyperthyroid conditions including Graves' disease, 1 study reported only hyperfunctioning thyroid nodules, and 1 study considered only patients with toxic nodular goiter. One study (24), comprising a mixture of hyperthyroid conditions, was excluded from the quantitative synthesis due to a high risk of bias identified with the critical appraisal checklist developed by the Joanna Briggs Institute. The remaining studies were classed as having low or intermediate risk of bias (Table A1 in Supplementary Data). A total of 2328 patients were reported as having Graves' disease, while 75, 173, and 57 patients had thyroid nodules, toxic nodular goiter, or toxic adenoma, respectively. Only four studies included patients with hyperfunctioning thyroid nodules or toxic nodular goiter, which was insufficient to perform a meta-analysis. Of the studies reporting outcomes for Graves' disease, the subpopulations, as stratified by radiation absorbed dose, ranged in size from 9 to 284 patients, with a median of 42 patients. The stated aim of treatment varied between studies. In eight studies, the aim was to resolve hyperthyroidism by rendering patients either euthyroid or hypothyroid. In 4 studies, the aim was explicitly to induce euthyroidism; in 1 study, the aim was to induce hypothyroidism; and in 5 studies, the aim was not clearly reported. A range of dosimetry methodologies (Table A3 in Supplementary Data) were employed across the studies reporting outcomes for Graves' disease, with the majority (15/18) using a variation of the method proposed by Marinelli (38), which has been adopted into EANM guidelines (3,7). Two studies (27,34) used a method based on the volume-reduction methodology proposed by Traino et al. (39), and one study used a fixed activity administration with post-therapy dosimetry (26). Seven studies carried out post-therapy verification, whereas 11 studies based the reported radiation absorbed dose on a pretherapy tracer study. One study excluded patients with ophthalmopathy (31), while one study adjusted the prescribed radiation absorbed dose based on the presence of ophthalmopathy (34). Only one study reported outcomes separately for patients with ophthalmopathy (32). Fewer than one-third (5/18) of the studies included a last follow-up of >12 months. The median last follow-up was 12 months (range 3-120 months). For studies reporting outcomes for Graves' disease, a forest plot for the nonhyperthyroid outcome is included in the Supplementary Data (Fig. B1). The random-effects meta-analysis for this outcome resulted in an I² of 91.1%, suggesting that a pooled estimate of proportion across these studies is of limited use. A strong association was found in meta-regression between the radiation absorbed dose to the thyroid and nonhyperthyroid and hypothyroid outcomes at the last reported follow-up (odds ratio [OR] = 1.11 [CI 1.08-1.14] and OR = 1.09 [CI 1.06-1.12] per 10 Gy increase in radiation absorbed dose, respectively; R² = 55.0% and 53.7%, both p < 0.001). The absorbed radiation dose-response relationships for each outcome are shown in Figure 2. Given that, in the majority of studies, the administered radioactivity was calculated to deliver a prescribed radiation absorbed dose to the thyroid, these two variables are not independent (Pearson correlation coefficient r(15) = 0.85, p < 0.001). A graph of administered radioactivities against prescribed radiation absorbed doses is presented in the Supplementary Data (Fig. B2). As a result, administered radioactivity was excluded from the univariate analysis.
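The dose-response fitting behind Figure 2 can likewise be sketched: the illustrative code below fits a two-parameter log-logistic curve to binomial outcome counts by maximum likelihood, analogous to what the drc package does in R. The dose and outcome numbers are invented for illustration and do not reproduce the extracted study arms or Figure 2.

```python
# Minimal sketch of a two-parameter log-logistic dose-response fit via
# binomial maximum likelihood, as described in the Methods.
import numpy as np
from scipy.optimize import minimize

def ll2(dose, ed50, slope):
    """Two-parameter log-logistic curve, increasing from 0 to 1."""
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

def neg_log_lik(params, dose, events, totals):
    ed50, slope = np.abs(params)                 # keep parameters positive
    p = np.clip(ll2(dose, ed50, slope), 1e-9, 1 - 1e-9)
    return -np.sum(events * np.log(p) + (totals - events) * np.log(1 - p))

# Illustrative arms: thyroid absorbed dose (Gy), responders, arm size
dose   = np.array([ 60., 100., 150., 200., 300.])
events = np.array([ 10.,  25.,  40.,  52.,  58.])
totals = np.array([ 40.,  50.,  60.,  65.,  65.])

fit = minimize(neg_log_lik, x0=[120.0, 2.0], args=(dose, events, totals),
               method="Nelder-Mead")
ed50, slope = np.abs(fit.x)
print(f"ED50 ~ {ed50:.0f} Gy, slope ~ {slope:.1f}")
print(f"predicted response at 150 Gy: {ll2(150.0, ed50, slope):.2f}")
```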
The proportion of patients with nonhyperthyroid and hypothyroid outcomes was seen to plateau with increasing radiation absorbed doses, with limited benefit >300 Gy (Fig. 2). An association with euthyroid outcome was found for radiation absorbed doses within the range 120-180 Gy when compared with those outside this range (n = 1172, OR = 2.50 [CI 1.17-5.35], p = 0.018). A maximum euthyroid response of 38% [CI 26-50%] was identified at a radiation absorbed dose of 128 Gy. Euthyroid, hypothyroid, and nonhyperthyroid responses at 150, 200, and 300 Gy are presented in Table 2. All ORs calculated in the sensitivity analysis (Table A2 in Supplementary Data) agreed with the results in the full analysis to within the stated CIs.

[Table 1 footnote: If results were not reported for the different groups, for example, for different radiation absorbed dose groups or for patients grouped by disease type, the population result was presented and is indicated by an asterisk. #, number of study subjects; CI, 95% confidence interval; ATD, use of antithyroid drugs during radioiodine administration; Eu, euthyroidism outcome at follow-up; EuG, euthyroid goiter; FU, reported follow-up time; GD, Graves' disease; HN, homogeneous uptake with no indication of GD; HTN, hyperfunctioning thyroid nodules; hyper, hyperthyroidism outcome or further radioiodine treatment at follow-up; hypo, hypothyroidism outcome at follow-up; M, mean; Md, median; NA, not applicable to study; NR, not reported in study; OP, presence of ophthalmopathy in study population; Prev RAI, previous radioactive iodine administrations; Q25, 25th quartile; Q75, 75th quartile; Rg, range; Rad Act Admin, radioactivity administered to patients; SD, standard deviation; TA, toxic adenoma; TNG, toxic nodular goiter; Yes, yes-all, that is, applicable to the full study population; YS, yes-some, that is, only applicable to a fraction of the study population.]

Discussion

These findings indicate that a radiation absorbed dose to the thyroid of 128 Gy achieves a euthyroid state, without the need for thyroid hormone replacement drugs, in 38% of patients, and resolution of hyperthyroidism in 70% of patients, at a median follow-up of 12 months. The remaining 30% of patients would require further treatment to resolve hyperthyroidism. Several studies have shown that unresolved hyperthyroidism is associated with an increased risk of cardiovascular mortality (42,43). Therefore, if the clinical priority is resolution of hyperthyroidism, a higher population response rate can be achieved with a higher radiation absorbed dose. However, this will result in more patients becoming hypothyroid. To achieve euthyroidism rates higher than 38%, personalized radiation absorbed dose prescriptions based on patient-specific factors such as the radiation absorbed dose rate (44), sex (8), thyroid volume (45), presenting triiodothyronine (8), antithyroid medication (46), and duration of the Graves' disease (47) may be required. The exact role of these factors should be further investigated.
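For readers unfamiliar with the Marinelli-type calculation behind such dose prescriptions, a first-principles sketch follows: it estimates the administered activity required to deliver a prescribed thyroid absorbed dose, assuming instantaneous uptake, mono-exponential clearance, and local deposition of the I-131 beta energy (about 0.19 MeV per decay). The example inputs (30 g thyroid, 50% peak uptake, 5.5 d effective half-life) are assumptions for illustration, not a clinical protocol.

```python
# Minimal, first-principles sketch of a Marinelli-type activity calculation,
# i.e. the general approach behind the EANM formalism cited above.
import math

MEV_TO_J = 1.602e-13
E_ABS_MEV = 0.19   # assumed mean locally absorbed energy per I-131 decay (beta)

def activity_mbq(dose_gy, mass_g, uptake_frac, t_eff_days):
    """Administered activity (MBq) to deliver dose_gy to a thyroid of mass_g."""
    t_eff_s = t_eff_days * 86400.0
    # cumulated decays per Bq administered, for mono-exponential clearance
    decays_per_bq = uptake_frac * t_eff_s / math.log(2)
    # absorbed dose per Bq administered (J/kg = Gy)
    dose_per_bq = decays_per_bq * E_ABS_MEV * MEV_TO_J / (mass_g * 1e-3)
    return dose_gy / dose_per_bq / 1e6

# e.g., 128 Gy to a 30 g thyroid, 50% peak uptake, 5.5 d effective half-life
print(f"required activity ~ {activity_mbq(128, 30, 0.5, 5.5):.0f} MBq")  # ~370 MBq
```

Under these assumptions the result lands in the same range as the standard activities quoted in the Introduction, which illustrates why personalization matters: the required activity scales directly with thyroid mass and inversely with uptake and effective half-life.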
Limitations of the study include the lack of data from RCTs, with only one RCT included (16). Treatment outcomes were not reported at consistent follow-up times across the studies; therefore, outcomes at last follow-up were used in our meta-analysis. The median last follow-up of 12 months may not represent the longer-term effect of treatment with radioiodine. It has been shown that the incidence of hypothyroidism increases with time after treatment, although this may plateau (29). However, follow-up time was not found to be significantly associated with outcome in our meta-analysis. Further studies with long-term follow-up are required to determine how long the euthyroid state can be maintained after radioiodine treatment. Dosimetry methodologies vary between studies, which partially explains the observed variation in response rates for a given radiation absorbed dose. Standardization of dosimetry methodology between centers, which has been shown to be feasible (48), would contribute toward reducing this variation in future studies. The lack of available data for other hyperthyroid conditions limited the scope of the meta-analysis to Graves' disease. No patient-specific covariates could be extracted, as they were either missing or only reported as population averages. The effect of follow-up time and patient-specific factors such as disease type, thyroid volume, or free triiodothyronine on treatment outcome should be investigated in future studies.

Conclusions

In this study, a highly significant relationship was demonstrated between radiation absorbed dose and nonhyperthyroid, euthyroid, and hypothyroid outcomes in the treatment of Graves' disease using radioiodine. This could, therefore, serve as a basis to plan treatment based on the required outcome. Comprehensive and standardized data collection in future studies would benefit the field. Further studies are required to determine the clinical efficacy and cost-effectiveness of dosimetry-based patient-specific treatment planning and to further investigate the potential role of patient-specific covariates that may be used for stratification.
A retrospective, cross-sectional study reveals that women with CRSwNP have more severe disease than men Up to 50% of patients with chronic rhinosinusitis (CRS) have comorbid asthma, and we have reported that a subset of CRS patients who have nasal polyps (CRSwNP) have elevated autoantigen-specific antibodies within their nasal polyps (NP). While increases in the prevalence and/or severity of both asthma and autoimmunity in women are well characterized, it is not known whether CRSwNP is more severe or frequent in women than men. We sought to determine whether CRSwNP demonstrated sex-specific differences in frequency and/or severity. Using a retrospectively collected database of tertiary care patients (n = 1393), we evaluated the distribution of sex in patients with CRSwNP with or without comorbid asthma or aspirin hypersensitivity. We further compared the severity of sinus disease between men and women with CRSwNP. Although women comprised 55% of CRS patients without NP (CRSsNP), a significantly smaller proportion of CRSwNP patients were female (38%, P < 0.001). Interestingly, women with CRSwNP were significantly more likely than men to have comorbid asthma (P < 0.001), and 61% of patients with the most severe form of disease (aspirin-exacerbated respiratory disease (CRSwNP plus asthma plus aspirin sensitivity)) were women (P < 0.05). Women with CRSwNP were significantly more likely to have taken oral steroids, and were more likely to have a history of revision surgeries (P < 0.05) compared to men. These data suggest that women with CRSwNP have more severe disease than men in a tertiary care setting. Future studies are needed to elucidate the mechanisms that drive disease severity in men and women, paving the way for the development of personalized treatment strategies for CRSwNP based on sex. Introduction Chronic rhinosinusitis (CRS) is an inflammatory disease of the upper airways that affects up to 30 million people in the United States. It is associated with a significant impairment in quality of life and places a large financial burden on the health care system, with over $6 billion spent annually on clinical and surgical management [1][2][3][4]. A specific subgroup of patients with CRS also has nasal polyps (CRSwNP), and up to 50% of this group has comorbid asthma [5]. Despite the high prevalence of CRS, the mechanisms that underlie its pathogenesis and its association with asthma remain unclear. Dysregulation of both the innate and adaptive immune responses has been hypothesized to promote the chronic inflammation observed in CRSwNP. We have previously demonstrated that B cell activating factor of the TNF family (BAFF), a key B cell survival factor from the TNF family, as well as B cell attracting chemokines, CXCL12 and CXCL13, are highly elevated in nasal polyp tissue from patients with CRSwNP [6,7]. B lineage cells (B cells, plasmablasts, and plasma cells) and their antibody products, in particular autoantibodies, are also highly elevated in nasal polyps [5,[8][9][10][11]. Together, these data suggest that B cell responses may be critical components in CRSwNP pathogenesis, and additional studies are needed to further investigate how they impact disease. B cell activation and antibody production can be induced by the female sex hormone, estrogen [12][13][14]. 
Extensive studies in a murine model of systemic lupus erythematosus (SLE), an autoimmune disease in which B cells play an important role, have shown this model of disease to be more strongly manifested in females, as is also found in human patients with SLE. It has been further demonstrated that female sex hormones, especially estrogen, are capable of driving this gender bias toward females [14][15][16]. Additionally, human epidemiological studies have shown that several autoimmune diseases, including SLE, are more prevalent and/or severe in women [14]. Although asthma is not viewed as an autoimmune disease, it is also more prevalent, and it can be more severe, in women [15][16][17]. Despite the fact that CRSwNP has features of asthma and autoimmunity, along with elevations of B lineage cells and autoantibodies in nasal polyps, no studies have investigated whether CRSwNP affects females disproportionally. As a result, we sought to determine whether the frequency or severity of CRSwNP varied by sex in our study population.

Patients

Demographic and clinical history data were collected from all non-CRS controls and patients with CRS (both with (CRSwNP) and without nasal polyps (CRSsNP)) who were treated in the Allergy-Immunology and the Otolaryngology Clinics of the Northwestern Medical Faculty Foundation (NMFF) or the Northwestern Sinus Center at NMFF and recruited to participate in studies on CRS between 2003-2013 (n = 1393) (Table 1). Control patients were undergoing surgery for non-CRS indicated procedures, such as cranial tumor resection and septoplasty (Table 2). Some control patients provided nasal epithelial cells or nasal lavage samples, which were obtained in the clinic, but did not undergo surgery (27.2%, Table 2). All CRS subjects met the criteria for CRS as defined by the American Academy of Otolaryngology-Head and Neck Surgery Chronic Rhinosinusitis Task Force [18], such that the diagnosis of CRS was based upon the presence of clinical symptoms (i.e., nasal congestion, rhinorrhea, facial pressure, hyposmia) persisting for more than 12 weeks, in addition to objective evidence of chronic inflammatory disease on sinus CT imaging or nasal endoscopy. Patients with Aspirin-Exacerbated Respiratory Disease (AERD) had the clinical triad of CRSwNP, asthma, and a documented history of developing respiratory symptoms following ingestion of either aspirin or a non-steroidal anti-inflammatory drug (NSAID) [19,20]. Patients with AERD were considered a separate subgroup from those with CRSwNP, and they were not included in the data analyses of patients with CRSwNP unless indicated. Patients were considered to have asthma if they had an asthma diagnosis documented by an allergist, pulmonologist, or otolaryngologist. For atopy, patients needed to have a positive skin prick test to at least one of the following allergens: tree pollens, grass pollens, ragweed pollen, dust mite, cat, dog, molds, or cockroaches. Severity of sinus inflammation was determined by clinical radiologists' interpretation of sinus mucosal thickening on sinus Computed Tomography (CT) imaging as being mild, mild-moderate, moderate, moderate-severe, or severe. Additionally, sinus mucosal thickening was assessed by two independent reviewers using the Lund-Mackay (LM) scoring system [21]. Patients with an established immunodeficiency, pregnancy, coagulation disorder, solid organ transplant, classic allergic fungal sinusitis, or cystic fibrosis were excluded from the study.
All subjects provided informed consent, and the study was approved by the Institutional Review Board of Northwestern University Feinberg School of Medicine.

Statistical analysis

All calculations were done using GraphPad Prism v5.0b. The Chi-squared test was used for comparisons of prevalence among different groups. The Mann-Whitney U test was used to compare median values between 2 groups, and the Kruskal-Wallis test with Dunn's correction was used to compare medians among more than 2 groups. A P-value of less than 0.05 was considered significant.

Sex-specific differences in the prevalence of CRS

We first wanted to determine whether there were any sex-specific differences in the prevalence of CRS among our patient population. We examined patient records from control (n = 367), CRSsNP (n = 490), CRSwNP (n = 492), and AERD (n = 44) patients who had previously been recruited from tertiary care centers at Northwestern for our studies on CRS between 2003-2013. We found that women comprised approximately half of control (45%) and CRSsNP patients (55%), but women made up a significantly smaller proportion of CRSwNP patients compared to CRSsNP (38%, P < 0.001; Fig. 1A). In contrast, 61% of patients with AERD, the most severe form of CRSwNP, were women, and this was significantly higher than the proportion of women with CRSwNP (P = 0.032). These data suggest that women with CRSwNP may have more severe disease than men. In support of this, we found that if we combined the AERD patients with the CRSwNP patients, women were more than 2.5 times as likely to have an AERD diagnosis compared to men (P = 0.003; Fig. 1B). However, for the remainder of our analyses, AERD patients were excluded from the CRSwNP group. Interestingly, while we found that patients with CRSwNP were slightly older than control and CRSsNP patients (Table 1 and Fig. S1A), we did not find any differences in age between men and women of any patient group (Fig. S1B).

Sex-specific differences in the frequency of asthma and atopy

Patients with more severe forms of CRSwNP are also more likely to have comorbid asthma [22], and asthma is known to be more prevalent in women [15][16][17]. We also found that the prevalence of asthma was significantly higher in CRSwNP patients compared to CRSsNP and control patients in our study group (Table 1). However, no studies to date have investigated whether asthma is more common in women with CRSwNP compared to men. In order to address this, we assessed the frequency of comorbid asthma among men and women in our study population. We found that women with CRSwNP were significantly more likely to have comorbid asthma than men with CRSwNP (66% vs. 46%; P < 0.001, Fig. 2A); a sketch of this kind of comparison is given below. We also assessed the frequency of asthma in CRSsNP and control patients. Although there was a trend toward higher asthma frequency, there was no significant difference in asthma frequency between the men and women of these groups (Fig. 2A). These data suggest that the increased prevalence of asthma in women with CRSwNP may not simply be due to the overall increase in asthma among women, but may be linked to airway disease severity. We also assessed the frequency of atopy among men and women in each patient group but found no sex-specific differences (Fig. 2B).
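As an illustration of the Chi-squared comparisons used throughout these results, the sketch below tests the reported asthma frequencies in CRSwNP women versus men (66% vs. 46%). The group sizes are back-calculated approximations from the reported cohort (38% of 492 CRSwNP patients were women), so the exact statistics are illustrative only.

```python
# Minimal sketch of the chi-squared prevalence comparison described in the
# Statistical analysis section; counts are approximations for illustration.
import numpy as np
from scipy.stats import chi2_contingency

women_n, men_n = 187, 305  # ~38% and ~62% of the 492 CRSwNP patients

# rows: women, men; columns: asthma, no asthma
table = np.array([
    [round(0.66 * women_n), women_n - round(0.66 * women_n)],
    [round(0.46 * men_n),   men_n - round(0.46 * men_n)],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```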
Sex-specific differences in race and oral steroid use among CRSwNP patients

It has been well documented that asthma is more prevalent among women, but it is also known that African-Americans have a higher prevalence of asthma than Caucasians [16]. Although we found no differences in the proportion of Caucasian patients among our study groups (Table 1), we wanted to determine whether women with CRSwNP were more likely to be African-American and whether they were more likely to have taken oral steroids, a common feature of patients with severe asthma [17]. We found that women with CRSwNP were significantly more likely to be African-American compared to men (18% vs. 10%, respectively, P = 0.027; Fig. 3A), but this was not affected by the asthmatic status of the men or women (data not shown). In addition, we found that CRSwNP women were significantly more likely to have been prescribed perioperative oral steroids compared to men (29% vs. 20%, respectively, P = 0.034; Fig. 3B). Moreover, the frequency of oral steroid use was highest in asthmatic CRSwNP women compared to nonasthmatic women and men with and without asthma (P < 0.001, P = 0.015, and P < 0.001, respectively; Fig. 3C). Interestingly, asthma status did not have an effect on oral steroid use in men with CRSwNP (Fig. 3C). Together, these data further support the notion that CRSwNP women have more severe disease than men.

Sex-specific differences in sinus disease severity in CRSwNP patients

Because patient chart data were more complete for those patients recruited for study from 2010 onward, including data on CT scans and number of surgeries performed, we used the charts from the subset of CRSwNP patients from our study group recruited in those years to determine sinus disease severity (n = 226 out of 492 CRSwNP patients). We first examined whether women with CRSwNP had more severe sinus disease compared to men with CRSwNP by using the traditional Lund-Mackay (LM) scoring system, but we found no differences in the overall average LM score between the sexes (Fig. 4A). In contrast, based upon radiologists' unbiased interpretation of overall sinus mucosal thickening on diagnostic sinus CT scans, 36% of women with CRSwNP were classified as having severe disease compared to only 17% of men (P = 0.002; Fig. 4B). In addition, when we converted the radiologists' interpretations to a scale of 1-5 (1 being mild and 5 being severe), we found that women with CRSwNP had significantly higher scores than men (P = 0.013; Fig. S2). This suggests that despite the fact that CRSwNP was more common among men, women with CRSwNP on average had more severe sinus disease than men. Another indicator of CRSwNP disease severity is the need for revision surgeries. We assessed the total number of surgeries for men and women with CRSwNP, as well as the percentage of men and women that ever required revision surgery. We found that CRSwNP women were significantly more likely to require revision surgery than men (P = 0.039; Fig. 5A), and they had a history of more total revision surgeries than men (P = 0.018, Fig. 5B). Interestingly, CRSwNP women with comorbid asthma were significantly more likely to require revision surgery compared to nonasthmatic women (50% vs. 26%, respectively, P = 0.024, Fig. 5C), but revision surgery frequency was not different based upon asthma status in men (Fig. 5C). Together, these data further support the hypothesis that women with CRSwNP have significantly more severe disease than men.

[Figure 2 legend: (A) CRSwNP women were more likely to have comorbid asthma (64%) compared to CRSwNP men (46%). Asthma frequency was not different between men and women in the control or CRSsNP groups. (B) There was no difference in the frequency of atopy between men and women in any group. ***P < 0.001 by Chi-squared test.]
Discussion
It is well established that asthma, as well as many autoimmune diseases, is more prevalent and/or more severe in women [14,15]. B cells are known to play a key role in these diseases, and estrogen is known to promote B cell activation and antibody production in murine models of autoimmune disease [12-14]. We have previously demonstrated that B cells and antibodies likely play a key role in the pathogenesis of CRSwNP, but the mechanisms responsible for the activation of B cells and production of antibodies in NP are unclear [5,6,10,23]. Despite the fact that CRSwNP has features of both asthma and autoimmunity, it has not been determined whether there are sex-specific differences in the prevalence and/or severity of this disease. In the current study, we assessed the frequency and severity of CRSwNP among men and women previously recruited from our tertiary care clinics at Northwestern for our studies of CRS. Within this selected population, the frequency of CRSsNP between the sexes was similar, while men more commonly had CRSwNP than women. Interestingly, however, it was women with CRSwNP who had more severe sinus disease, were more likely to require revision surgeries, were more likely to have comorbid asthma, and were more likely to have been prescribed perioperative oral steroids (Figs. 2-5). Also, we found that the most severe form of CRSwNP, AERD, was much more common in women than men, which is in agreement with other published studies [24,25] (Fig. 1).

[Figure 3 caption. (A) Women with CRSwNP were more likely to be African-American compared to men (18% vs. 10%). (B) Women with CRSwNP were more likely to be prescribed perioperative oral steroids than men (29% vs. 20%). (C) Women with CRSwNP with asthma had the highest frequency of oral steroid use (39%). *P < 0.05, ***P < 0.001 by Chi-squared test.]

The increased frequency of asthma among CRSwNP women could simply be due to the overall increase in asthma prevalence among adult women compared to men in the general population. According to the National Center for Health Statistics 2014 National Health Interview Survey, the prevalence of asthma among women 35 and older in the USA was 8.8%, compared to 5.2% for men in the same age range [30]. It is also well established that patients with CRS have an increased prevalence of asthma compared to the general population [31], and the data in this work confirm that finding. Importantly, however, we found that the frequency of asthma was 46% in CRSwNP men and 66% in CRSwNP women. This 20-percentage-point increase is far higher than the 3.6-point increase in asthma prevalence documented in the general population of women compared to men. We also found that the frequency of asthma was 6% in control women and 3% in control men, which is in line with the estimates for the general population above. Moreover, we found that the frequency of asthma was 27% in CRSsNP women and 21% in CRSsNP men, which highlights the fact that CRSwNP women have a greater increase in asthma frequency than would be explained by the increased prevalence of asthma in patients with upper airway disease.
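To make the magnitude comparison above concrete, a short calculation of the absolute and relative sex differences in asthma frequency, using the percentages quoted in the text:

# Sex differences in asthma frequency: study groups vs. the general population.
groups = {
    "general population (35+)": (8.8, 5.2),   # women %, men % (NHIS 2014)
    "control":                  (6.0, 3.0),
    "CRSsNP":                   (27.0, 21.0),
    "CRSwNP":                   (66.0, 46.0),
}
for name, (women, men) in groups.items():
    print(f"{name}: +{women - men:.1f} percentage points "
          f"(ratio {women / men:.2f}x) in women")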
The severity of CRS disease can be measured using a variety of different techniques, ranging from assessment of patients' clinical symptoms (e.g., SNOT-22) [26] to evaluation of patients' sinus thickening on CT scan. For the latter, several different radiological scoring systems are available for quantifying CRS disease, with the Lund-Mackay (LM) scoring method being one of the more commonly utilized [21]. In this system, each sinus is assigned a score based on its mucosal thickening (0, absent; 1, partial opacification of the sinus; 2, complete opacification of the sinus). The scores for each individual sinus are then added to scores assessing patency of the ostiomeatal units to generate a final total value. In our study, we found no differences in the averaged total LM score between men and women with CRSwNP (Fig. 4A). One drawback of this scoring system, however, is that there is no delineation between degrees of partial sinus opacification, such that a sinus with 75% opacification receives the same score as one with only 10% opacification. Such a limited scoring system cannot account for the varying degrees of sinus inflammation observed in clinical populations. In contrast, the radiology-based scoring system used in our study has a more expanded classification for sinus opacification, as interpreted by clinical radiologists who were not affiliated with this study. This method allows the spectrum of sinus disease to be more fully characterized in our study population, making differences between men and women more evident. The clinical radiology scores showed that severity is unequivocally worse in women with CRSwNP than in men with CRSwNP (Fig. 4).

While our results strongly suggest that women with CRSwNP have more severe disease than men, the mechanisms responsible for this difference are not clear. As mentioned previously, estrogen has been shown to play a role in the sex bias seen in asthma and autoimmunity [14,15]; thus it is possible that estrogen plays a role in CRSwNP disease severity as well. In addition, our preliminary studies indicate that nasal polyp tissues from CRSwNP women have higher levels of inflammatory markers, such as eosinophil cationic protein (ECP), and increased levels of autoantibodies compared to polyp tissue from men (data not shown), suggesting that regulation of some inflammatory processes may be altered in women. Interestingly, estrogen has been shown to directly activate eosinophils in vitro, and eosinophils can support the survival of antibody-secreting cells in the bone marrow [27-29]. Furthermore, estrogen has been shown to promote production of autoantibodies by allowing autoreactive B cells to escape tolerance [14]. Thus, we speculate that estrogen may be a critical sex-specific factor that promotes the accumulation of ECP and autoantibodies in nasal polyps of women. Our ongoing studies are focused on elucidating the roles of estrogens and the mechanisms that may be responsible for the sex-specific differences in CRSwNP disease severity seen in this study.

Finally, it is important to note the limitations of this work. First, these results are based on a selective population of patients: those who actively sought specialized care for their CRS disease and often required surgical intervention. Although it is a weakness of our study that they may not represent the general population of people with CRSwNP, it may be a strength that the patients in our cohort are likely to be those with the most severe forms of the disease. It is reasonable to expect that patients with the most severe disease disproportionately utilize medical care and drive CRS health care costs. In addition, we examined records from a large number of patients over a 10-year time span, which strengthens our analyses.
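Returning to the Lund-Mackay system described above, a minimal sketch of how a total LM score is computed. The sinus names and example inputs are illustrative; the standard system (not spelled out in full in this paper) scores five sinus groups per side from 0-2 and each ostiomeatal complex as 0 or 2, for a maximum of 24.

# Minimal Lund-Mackay (LM) total-score sketch; example inputs are illustrative.
SINUSES = ["maxillary", "anterior_ethmoid", "posterior_ethmoid", "sphenoid", "frontal"]

def lund_mackay_total(side_scores: dict, omc_scores: dict) -> int:
    """side_scores: {(side, sinus): 0|1|2}; omc_scores: {side: 0|2}."""
    for (side, sinus), s in side_scores.items():
        assert sinus in SINUSES and s in (0, 1, 2), (side, sinus, s)
    for side, s in omc_scores.items():
        assert s in (0, 2), (side, s)  # ostiomeatal complex is scored 0 or 2
    return sum(side_scores.values()) + sum(omc_scores.values())

# Example: partial opacification of both maxillary sinuses, right OMC obstructed.
scores = {(side, sinus): 0 for side in ("left", "right") for sinus in SINUSES}
scores[("left", "maxillary")] = 1
scores[("right", "maxillary")] = 1
print(lund_mackay_total(scores, {"left": 0, "right": 2}))  # -> 4 (max possible 24)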
[Figure 5 caption. (A) CRSwNP women were more likely to require revision surgery than men (42% vs. 29%). (B) Women with CRSwNP required more surgeries on average than men; line represents the mean. (C) CRSwNP women with asthma had the highest frequency of revision surgeries (50%). *P < 0.05 by Chi-squared test (A and C) or by Mann-Whitney U test (B).]

There are myriad reasons why women in our study could have more severe disease; it may be that women with CRSwNP only seek out care once their disease passes a certain threshold of morbidity, or that socio-economic factors have a greater influence on their decisions to seek care. However, it has been documented that women with asthma visit all health care providers more often than men, and women are more likely to seek care sooner than men [32,33]. Thus the above explanations seem unlikely to account for the differences we have described in our study. Importantly, the epidemiologic studies required to elucidate the possible factors for these differences in disease severity have yet to be performed in CRS. As such, this work represents an important initial analysis that raises awareness of marked sex-specific differences in CRS, and we hope it will stimulate mechanistic studies and larger epidemiological studies in the field.

In summary, we have found that CRSwNP is more frequent in men in a large tertiary care population, but that women suffer from more severe disease. These findings suggest that the mechanisms underlying the pathogenesis of CRSwNP in men and women may be different, and they may provide novel insights for the development of improved therapeutic strategies for men and women with CRSwNP.

Author Contributions
K. E. Hulse wrote the manuscript, analyzed the data, and designed the study; W. W. Stevens assisted with writing the manuscript, analyzing data, and collecting patient information; A. T. Peters assisted with recruiting patients and collecting patient information; L. Suh, J. E. Norton, and R. Carter assisted with collection of patient samples; R. C. Kern, D. B. Conley, R. K. Chandra, B. K. Tan, L. C. Grammer, and K. E. Harris assisted with patient recruitment and sample collection; A. Kato, M. Urbanek, and R. P. Schleimer assisted with study design and writing the manuscript.

Supporting Information
Additional supporting information may be found in the online version of this article at the publisher's web-site.
Figure S1. Median age of the patients in each group at the time of study participation. (A) CRSwNP patients were older than control and CRSsNP patients; there was no difference in age between CRSwNP and AERD patients. (B) There was no difference in age between men and women in any patient group. Boxes represent medians with 25th and 75th percentiles; whiskers represent the min and max values. ***P < 0.001 by Kruskal-Wallis test with Dunn's correction.
Figure S2. Radiology score for CRSwNP in men and women. Women with CRSwNP had higher median radiologic scores (scale 1-5) based on clinical radiologist CT scoring. Boxes represent medians with 25th and 75th percentiles; whiskers represent the min and max values. *P < 0.05 by Mann-Whitney U test.
2016-05-12T22:15:10.714Z
2015-02-12T00:00:00.000
{ "year": 2015, "sha1": "4159554550f91808dba157e2a3714694f21f29e3", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/iid3.46", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4159554550f91808dba157e2a3714694f21f29e3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
136188014
pes2o/s2orc
v3-fos-license
Ecofriendly Synthesis of nano Zero Valent Iron from Banana Peel Extract

In this study, nano zero-valent iron (nZVI) was synthesized from banana peel extract (BPE) and ferrous sulfate. During the synthesis, the precursor and the reducing agent were mixed in a clean, sterilized flask in a 1:1 proportion. For the reduction of Fe ions, 5 ml of filtered BPE was mixed with 5 ml of freshly prepared 0.001-0.005 M aqueous FeSO4 solution under constant stirring at room temperature. Within a short time, a colour change from brown to black indicated nanoparticle formation. The nZVI was characterized systematically by UV-Vis spectroscopy, which was used to investigate the surface plasmon resonance (SPR). A characteristic surface plasmon absorption band was observed at 210 nm for the black nZVI synthesized from 0.001-0.005 M ferrous sulfate with 5 ml of BPE. The optimum concentration for the synthesis of nZVI was found to be 0.001 M Fe2+ ions; the intensity of the SPR band decreased slightly from 0.001 to 0.005 M. Particle size was characterized by TEM, which showed that the nZVI particles formed were larger than 100 nm.

Introduction
The term "nano" comes from a Greek word meaning very small; the nano scale denotes one billionth of a meter, or 10^-9 m [1-3]. Nanotechnology is defined as the manipulation of material through particular chemical and/or physical processes to synthesize materials with specific characteristics for particular applications; it is the art of creating and manipulating materials at the nanoscale (1-100 nm). Nanotechnology is a rapidly growing field of science with a variety of applications in science and technology [9]. It concerns the synthesis of nanoparticles of controlled size, shape, chemical composition, and dispersity for human benefit. To date, chemical and physical methods have successfully produced pure, well-defined nanoparticles, but they are expensive and harmful to the environment. The use of biological organisms such as plant extracts, plant biomass, and microorganisms could be an environmentally friendly alternative to chemical and physical methods for nanoparticle production [2]. Nanobiotechnology, a field bridging biology and nanotechnology, offers such an alternative by using biological resources such as plants and microorganisms [10,11]. Green synthesis of nanoparticles has been achieved by using plant extracts as reducing and capping agents. Much research has been done on the biosynthesis of silver nanoparticles with plant parts such as Punica granatum peels [13], Citrus sinensis peel [14], lemon leaves [15], Myrica esculenta leaf [16], Wrightia tinctoria leaves [17], and mango peel [18]. In this study, we report the ecofriendly synthesis of nano zero-valent iron (nZVI) using banana peel extract. Aqueous ferrous sulfate solution reacts with banana peel extract, causing rapid formation of very stable, crystalline nZVI. The synthesis of the nanoparticles proceeds very quickly, which justifies the use of crop residues in the biosynthesis of iron nanoparticles through ecofriendly and safer methods. In the next sections we describe the synthesis of nZVI as followed by the colour change, the change in absorbance, and the particle size formed after reduction.
Preparation of banana peel extract
Banana peel extract was used as the reducing agent for the synthesis of nZVI. Fresh banana peels were washed repeatedly with distilled water to remove dust and dirt. Approximately 25 g of peel was placed in a 250 ml beaker containing 75 ml of double-distilled water, boiled at 80 °C for 10 minutes, and filtered through Whatman No. 1 filter paper. The resulting filtrate was stored at 4 °C and used as the reducing and stabilizing agent.

Synthesis and characterization of nZVI
For the synthesis of nZVI, the precursor and the reducing agent were mixed in a clean, sterilized flask in a 1:1 proportion. For the reduction of Fe ions, 5 ml of filtered BPE was mixed with 5 ml of freshly prepared 0.001-0.005 M aqueous FeSO4 solution under constant stirring at room temperature. After a certain time, the colour change from brown to black indicated the synthesis of nanoparticles. The formation of nanoparticles was followed by UV-Visible spectroscopy over the wavelength range 150-550 nm, using a Shimadzu UV-1650pc spectrophotometer. Images of the nZVI were obtained with a Philips EM 400T transmission electron microscope (TEM) operated at 100 kV.

Green synthesis of nano zero-valent iron and characterization
A detailed study of the green synthesis of nZVI was performed using BPE. The formation of nZVI colloids was investigated by observing the colour change of the solution; the emergence of a black colour in the reaction mixture showed the formation of nZVI. Figure 1 shows the colour change of the nZVI synthesis reaction: tube A contains ferrous sulfate, tube B contains banana peel extract, and tube C contains nZVI in colloidal form. Banana peel contains polyphenols, which can reduce Fe ions to Fe(0). Polyphenols are biological components that interact with metal salts through OH functional groups and mediate their reduction to nanoparticles [19]. The UV-Vis spectra (Figure 2) showed the characteristic surface plasmon absorption band of the product at 210 nm, with a small decrease in intensity as the FeSO4 concentration increased from 0.001 to 0.005 M [20]. Samples were characterized using transmission electron microscopy (TEM); the TEM analysis is presented in Figure 3. The images confirm that the density of grid elements in each zone is related to the intensity of shading. Similarly sized nZVI particles can be seen in Figure 3; however, agglomeration caused the nZVI particle size to grow beyond 100 nm [21]. These results require further study and research.

Conclusion
In conclusion, nano zero-valent iron can be synthesized directly from ferrous sulfate and banana peel extract in aqueous media without the addition of chemicals (capping agents), making this an environmentally friendly nanoparticle synthesis. However, the size of the nZVI particles was more than 100 nm.
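As a small worked example of the solution preparation described in the methods above, a sketch assuming the heptahydrate salt (FeSO4·7H2O, molar mass about 278.01 g/mol) is used; the paper does not specify the salt form, so that choice is an assumption.

# Mass of FeSO4·7H2O needed for a target molarity and volume (assumed salt form).
MOLAR_MASS_FESO4_7H2O = 278.01  # g/mol

def feso4_mass_grams(molarity_M: float, volume_ml: float) -> float:
    moles = molarity_M * (volume_ml / 1000.0)  # mol = M * L
    return moles * MOLAR_MASS_FESO4_7H2O

# The study's range: 0.001-0.005 M, 5 ml per batch (mixed 1:1 with 5 ml of BPE).
for c in (0.001, 0.002, 0.003, 0.004, 0.005):
    print(f"{c:.3f} M x 5 ml  ->  {feso4_mass_grams(c, 5):.5f} g FeSO4·7H2O")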
2019-04-29T13:16:09.755Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "17563372848fe8253a7a0264d87b9f90fba7d111", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/795/1/012063", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f922abae1d26159f79d2563b7cf28569326ab91b", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
21939539
pes2o/s2orc
v3-fos-license
Predictability of Recurrence using Immunohistochemistry to delineate Surgical Margins in mucosal Head and Neck Squamous Cell Carcinoma (PRISM-HNSCC): study protocol for a prospective, observational and bilateral study in Australia and India

Objectives
Treatment failure and poor 5-year survival in mucosal head and neck squamous cell carcinoma (HNSCC) have remained unchanged for decades, mainly due to advanced stage at presentation and high rates of recurrence. Incomplete surgical removal of the tumour, attributed to the lack of reliable methods to delineate the surgical margins, is a major cause of disease recurrence. The Predictability of Recurrence using Immunohistochemistry (IHC) to delineate Surgical Margins (PRISM) in mucosal HNSCC study aims to redefine margin status by identifying the true extent of the tumour at the molecular level. It will do so by performing IHC with the molecular markers eukaryotic initiation factor eIF4E and tumour suppressor gene p53 on the surgical margins, and by testing the use of Lugol's iodine and fluorescence visualisation prior to the wide local excision. This article describes the study protocol at its pre-results stage.

Methods and analysis
PRISM-HNSCC is a bilateral observational study conducted in Darwin, Australia and Vellore, India. Individuals diagnosed with HNSCC will undergo routine wide local excision of the tumour followed by histopathological assessment. Tumours with clear surgical margins that satisfy the exclusion criteria will be selected for further staining of the margins with eIF4E and p53 antibodies. Results of IHC staining will be correlated with recurrences in an attempt to predict the risk of disease recurrence. Patients in Darwin will undergo intraoperative staining of the lesion with Lugol's iodine and fluorescence visualisation to delineate the excision margins, while patients in Vellore will not undertake these tests. The outcomes will be analysed.

Ethics and dissemination
The PRISM-HNSCC study was approved by the institutional ethics committees in Darwin (Human Research Ethics Committee 13-2036) and Vellore (Institutional Review Board Min. no. 8967). Outcomes will be disseminated through publications in academic journals and presentations at educational meetings and conferences. The study will be presented as a dissertation at Charles Darwin University. We will communicate the study results to both participating sites, which will share results with patients who have indicated an interest in knowing them.

Trial registration number
Australian New Zealand Clinical Trials Registry (ACTRN12616000715471).
[Strengths and limitations box, truncated in extraction: recurrence typically occurs within 1 year of wide local excision, so a follow-up period of a minimum of 1 year is a satisfactory end point to assess this outcome; patients may be lost to follow-up in case of death or change of address.]

Introduction
Head and neck cancer is the eighth most common cancer in the world, with approximately 650 000 new cases reported annually. The vast majority (more than 90%) are head and neck squamous cell carcinomas (HNSCCs), which arise from the epithelium lining the sinonasal tract, oral cavity, pharynx and larynx. HNSCCs are not homogeneous; on the contrary, their distinctive molecular genetic profiles have shown them to be heterogeneous tumours that differ in risk factors, pathogenesis and clinical behaviour. 1 Despite aggressive treatment regimens with wide surgical excision, radiotherapy and chemotherapy, which are all associated with substantial morbidity, the 5-year survival rates for head and neck cancer have not changed significantly in the last three to four decades. Much of this is attributed to the advanced stage of the disease at presentation, high rates of loco-regional recurrence from inadequate resection ensuing from compromised surgical margins of the tumour, and distant metastases. The numerous anatomic sites and the diversity of histological types in these locations also contribute to treatment outcomes. 2 3 Hence, early diagnosis and complete resection remain the key to prognosis, recurrence and survival in cancer management. The completeness of tumour resection is assessed by obtaining tumour-free margins, which is associated with a decrease in the rates of recurrence. 4 The intraoperative assessment of the tumour margin has conventionally been by naked eye examination and palpation, along with available imaging techniques. Vital staining, done by applying Lugol's iodine to the tumour and surrounding area, highlights the extent of tumour including premalignant conditions such as dysplasia and carcinoma in situ, thus elucidating a surgical margin 5 6 that can be completely missed by naked eye observation.
The use of the visually enhanced lesion scope (VELscope), a simple non-invasive handheld device, allows direct visualisation of alterations such as dysplasia through changes in tissue fluorescence. 7 In many institutions, the adequacy of surgical resection of the primary tumour is traditionally determined intraoperatively by histopathological diagnosis of H&E-stained frozen sections of the surgical margins. The formalin-fixed specimens of the excised tumour and remaining frozen section samples of the margins are histologically assessed and have been used as a potential indicator of recurrence and prognosis. However, the predictive ability of histopathological diagnosis alone has proven far from satisfactory. 8 9 This has been attributed to the undetectable subclinical molecular changes that occur within cells in the proximity of the visible tumour, as HNSCC is known to develop second tumours that are multifocal in origin. This phenomenon was explained by Slaughter et al 10 as 'field cancerisation', where multiple cell groups independently undergo neoplastic transformation under the stress of regional carcinogenic activity. These genetic alterations may lack the evidence of histopathological dysplasia and appear as uninvolved mucosa, accounting for local recurrence and incomplete surgical resection. 1 The initiation and progression of HNSCC is a multistep process that involves progressive acquisition of genetic and epigenetic alterations. Therefore, molecular analysis of surgical margins will perhaps play an increasingly important role in establishing tumour-free surgical margins. 8 11 However, most markers lack the sensitivity and ease of applicability for effective clinical use. 12 Mutations and overexpression of the tumour suppressor gene p53 are found in 40%-60% of HNSCC. 8 13 The eukaryotic protein synthesis initiation factor eIF4E (also known as 4E) has been found to be overexpressed in 100% of tumours of the breast, head and neck, and colon. 9 Overexpression of eIF4E in more than 5% of the basal cell layer of histologically tumour-free surgical margins of HNSCC predicts a significant increase in the risk of recurrence. 9 13 Nathan et al 13 found a strong correlation between tumour recurrence and overexpression of p53 and eIF4E in histologically tumour-free margins. They concluded that molecular assessment of margins was more reliable than routine H&E assessment and hence has the potential to guide clinicians in obtaining tumour-free wide margins for complete excision of the lesion.

Objective
The aim of the project is to conduct a prospective follow-up study of patients with head and neck cancer to:
► study the expression of the molecular markers p53 and eIF4E by immunohistochemistry (IHC) on histologically tumour-free surgical margins of the excision biopsies of HNSCC in patients from the Royal Darwin Hospital (RDH), Darwin, Australia and Christian Medical College (CMC), Vellore, India;
► determine the correlation of expression of p53 and eIF4E on histologically tumour-free margins with clinical outcomes, such as local recurrence and survival;
► determine the sensitivity and specificity of the molecular markers p53 and eIF4E on surgical margins in the assessment of adequacy of surgical excision and predictability of recurrence;
► study the outcomes of intraoperative use of vital staining and fluorescence visualisation;
► determine the epidemiological trend in Darwin and Vellore.
Methods and analysis

Study design
The Predictability of Recurrence using IHC to delineate Surgical Margins (PRISM) study is a prospective observational study in two countries, Australia and India, based at RDH, Darwin and CMC and Hospital, Vellore.

Target population
All patients diagnosed with mucosal HNSCC at RDH and CMC and treated with curative intent are potential candidates.

Inclusion criteria
► All patients at RDH, Darwin and CMC, Vellore during the recruitment period with a confirmed diagnosis of mucosal HNSCC on initial biopsy.
► Wide local excision biopsy with mucosal surgical margins ≥5 mm on histopathological examination.

Exclusion criteria
► Patients diagnosed with any other histological type of mucosal head and neck cancer.
► Wide local excision biopsy specimens with surgical margins that show dysplasia or carcinoma in situ, or that are positive (<1 mm) or close (1-5 mm) for invasive tumour on histopathological examination.
► Patients with metastatic disease, except a single regional lymph node with no extracapsular spread.

Patients diagnosed with mucosal HNSCC by clinical evaluation and biopsy at RDH, Darwin, Australia and CMC and Hospital, Vellore, India will initially be selected based on the selection criteria for the study. All patients will undergo the relevant imaging (CT and/or MRI), and eligibility will be assessed against the exclusion criteria. Consent to perform the tests on patients being prepared for excision surgery will be procured by the local site investigators MT (Darwin) and JR (Vellore) (figure 1).

Intraoperative assessment
Patients at RDH will undergo a VELscope examination and Lugol's iodine staining to mark the extent of the tumour and identify surgical margins. These tests will not be performed at CMC.

Postoperative assessment
Five surgical margins of the excised tumour will be colour coded using marking ink, labelled with sutures, numbered and photographed. The surgeons at both sites will mark margins 1, 2, 3, 4 and 5 with black, red, blue, green and yellow, respectively. Paraffin sections from the primary tumour and all the surgical margins will be routinely reported by the resident pathologists at the Pathology Department at RDH, Darwin and the Department of Pathology at CMC, Vellore. Patients with histologically tumour-free margins that satisfy the selection criteria will finally be included for further analysis by IHC using p53 and eIF4E antibodies on the mucosal margins. An excision margin is free of tumour when it is ≥5 mm away from the tumour. Coauthors SM and/or MeT will countercheck the eligibility criteria of the sections selected for IHC. Immunohistochemical staining for p53 will be performed using an avidin-biotin-peroxidase enzyme complex with a prediluted monoclonal anti-p53 antibody (Ventana). A positive p53 control (figure 2), standardised in the laboratory, will be used in the assessment of the mucosal surgical margins. Positive p53 staining of malignant cells will be indicated by an unequivocal brown stain of the nucleus. Immunohistochemical staining for eIF4E will be carried out with a polyclonal antibody to eIF4E at 1:500 dilution. A positive eIF4E control (figure 3) has been standardised on breast tissue with infiltrating duct carcinoma. Brown perinuclear staining of the tumour cells indicates a positive eIF4E stain. The tumour and margins will be graded and scored for both p53 and eIF4E according to the intensity of staining and the percentage of positive cells.
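A small helper sketching how a per-case IHC readout can be turned into grades. The cutoffs are those the protocol gives in the paragraph that follows (per cent positive graded 1-4, intensity graded 1-3); the combined product score at the end is our illustrative assumption, not a quantity the protocol defines.

# Sketch of the protocol's IHC grading (cutoffs from the protocol; the combined
# score is an illustrative assumption, not a protocol-defined quantity).
def grade_percent_positive(pct: float) -> int:
    """1 = 1-25%, 2 = 26-50%, 3 = 51-75%, 4 = 76-100% immunopositive cells."""
    assert 1 <= pct <= 100, "grading applies to positive cases only"
    return min(4, 1 + int((pct - 1) // 25))

def grade_intensity(label: str) -> int:
    """1 = weak, 2 = moderate, 3 = strong average staining intensity."""
    return {"weak": 1, "moderate": 2, "strong": 3}[label]

# Example margin readout averaged over at least 10 fields at 10x magnification.
pct, intensity = 40.0, "moderate"
g_pct, g_int = grade_percent_positive(pct), grade_intensity(intensity)
print(f"percent grade = {g_pct}, intensity grade = {g_int}, product = {g_pct * g_int}")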
Positive cases will also be evaluated using a 10× objective in at least 10 fields by light microscopy. Areas containing the most uniformly stained tissue will be chosen for evaluation. Immunoexpression will be quantified as (1) per cent of immunopositive neoplastic cells per 10 fields and (2) average intensity of immunostaining in the positive neoplastic cells per 10 fields. The per cent of positive cells will be graded on a scale of 1-4 (1 = 1%-25% positive; 2 = 26%-50% positive; 3 = 51%-75% positive; 4 = 76%-100% positive). Immunostaining intensity will be graded 1-3 (1 = weak; 2 = moderate; 3 = strong). Prior to embarking on interpretation, coauthors SJ and GC will come to a consensus on scoring and interpretation of the staining. Subsequently, each case will be read by SJ and supervised/counterchecked by GC. The two observers will be blinded to follow-up information.

Follow-up
All patients will be followed up and reviewed clinically every 3 months for the first year and at 6-month intervals in the second year. In case of any suspicion, a biopsy to rule out recurrence will be performed.

Evaluation of outcomes
The primary outcomes are to (1) list the patients whose surgical margins are reported free of tumour with routine H&E staining but show positive immunohistochemical staining with p53 and/or eIF4E, (2) list the patients with disease recurrence and metastasis, and (3) evaluate the use of Lugol's iodine and VELscope in the patients from Darwin. The secondary outcomes are to correlate recurrence of disease with p53 and eIF4E positivity, and to correlate metastasis with p53 and eIF4E positivity. During follow-up reviews, patients will be assessed by local examination, biopsy of any suspicious lesion and MRI scans. The outcomes will be evaluated based on data collected from patient files with regard to the period of tumour-free survival, time to recurrence and/or metastasis, disease-specific survival and overall survival.

Data management
The data, collected and entered on an Excel spreadsheet based on the study proforma, will be stored by SJ on a password-protected computer and a portable external hard drive.

Statistical analysis
The data on the surgical margins will be analysed statistically with SPSS software. Contingency tables and the Chi-squared test will be used to evaluate the association of eIF4E and p53 in the surgical margins with race, sex, stage, lymph node status, histological grade, postoperative radiation, and eIF4E and p53 expression in the tumour and margins. A univariate analysis of clinical factors will be performed using the Cox model to identify those variables significantly associated with prognosis. Multivariate analysis will be performed to test for the simultaneous effect of two or more factors. Event-time distributions for recurrence will be estimated by the Kaplan-Meier method and compared by the log-rank test to determine the individual and combined effect of eIF4E and p53 expression in the margins. Similar curves will be generated to determine the effect of nodal status together with eIF4E and p53 levels in the margins, as nodal status is a significant prognostic factor in HNSCCs. The consistency of the protocol at both sites will be assessed and the study will be periodically reviewed.
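As a minimal sketch of the planned event-time analysis, assuming the lifelines Python package rather than SPSS, with invented durations standing in for trial data:

# Kaplan-Meier curves and a log-rank comparison for recurrence, sketched with
# lifelines (the protocol specifies SPSS; this is an illustrative stand-in).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months to recurrence (or censoring) by margin status.
months_pos = [6, 9, 11, 14, 24]      # eIF4E-positive margins
event_pos  = [1, 1, 1, 0, 1]         # 1 = recurrence observed, 0 = censored
months_neg = [18, 22, 24, 24, 24]    # eIF4E-negative margins
event_neg  = [1, 0, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(months_pos, event_observed=event_pos, label="eIF4E-positive margins")
print(kmf.survival_function_)

result = logrank_test(months_pos, months_neg,
                      event_observed_A=event_pos, event_observed_B=event_neg)
print(f"log-rank P = {result.p_value:.4f}")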
Discussion
The PRISM-HNSCC study is a bilateral research project conducted in two countries that carry a huge burden of the disease. Among the states and territories of Australia, the Northern Territory has the highest incidence of HNSCC, and RDH is the largest public hospital that facilitates the treatment and management of the disease. 14 The actual burden of head and neck cancer in India is much greater than that reflected in the existing literature; it is the most common malignancy encountered in Indian males. 15 According to the WHO, lip and oral cancers are the third most common cancer in India, with nearly 68% mortality in 2012. 16

Head and neck cancer is considered to progress through a multistep process from normal histological features to hyperplasia, mild dysplasia, moderate dysplasia, severe dysplasia, carcinoma in situ, invasive carcinoma and metastasis. 3 Malignant transformation in cells can be microscopically invisible with H&E staining and may be identified more accurately with molecular markers, especially in head and neck cancer, where, as a result of field cancerisation, the entire mucosa has often undergone atypical changes. 1 3 9 A retrospective study conducted in Darwin suggested the efficacy of IHC with eIF4E and p53 antibodies on surgical margins of HNSCC in assessing the completeness of surgery; however, the sample size was too small for a firm conclusion. 14 Hence, a larger, prospective study was warranted to validate the above finding. A further aim of this study is to evaluate the use of vital staining and VELscope. These methods are currently being studied by McCaul et al 17 and Poh et al, 18 respectively. The uniqueness of this project is the ability to study the outcomes and evaluate the efficacy of all three methods together.

Staining with Lugol's iodine solution has been shown to be effective in intraoperatively delineating the extent and precise border of cancerous and dysplastic epithelium of the mucosal surface. It is cheap and hence can be used as a cost-effective, easy and quick screening test, particularly in resource-poor countries, for detecting premalignant mucosa in individuals who consume tobacco or alcohol and have other lifestyle risk factors. 5 6 VELscope has up to 55% accuracy in enhancing the direct visualisation of dysplastic mucosa. When combined with Lugol's iodine, there is potential for increasing the accuracy of the screening method. However, purchasing the equipment entails a capital expenditure that may eventually prove cost-effective by avoiding recurrence. 7

Molecular analysis by performing IHC on surgical margins with eIF4E and p53 has been suggested to predict recurrence in previous studies; however, the role of p53 is controversial. Besides being a prognostic marker, eIF4E can also be targeted for therapeutic intervention. 8 13 19 The TP53 and retinoblastoma pathways are almost universally disrupted in HNSCCs, indicating the importance of these pathways in head and neck tumourigenesis. More than 50% of HNSCCs harbour TP53 gene mutations and over 50% demonstrate chromosomal loss at 17p, the site where the TP53 gene resides. 1 The eukaryotic protein synthesis initiation factor eIF4E has been found to be elevated in breast carcinoma and HNSCC, but not in benign lesions or normal mucosa. Recurrence of HNSCC was found to be more common in patients with elevated eIF4E in surgical margins. No other marker has provided comparable evidence of effectiveness in detecting malignant alteration in cells. Since recurrence in HNSCC usually occurs within the first 2 years, the prognostic value of eIF4E can be assessed within a relatively short follow-up time. 9
Since both institutions receive HNSCC patients representative of the sample population, the results can be validated for impact. This collaborative trial between two countries has set a precedent on which to build and continue the partnership for future studies, education and the guiding of protocols in diagnosis and treatment.

Ethics and dissemination
All patients (or their legally authorised representatives) included in this study will sign a consent form that describes the study and provides sufficient information for patients to make an informed decision about their participation. Written consent from every patient at both centres will be obtained on the Human Research Ethics Committee (HREC)/IRB-approved consent form before that patient's biopsy specimen undergoes IHC. Any protocol amendments will be communicated to investigators, HREC/IRB, participants and the Australian New Zealand Clinical Trials Registry, as deemed necessary. Clinical and histopathological information about study participants will be accessible only to the site investigators and kept confidential by them. Identifiable data collected from electronic and hardcopy patient files by SJ will be stored securely on a password-protected computer and external hard drive. Deidentified data will be used for analysis and interpretation of the results. Paraffin sections and slides will be stored in the departmental repository. Results of the study will be submitted for publication and presented as a dissertation and at departmental meetings and conferences.
2018-04-03T04:11:35.375Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "c463ee8cb34abb77feee80ce6df2a7fbe5f62584", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/7/10/e014824.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "c463ee8cb34abb77feee80ce6df2a7fbe5f62584", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
153320301
pes2o/s2orc
v3-fos-license
Legal Opportunities of Bank Interests: Reinventing Analysis of the Mashlahat Theory of Al-Syathibi

This study analyses the legal opportunities surrounding interest on bank savings. In general, banks in Indonesia strongly influence the dynamics of the economic life of society; their presence extends evenly even into remote areas, which implies easy access and services for the wider community. If a prohibition of interest on bank savings were imposed, it could lead to economic turmoil in the form of a massive diversion of funds by customers that could systemically affect the stability of the national economy, considering that the majority of government agencies (including ministries of religion) and private sectors accommodate the livelihood of the general public, especially civil servants and private employees, who increasingly depend on the existence of conventional banks. The mashlahat theory put forward by al-Syatibi can provide legal opportunities, both macro (general) and micro (personal). In this study, the author finds that there are legal opportunities for interest on bank savings under an analysis of al-Syatibi's theory of mashlahat, contained in his masterpieces on the concept of ijtihad. According to Hamka Haq, among the many writings of al-Syatibi, only al-Muwafaqat and al-I'tisam were published, 7 while others are only mentioned in a few historical records. 8 Based on these data, it can be understood that al-Syatibi was in fact a productive scholar of scientific works, but most of his works did not circulate in the wider community. Observing the dynamic development of contemporary Islamic law, one aspect that has always been controversial is bank interest. In this study, the author focuses on the theoretical approach of al-Syathibi's mashlahat; thus, in the subsequent analysis, the legal opportunities of interest on bank savings can be explained and identified.

B. Overview of banking
Banking comprises financial institutions, including banks, together with the manner and process of carrying out their business, which is to collect funds safely and securely and channel them onward. According to Abdurrahman, "a bank is a type of financial institution that carries out a wide range of services, such as providing loans, circulating currency, supervising currency, acting as a storage place for valuables, and financing companies' businesses, among others". 9 For Somary, "a bank is an active agency providing credit to customers, either in the form of short-term, medium-term or long-term loans". For Hasibuan (1992), a bank is a financial institution that produces money, collects and lends funds, facilitates payment and billing, stabilizes the monetary system and drives dynamic economic growth. Act No. 7 of 1992 concerning banking, as amended by Act No. 10 of 1998, states that "banks are business entities that collect funds from the community and channel them back in the form of credit and other forms in order to improve the lives of many". 10 Banks at the present time sit at the center of the world's economic life, because they are responsible for providing credit and issuing money and cheques; they occupy an important place in state finance, on which the progress of the state itself depends, and it is an inescapable fact that we cannot avoid banks in the management of an economy.
In the modern economic world, the bank is a vital economic institution; without banking institutions, the economy would not run smoothly. Against this reality we must acknowledge that the banking system is here to stay, and, in line with the true history of Islam, Muslims are permitted to engage in muamalah with the banks that exist today, considering the emergency condition, or the urgent situations that cannot be avoided. From the aforementioned opinions, it can be concluded that, for an economic institution whose main task is to maintain the traffic of finance, establishing a banking institution is, under Islamic law, obligatory as fard kifayah.

C. The controversy over interest on bank savings in the view of the ulama
Regarding the Islamic view of interest on money in banks, opinions still differ on whether it is unlawful; the scholars therefore disagree about its position in Islam and have given different opinions about the status of interest on money. Muhammad Abu Zahra, a professor of Islamic law at Cairo University, affirmed his opinion that bank interest is riba nasi'ah, which is forbidden in Islam. 11 Dr. Yunus found that bank interest is forbidden riba; the existing banking system was originally born out of a capitalist economy, and were the Islamic economic system implemented as well as possible, it would by itself eliminate the interest system in banking institutions. 12 Ahmad Abdul Aziz al-Najjar said that the important reasons Islam abhors transactions using the interest system are: (1) the principle of the Islamic conception collides directly with the interest-rate system and all its obvious consequences for human lives, thoughts and mentality; in other words, there is no place for the practice of interest in Islam; (2) the interest system is a disaster for the human race, not only for faith (aqidah) and the human mind, but also for the sanctity of business and the practice of life; and (3) relationships built on the basis of interest only undermine a person's conscience, views and feelings toward his fellows in the community. 13

On the other hand, Sheikh Mahmud Syalthout expressed the opinion that interest on savings in the bank is lawful, for the following reasons: by the laws of syara' and the correct rules, we find interest on savings in the bank lawful and not unlawful, because the money saved is not a debt of its owner to the bank; rather, the owner of the money voluntarily comes to the bank and requests that the money be accepted, and the owner also knows that the bank puts the money to work in fields of trade where, moreover, there is little risk of loss. 14 Ahmad Abdillah allowed the collection of money with interest if the interest has been stated first, if the tariff is known in advance, if the tariff is known by the people, and if the people who come to borrow show willingness on those terms; interest levied by banks is thus not forbidden, because the bank has always stated its charge. 15 A. Hassan, a founder of Persatuan Islam (Persis), expressed his opinion as follows: the verses of the Qur'an and the hadith which forbid interest do not explain its boundary, except QS Al-Imran/3: 130, which prohibits multiplied riba; this verse is muqayyad (qualified), while the other verses are absolute. According to the rules of ushul fiqh, when there are two statements on a matter, one muqayyad and the other absolute, the qualified one (muqayyid) is applied; for example, if a doctor forbids eating a lot of rice, it means we may still eat rice in small quantities. 16
D. A short story of al-Syathibi and the theory of mashlahat
Al-Syatibi's full name was Abu Ishaq Ibrahim Ibn Musa al-Gharnatiy. He was of Arab descent, from the Lakhmi tribe, while his famous name, al-Syatibi, is taken from the name of his family's country of origin, Syatibah (Xariva or Jativa). 17 Although attributed to that country, he was presumably not born there because, by then, the city of Jativa had fallen into Christian hands and all Muslims had been expelled from it since the year 1247 (645 H), nearly a century before al-Syatibi's life. Al-Syatibi's family may have left the country when that happened and then settled in Granada. The Romans called the city Saetabis. The city is located in the eastern part of Spain, in the region of Valencia. The city of Syatibah was very famous in the Middle Ages for its paper industry, which exported not only within Spain but to every corner of the world, including Egypt. Under Islamic rule, Syatibah was the second largest city in the Valencia region, and its population was then much larger than it is now. The city's recorded history began when it was united with Valencia as part of a separate kingdom built in the eleventh century by 'Abd. In addition, al-Syatibi learned many sciences of philosophy and other Islamic sciences, especially Islamic jurisprudence (ushul fiqh), from renowned teachers. This indicates that al-Syatibi had depth of knowledge and breadth of thought, which caused him to emerge as a mujtahid. With this depth of knowledge and breadth of thinking, al-Syatibi produced a variety of works; as mentioned, the best known among them is al-Muwafaqat, which deals chiefly with fiqh.

Al-Syatibi divided ijtihad into two kinds: ijtihad that lasts until the end of time, as it relates to tahqiq al-manat, and ijtihad whose object can cease before the Day of Judgment, because it relates to tanqih al-manat and takhrij al-manat. 21 Tahqiq al-manat is the reasoning used to identify the illat (effective cause) in the parts of a desired qiyas with respect to its origin, while tanqih al-manat is the reasoning used to establish an illat fixed by nash or ijma'. Takhrij al-manat is the reasoning used to derive an illat that is stated by neither nash nor ijma'. Furthermore, al-Syatibi stated in al-Muwafaqat that after ijma' and qiyas, the next method is al-mashlahah, namely al-masalih al-mursalah, defined as a method of ijtihad that operates when a matter has no source in the nash of Islamic law, no source in consensus, and none in other methods such as qiyas. 22 Al-masalih al-mursalah (maslahah) has in fact been practiced since the time of the Prophet: it was carried out by the Companions, and the Prophet himself justified it. Textually, the Companions at times departed from the syara' because they acted outside its stated conditions; however, the Companions saw a great maslahah behind their action, one that would cause no mafsadat (harm) if done, so the text was set aside in favour of the context, because in certain circumstances the text is irrelevant and the situation calls for the more maslahah-oriented course, even where no explicit rule exists. Maqashid al-shariah is understood as the values and objectives targeted by the syara' that are implied in all legal reasoning. In this regard, al-maslahah, as one of the goals of the Shari'a, should be enforced.
This is evident because many cases from the time of the Prophet and the Companions are raised in the literature. One of them can serve as an example: the Prophet once issued a rule that the Muslims should not keep the meat of sacrificial animals beyond a certain limit, a provision of three days. Some years later, a number of Companions violated that rule. The case was submitted to the Prophet, and he justified the Companions' action, explaining that the rule on keeping meat had been based on the interests of guests made up of poor people; since no more were coming, the meat could now be kept. 23 From this account, maqashid al-shari'ah has clearly been taken into consideration in determining the law. Society's need for a law in one period can change into a different form of law in another period, due to the development of the times. The condition of society at the time of the Prophet was not like the conditions experienced by people today; this, in turn, carries the logical consequence of the development of the law itself, as the Islamic world has come into contact with other civilizations, and ijtihad must of course conform with that development.

According to al-Syatibi, the maslahah that is God's purpose in Islamic law should be embodied, because safety and welfare are not possible without maslahah, especially the dharuriyah kind, which includes five things: the maintenance of religion, life, intellect, lineage and wealth. 24 Furthermore, al-Syatibi asserts that maqashid al-shariah, when associated with welfare, can be viewed from two aspects: first, God's purpose (maqasid al-shari'ah), and second, the purpose of the mukallaf (maqasid al-mukallaf). Viewed from the first aspect, it contains four issues, namely: (1) the original purpose of the Lawgiver in establishing law is the benefit of the human family in this world and the hereafter; (2) the determination of law as something that should be understood; (3) the determination of law as something that should be implemented; and (4) the determination of law to bring human beings under legal protection. 25 God's purpose in establishing law for the benefit of man is none other than man himself. Therefore, God requires us to understand and implement the law according to our levels of ability. By understanding and implementing the Shari'a, people are protected from the chaos caused by lust. The purpose of the law, seen from the mukallaf's aspect, is that every mukallaf should comply with the four goals of the Shari'a outlined above, so that the goal, human welfare in this world and in the hereafter, can be achieved.

Various examples of al-maslahah are presented by al-Syatibi in al-Muwafaqat. Among them are compiling the Qur'an into the mushaf and the effort to publish it; into this category also fall the codification of the Shari'a sciences and much more, for example Nahwu (Arabic grammar), for which no explicit proposition sanctions the effort. Maslahah, whose plural form is "masalih", is according to al-Syatibi what underlies the perfection of human life and allows humans to obtain the necessities of life so that they can prosper. This cannot be achieved simply by i'tiyad (acting as usual); rather, the effort to achieve human welfare confronts various difficult challenges. Matters of eating, drinking, clothing, housing, transportation, weddings and the like will probably not be obtained except by hard work. 26 Furthermore, al-Syatibi classified maslahah into three levels, namely dharuriyah, hajiyah and tahsiniyah.
What is meant by dharuriyah mutlaq is everything that must exist for the sake of life and well-being in this world and the hereafter. If this dharuriy welfare is not embodied, human life in this world is endangered, and in the hereafter man is threatened with torment. 27 Dharuriy welfare covers five things, namely the guaranteed obligation to believe in Allah, the guaranteed obligation to live, the maintenance of sound health, the preservation of descent, and the preservation of property, in the latter case also protecting property from the effects of currency devaluation. Hajiyah, meanwhile, is the fulfilment of human needs in the form of facilities so that human life is spared hardship (masyaqqah). If this second kind of requirement is not met, human life faces many obstacles that make it difficult, even though those obstacles would not destroy life. The last, tahsiniyah, covers everything that helps to perfect a decent human life according to reason and tradition and keeps human life free from defects and deficiencies. Although merely complementary, tahsiniyah welfare is no less important, since much of it is associated with the ethics of a good life (makarim al-akhlaq). 28

From al-Syatibi's descriptions of maslahah as a method of ijtihad, as set out in al-Muwafaqat, he concluded that the establishment of life in this world can only be achieved if maslahah is well implemented. 29 Thus, anything that contains only worldly benefit, without benefit for the hereafter or without supporting the realization of the benefit of the afterlife, is, according to al-Syatibi, not maslahah. With this conclusion, it can be understood that man, in realizing maslahah, must be free from lust, because benefit is not measured by the desires of lust. Based on the above descriptions, it is clearly illustrated that none of the needs of human life, from the level of tahsiniyat up to daruriyat, fails to contain maslahah. Such is al-Syatibi's study of maqashid al-shari'ah in his book al-Muwafaqat.

There is a group of scholars who consider that interest is not the same as riba, although both are basically an addition to the loaned capital. In the world economy, lending and borrowing money has become a habit; many traders base their company's capital on money borrowed from others, expecting from it profits for the company.

E. Rationalization of the legal opportunity of bank interest
Borrowing money is a good and effective way of doing business in today's world of trade. It rests on the fact that banks base their work on a system of borrowing and lending capital, and that this work must involve interest in the hope of obtaining a profit; this means that a portion of the profits is given to the owner of the money. Interest and riba can both arise from the process of debt and credit; borrowing money can therefore be viewed as the fundamental base from which both interest and riba emerge. In essence, riba was forbidden to prevent man from falling into misery and squalor, because riba took the form of coercion and extortion, and its damage indeed far outweighs its benefits.
As for interest, it arises from the economic reality that people pursue profit, and it brings benefit not only to the owner of the principal but also to the borrower, for both parties to the cooperation stand to gain. The borrower's calculation is based on the possibility of profiting from the use of the money: the money can be used to trade, or to buy houses from which rent can be collected, so when that money is lent out instead, the lender may ask for interest as compensation for the profit forgone. If the borrower calculated that using the loan would result in a loss, he certainly would not borrow; likewise, even with his own money, he certainly would not put it into a line of business expected to be disadvantageous. This is logical, and it is a working principle for every trader. Taking interest from people therefore does not in itself amount to maltreatment of the borrower, and there is no extortion in it that would make it forbidden like riba, so long as the parties act according to the agreement they have made. Indeed, it would not be fair if only the person who uses the loan money earned a profit. The author puts forward the opinions of two scholars whose expertise relates to emergencies (darurat). First, al-Syathibi expressed the view that when we are hemmed in by the unlawful and the lawful path is already closed, that is an emergency, since the purposes of Islam include not only the preservation of life itself but also the need for food, clothing and shelter as well as other aspects of welfare; and even when we are then permitted to resort to what is otherwise forbidden, we should not use it excessively but only as normally needed. 30 Second, K.H. Mas Mansyur expressed the view that although banks are built, managed and operated in dealings that touch the illicit, reality proves their importance: if we do not associate with banks, we will be overtaken by urgent need and fall behind. 31 Theoretically, saving money in a bank and providing loans with interest is a form of extortion of one of the parties; in practice, however, there is abundant evidence that loans with interest offer benefits to many individuals and to the public interest. There are several reasons why the author is inclined to regard this as an emergency matter of muamalah:
1) A number of conventional banks provide easy access to capital for micro-enterprise communities, especially in rural areas. This ease of access has not yet been optimally matched by banks operating under an Islamic brand.
2) The debate over an Indonesian version of Islamic banking as against the international version has not yet been settled.
3) The existence and mechanisms of Islamic banking are currently not considered to be in line with the expectations and needs of the community. Conventional banking therefore tends to be the option that can be "trusted". 32
4) The operation of banks that use the interest system has in fact helped ward off various harms faced by society at the macro level and by entrepreneurs at the micro level.
5) In general, conventional banks in Indonesia have taken root in every joint of society's economic life, and their presence is spread evenly into the remotest corners. This has implications for the ease of access and service available to the wider community.
30 Sugardi Gunarto, Usaha Perbankan dalam Perspektif Hukum (Cet. V; Yogyakarta: Kanisius, 2007), p. 197.
If a prohibition of interest on bank savings were imposed across the board, it could produce macro-economic shocks in the form of a massive withdrawal of funds by customers, which could systemically affect the stability of the national economy.
6) The majority of government agencies (including the Ministry of Religion) and the private sector, which provide the livelihood of the general public, especially civil servants and private employees, are increasingly dependent on the existence of conventional banks. Thus, there is still room to preserve the legal permissibility of interest in conventional banks.
7) The rules of jurisprudence (fiqh) hold that hajat (need) occupies the position of darurah (necessity), and circumstances of necessity permit the doing of what is otherwise forbidden (Minhajuddin, Diktat Fiqh Tentang Muamalah Masa Kini: al-Fiqh al-Mu'ashir Fil-Muamalat, Ujung Pandang: Berkah, 1990, p. 33).

F. Conclusion

Based on the results of the studies above, it can be concluded that interest, in the present context, is still likely to be permissible, considering the following. First, saving money in banks is already entrenched in Indonesian society for many different reasons: some savers aim at the safety of their money from theft, fire and the like, while others aim at profit through returns so that their money can grow steadily and continuously; among existing savers, some are permanent depositors and some are not. Second, loans with interest do in fact serve the good of individuals and the public interest. Borrowing and lending money is a good and effective instrument in today's world of trade; banks base their work on a system of capital lending and borrowing, and that work is subject to interest in the hope of obtaining a profit, a portion of which is given to the owner of the money. Third, the operation of banks that use the interest system has in fact helped ward off various harms faced by society at the macro level and by entrepreneurs at the micro level. Fourth, conventional banks in Indonesia have taken root in every joint of society's economic life, and they also offer easy access and good service to the public. Fifth, if a prohibition of interest on bank savings were imposed across the board, it could produce macro-economic shocks in the form of a massive withdrawal of funds by customers, which could systemically affect the stability of the national economy. Sixth, the majority of government agencies (including the Ministry of Religion) and the private sector, which provide the livelihood of the general public, especially civil servants and private employees, are increasingly dependent on the existence of conventional banks. The author therefore sees a continuing basis for preserving the legal permissibility of interest in conventional banks.
A Voxel-Based Morphometric MRI Study in Young Adults with Borderline Personality Disorder

Background Increasing evidence has documented subtle changes in brain morphology and function in patients with borderline personality disorder (BPD). However, results of magnetic resonance imaging volumetry in patients with BPD are inconsistent. In addition, few researchers using voxel-based morphometry (VBM) have focused on attachment and childhood trauma in BPD. This preliminary study was performed to investigate structural brain changes and their relationships to attachment and childhood trauma in a homogeneous sample of young adults with BPD. Method We examined 34 young adults with BPD and 34 healthy controls (HCs) to assess regionally specific differences in gray matter volume (GMV) and gray matter concentration (GMC). Multiple regressions between brain volumes measured by VBM and attachment style questionnaire (ASQ) and childhood trauma questionnaire (CTQ) scores were performed. Results Compared with HCs, subjects with BPD showed significant bilateral increases in GMV in the middle cingulate cortex (MCC)/posterior cingulate cortex (PCC)/precuneus. GMC did not differ significantly between groups. In multiple regression models, ASQ insecure attachment scores were correlated negatively with GMV in the precuneus/MCC and middle occipital gyrus in HCs; that is, HCs with more severe insecure attachment showed smaller volumes in the precuneus/MCC and middle occipital gyrus, whereas no negative correlations between insecure attachment and GMV in any region were found in the BPD group. In addition, CTQ total scores were not correlated with GMV in any region in either group. Conclusions Our findings fit with those of previous reports of larger precuneus GMV in patients with BPD, and suggest that GMV in the precuneus/MCC and middle occipital gyrus is associated inversely with insecure attachment style in HCs. Our finding of increased GMV in the MCC and PCC in patients with BPD compared with HCs has not been reported in previous VBM studies.

Introduction

Borderline personality disorder (BPD) is a highly prevalent axis II psychiatric disorder in general and clinical populations [1], typified by features such as pervasive instability in the regulation of emotion, self-image, interpersonal relationships and impulse control [2]. The estimated prevalence of BPD is 2% in the general population [3], 10% among psychiatric outpatients and 15%-25% among psychiatric inpatients [4]. BPD is also a paradigmatic disorder of adult attachment, with high rates of antecedent childhood maltreatment [5]. Several developmental models have suggested that BPD pathology (i.e., BPD or its features) is shaped by a combination of biological and environmental mechanisms, the latter of which include social and attachment-related disturbances [6]. Previous studies have supported the view that insecure attachment styles are associated with personality disorders, among which cluster B personality disorders (i.e., antisocial, narcissistic and especially borderline) are prominent [7,8]. Agrawal et al. [9] noted several reports of a significant, strong association between insecure attachment and BPD, notwithstanding variation among studies in the measures used and attachment types examined. However, very few studies of BPD to date have examined neural patterns in relation to attachment, a basic behavioral system that processes relationship-based emotional experience and regulation in subjects with BPD [10].
Childhood trauma, another psychological characteristic hypothesized to lead to BPD [11,12], has been found to be associated with many adult psychiatric disorders, including affective and dissociative disorders, substance use disorders and sexual dysfunction [13]. Zanarini et al. [14] suggested that childhood maltreatment (i.e., emotional neglect, physical and/or sexual abuse) by a caregiver is among the most important psychosocial risk and prognostic factors for BPD pathology. Childhood exposure to physical or sexual abuse or severe neglect is related to anxious and avoidant adult attachment [15]. Adult disorganized or unresolved attachment has been related to maltreatment and physical abuse or neglect in childhood [16]. However, the neural correlates of attachment disturbance and childhood maltreatment in subjects with BPD are presently unknown. Therefore, using neuroimaging techniques to examine these two psychological characteristics is necessary to gain a comprehensive understanding of BPD. Several neuroimaging techniques, such as positron emission tomography (PET) and region-of-interest (ROI) morphometry, have been used to enhance understanding of the psychobiology of BPD [17,18]. Most studies of brain volume in BPD have used an a priori ROI approach, which enables precise detection of small volume differences [18,19]. By contrast, few studies to date have used voxel-based morphometry (VBM) [20], an unbiased, fully automated technique believed to be superior to approaches such as ROI analysis [21]. Because VBM assesses whole-brain structural imaging data using Statistical Parametric Mapping (SPM), it requires no a priori definitions of anatomical areas and is independent of specific hypotheses; it is also free of rater bias and inter-rater variability, and is highly efficient for large samples [22]. Given the advantages of VBM, several researchers have begun to focus on this modality for the examination of BPD [23][24][25]. The VBM studies of BPD conducted to date have revealed gray matter abnormalities in the frontal, temporal, parietal and limbic brain regions [26][27][28]. In the first study to employ VBM for this purpose, Rüsch et al. [25] found reduced gray matter volume (GMV) in the left amygdala in female patients with BPD compared with healthy controls (HCs). In a larger sample (60 female patients with BPD and 60 female HCs), Niedtfeld et al. [26] observed local differences in GMV in the amygdala, hippocampus, and fusiform and cingulate gyri. As volumetric abnormalities in the hippocampus are of major interest in the examination of BPD, many researchers have focused on this region. For instance, O'Neill et al. [29] found volume reductions in the right dorsolateral prefrontal cortex (DLPFC), right caudate and right hippocampus in patients with BPD compared with healthy subjects. Kuhlmann et al. [12] also found reduced GMV in the hippocampus and increased GMV in the hypothalamus in female patients with BPD compared with healthy participants, but no significant alteration in the amygdala or anterior cingulate cortex (ACC). To determine whether brain volume alterations exist in adolescents, Brunner et al. [30] compared adolescent patients with BPD with patients with other psychiatric disorders and with HCs. They found reduced GMV in the DLPFC and orbitofrontal cortex in patients with BPD compared with HCs, but no significant GMV difference between the two patient groups. Völlm et al.
[28] found GMV differences in the orbitofrontal cortex; middle frontal, precentral and postcentral gyri; temporal pole; and inferior and superior parietal cortices between male patients with BPD and male HCs. In a sample including males and females, Soloff et al. [22] observed significant bilateral reductions in gray matter concentration (GMC) in the ventral cingulate gyrus and several regions of the medial temporal lobe, including the hippocampus, amygdala, parahippocampal gyrus and uncus, in BPD patients compared with HCs (n = 34 each). In a further study, Soloff et al. [31] compared suicide attempters and non-attempters with BPD, as well as high- and low-lethality attempters, with HCs to identify neural circuits associated with suicidal behavior in BPD. They found significant differences in GMC in the insula, orbitofrontal gyrus and middle superior temporal cortex associated with suicidal behavior in male and female patients with BPD. Although all of the VBM studies above have documented evidence of gray matter abnormality in patients with BPD, some results have been inconsistent and even contradictory. For example, Labudda et al. [24] observed no volume difference in the whole brain between BPD patients and HCs in a recent VBM study. Also, both increased and decreased GMC in the amygdala have been reported in adult patients with BPD [22,27]. In our opinion, possible reasons for such differences in previous research include sample size (some studies included fewer than 10 patients with BPD [28]), sample heterogeneity (some samples have included patients with posttraumatic stress disorder (PTSD) or bipolar disorder (BD) as well as those with BPD [26,32]), and the use of different statistical significance levels (P = 0.001 uncorrected in several studies [25,32]; P < 0.05 with false discovery rate (FDR) or family-wise error (FWE) correction for multiple comparisons in others [12,26,28,30]). Previous VBM studies have rarely examined correlations between brain abnormalities and some important psychological characteristics of BPD, such as attachment and childhood trauma, especially the former. However, several neuroimaging studies have investigated the relationships between brain abnormalities and psychological measures, and have obtained meaningful results [33][34][35][36][37]. Using structural imaging techniques, Tebartz van Elst et al. [33] found reduced hippocampus and amygdala gray matter volumes in patients with BPD reporting traumatic attachment histories. In functional imaging studies of social attachment, researchers demonstrated that pictures of loved ones evoke cortical and subcortical responses, including those in the cingulate cortex, insula, basal ganglia and orbitofrontal cortex, in healthy subjects [34,35]. Skodol et al. [36] also found that BPD involves developmental or acquired brain dysfunction associated with early childhood traumatic experience. Meanwhile, associations between childhood maltreatment and gray matter volume reductions in the hippocampus [19], as well as in the insula and mesial frontal brain areas [37], were recently reported in large normal population samples. Examination of the relationships between brain volume and attachment, as well as childhood trauma, in patients with BPD using VBM, an efficient exploratory technique for the study of brain-behavior relationships, is thus essential.
Overall, further studies with large, homogeneous samples and more prudent analytical methods are necessary to clarify the differences in gray matter between patients with BPD and HCs. Thus, in this study, we used Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) [38], an improved VBM method that achieves inter-subject brain image registration more accurately, to assess regionally specific differences in GMV and GMC between patients with BPD and control subjects. We also investigated relationships between gray matter volumes and measures of attachment and childhood trauma in multiple regression models. To our knowledge, few BPD studies to date have used DARTEL. Based on the results of published studies, we hypothesized that VBM analyses would demonstrate gray matter abnormalities in the prefrontal, temporal and limbic areas in subjects with BPD compared with HCs. We also hypothesized that insecure attachment would correlate with gray matter volumes in the prefrontal gyrus and cingulate cortex, whereas childhood trauma would correlate with gray matter volumes in limbic areas.

Methods

Participants

A total of 34 right-handed young adults with BPD were recruited from outpatient clinics affiliated with the Second Xiangya Hospital of Central South University, Changsha, Hunan, China. Thirty-four right-handed volunteers were recruited for the HC group. The Ethics Committee of the Second Xiangya Hospital of Central South University approved the study. All subjects were made aware of the purpose of the study and provided written informed consent. Two well-trained psychiatrists made diagnoses of BPD independently based on the Structured Clinical Interview for Axis II Disorders (SCID-II) of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) [39]. The psychiatrists rated each symptom item as absent (0), subclinical (1) or clinically present (2) based on the SCID-II user's guide. Every patient was also assessed by two psychiatrists with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) to exclude axis I disorders [40]. Exclusion criteria were: past or current axis I diagnosis of schizophrenia, delusional (paranoid) disorder, schizoaffective disorder, BD or psychotic depression; physical disorder of known psychiatric consequence (e.g., hypothyroidism, seizure disorder, brain injury); and borderline mental retardation. Participants' medical records were reviewed when available to confirm fulfilment of the inclusion or exclusion criteria. None of the patients had received psychiatric treatment. HCs were recruited by advertisement from the surrounding community. To rule out any DSM-IV axis I or axis II disorder, two well-trained psychiatrists also interviewed the HCs with the SCID-I and SCID-II [39,40]. Control subjects were physically healthy individuals with no past or current history of any DSM-IV axis I or axis II disorder, no current medical problem, and no history of psychiatric disorders among first-degree relatives. Depression and anxiety severity were rated using the Chinese versions of the Center for Epidemiologic Studies Depression Scale [41] and the State-Trait Anxiety Inventory [42], respectively. The Attachment Style Questionnaire (ASQ) [43], which yields scores for five scales or dimensions, was used to assess attachment style. The confidence scale of the ASQ reflects secure attachment, and the other four scales (discomfort, approval, preoccupied and secondary) represent particular aspects of insecure attachment.
In this study, only the insecure attachment score was used; it ranges from 32 to 192. The Childhood Trauma Questionnaire (CTQ) [44] was used to assess childhood trauma, including sexual abuse, emotional abuse, emotional neglect, physical abuse and physical neglect. In this study, only the CTQ total score was used; it ranges from 28 to 140. All the psychological measures used in this study have shown good reliability and validity [43][44][45].

Magnetic resonance imaging acquisition

Magnetic resonance imaging (MRI) was performed using a Philips Ingenia scanner operating at 3.0 T. All participants were asked to remain quiet during scanning. Ear plugs and foam pads were used to minimize noise and head motion. MRI data were acquired using a single T1-weighted turbo-field echo sequence with the following acquisition parameters: repetition time = 6.7 ms, echo time = 3.1 ms, flip angle = 8°, field of view = 240 mm, slice thickness = 1.0 mm, acquisition matrix = 240 × 240 and voxel size = 1.0 × 1.0 × 1.0 mm³.

MRI data analysis

MR images were cropped and reoriented along the anterior-posterior commissure line with MRIcro (http://www.cabiatl.com/mricro/mricro/mricro.html). The resulting images (including brain, cerebellum and brainstem) were then processed using SPM 8 software (www.fil.ion.ucl.ac.uk/spm). For each image, the anterior commissure was manually set as the origin of the spatial coordinates using SPM 8. Images were registered to a group-specific common space following the DARTEL procedure in SPM 8 [35]. The procedure involves the following steps: (1) segmentation of gray matter tissue maps for each subject using the standard unified segmentation model of SPM 8; (2) alignment of gray matter tissue maps to a standard space through rigid-body transformations; (3) generation of gray matter DARTEL templates through a six-step iterative procedure, which involves the creation and refining of the templates and associated image warping fields (briefly, a first template is created by normalizing initial images to a standard template and averaging; subsequent templates are then defined iteratively by normalizing images to the template obtained at the previous iteration, and the final deformation is parameterized as a warping field); (4) non-linear warping of the individual gray matter images to the corresponding DARTEL template using the warping fields computed in the previous step; (5) modulation of the warped gray matter images to preserve the original GMVs and GMCs; and (6) smoothing of the modulated images with a Gaussian kernel of 8 × 8 × 8 mm full width at half-maximum to improve the normality of the data, thereby increasing the power of subsequent statistical analyses. As the templates generated by DARTEL represent the average shape of the subjects included in the analysis, and the reference space thus generally differs from Montreal Neurological Institute (MNI) space, normalized and modulated gray matter images were converted to MNI space prior to smoothing.

Statistical analysis

Demographic and clinical characteristics of participants in the two groups were compared using chi-squared tests for categorical variables and independent-sample t-tests for continuous variables. Statistical analyses were performed using the Statistical Package for the Social Sciences, version 17.0 (SPSS Inc., Chicago, IL, USA). Volumetric data from the BPD and HC groups were compared using t-tests.
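Before the thresholding and regression details that follow, the main computational steps just described can be illustrated outside SPM. The sketch below is a minimal Python illustration using nilearn, scipy and statsmodels rather than the authors' SPM/MATLAB pipeline; all file names and column names are hypothetical, and the single-region regression shown here merely mirrors the model that SPM fits at every voxel.

```python
# Minimal sketch, NOT the authors' pipeline. Assumes preprocessed,
# modulated gray matter maps and a per-subject table already exist.
import pandas as pd
import statsmodels.formula.api as smf
from nilearn import image
from scipy import stats

# Step (6) above: smooth a modulated, warped gray matter map with an
# isotropic 8 mm FWHM Gaussian kernel.
gm_map = image.load_img("mwp1_subject01.nii")      # hypothetical file name
smoothed = image.smooth_img(gm_map, fwhm=8)
smoothed.to_filename("smwp1_subject01.nii")

# Group comparison and covariate-adjusted regression on a regional GMV
# value extracted per subject (one row per subject).
df = pd.read_csv("gmv_by_subject.csv")  # hypothetical columns: group,
                                        # gmv_roi, asq_insecure, ctq_total, tiv
bpd = df.loc[df.group == "BPD", "gmv_roi"]
hc = df.loc[df.group == "HC", "gmv_roi"]
t_stat, p_group = stats.ttest_ind(bpd, hc)  # two-sample t-test between groups

# GMV regressed on insecure attachment, adjusted for CTQ total and TIV,
# analogous to the voxelwise multiple regression model described below.
fit = smf.ols("gmv_roi ~ asq_insecure + ctq_total + tiv", data=df).fit()
print(t_stat, p_group, fit.pvalues["asq_insecure"])
```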
Images were thresholded at an absolute level of 0.1, so that only voxels exceeding this threshold were included. Whole-brain analyses were generated separately for gray matter and corrected for multiple comparisons. An FDR threshold was determined from the observed P-value distribution and was hence adapted to the size of the effect in the data. An extent threshold for voxel clusters of at least k = 50 voxels and a significance threshold of P < 0.05 were chosen. Correlations between gray matter volume and psychological assessment (ASQ, CTQ) scores were assessed using a multiple regression model in SPM 8, adjusted for subjects' total intracranial volume (TIV). TIV was obtained by summing the volumes of gray matter, white matter and cerebrospinal fluid using the "get_totals" function (www.cs.ucl.ac.uk/staff/G.Ridgway/vbm/). In this study, we used only gray matter volume in the multiple regression model for correlation analyses [38]. Since insecure attachment and childhood trauma are probably highly related, in the multiple regression models we first conducted regression analyses for ASQ insecure attachment scores against GMV in the BPD group, adjusted for CTQ total scores and TIV. We then conducted regression analyses for CTQ total scores against GMV in the BPD group, adjusted for ASQ insecure attachment scores and TIV. The same analysis procedures were repeated in the HC group.

Results

Demographic and clinical characteristics

The demographic and clinical characteristics of all subjects are summarized in Table 1. Participants in the BPD and HC groups were well matched, with no significant differences in age, gender, education, depression symptomatology, anxiety severity or TIV (Table 1). ASQ insecure attachment scores and CTQ total scores were significantly higher in the BPD group than in the HC group.

Group differences in gray matter abnormalities

Compared with HCs, subjects with BPD showed significant bilateral GMV increases in the middle cingulate cortex (MCC)/posterior cingulate cortex (PCC)/precuneus (Table 2, Fig 1), especially on the right side. GMC did not differ significantly between the BPD and HC groups. In the reverse contrast (HC > BPD), we found no significant difference in GMV or GMC between groups.

Correlations between morphometric data and psychometric findings

Multiple regression models showed different patterns of GMV involvement associated with insecure attachment and childhood trauma. Significant negative correlations were present between ASQ insecure attachment scores and GMV in the precuneus/middle cingulate cortex (MCC) (Table 2) and middle occipital gyrus (Table 2) in the HC group; that is, HCs with more severe insecure attachment showed smaller volumes in the precuneus/MCC and middle occipital gyrus. No negative correlations between insecure attachment and GMV in any region were found in the BPD group. In addition, CTQ total scores were not correlated with GMV in any region in either group.

Discussion

The aim of this study was to investigate GMV and GMC abnormalities across the whole brain in a homogeneous sample of young adult patients with BPD compared with HCs, and to examine relationships between gray matter volume and insecure attachment as well as childhood trauma, using VBM-DARTEL. VBM revealed that young adults with BPD displayed significantly increased GMV bilaterally in the MCC/PCC/precuneus compared with healthy subjects.
GMV was correlated negatively with insecure attachment scores in the precuneus/MCC and middle occipital gyrus in HCs; that is, HCs with more severe insecure attachment showed smaller volumes in the precuneus/MCC and middle occipital gyrus.

Increased MCC volume, an indicator of deficits in emotional-cognitive interplay in BPD

Many researchers have found the MCC (called the dorsal anterior cingulate cortex in many studies), suggested to be the "cognitive" part of the ACC in emotional-cognitive interplay [46], to be hyperactivated in patients with BPD [47]. As a part of the ACC, the MCC is one of the cognitive and monitoring regions involved in salience detection, attention regulation and cognitive control [48]. Functions related to the anticipation of unpleasant stimuli [49], as well as the integration and control of emotional stimuli [50], have also been linked to the MCC. In the present study, we observed increased GMV in the MCC in subjects with BPD compared with HCs. In a previous study, Faymonville et al. [51] found that the junction of the anterior midcingulate cortex (aMCC) and posterior midcingulate cortex (pMCC) showed enhanced activation when the intensity and unpleasantness of noxious stimuli were increased. Vogt et al. [52] demonstrated that activity in the aMCC increases in association with high levels of fear. Using functional MRI, Buchheim et al. [10] found that activation in patients with BPD was located in the anterior midcingulate cortex (aMCC) while they told individual stories, consistent with a fluorodeoxyglucose-PET study demonstrating increased baseline ACC metabolism extending from the aMCC into the medial prefrontal cortex in patients with BPD compared with HCs [17]. Buchheim et al. [10] interpreted their finding of clearly more dorsal aMCC activation as an indicator of unsuccessful coping with emotional pain and a neural signature of pain and fear in these patients. Thus, involvement of the aMCC in pain and fear is plausible and consistent with previous studies. Moreover, increased activation of the dorsal anterior cingulate cortex in BPD has already been reported during response inhibition [53]. Given that increased gray matter in BPD (compared with control subjects) is likely to be related in a straightforward manner to patterns of increased functional activation, as the blood oxygen level-dependent signal appears to be related most closely to local field potentials, which arise primarily from dendritic membrane potentials in a local region [54], we can reasonably hypothesize that increased GMV in the MCC relates to increased activation in the MCC, and might be an indicator of deficits in emotional-cognitive interplay in young adult patients with BPD.

Abnormal PCC/precuneus size, an abnormality in the default mode circuit in BPD

As a central node of the so-called default mode circuit in the brain, the PCC (with the precuneus) has been identified as a major hub involved in non-task-based attention and readiness of response to external and internal environments [55]. The PCC functions as a "convergence node" within the default mode network (DMN), where information integration and the interaction between different subsystems are facilitated [56]. An abnormality in this region may signify an abnormality in the default mode circuit, which could disrupt mental processes, leading to distractibility and decreased attention to internal mental processes, as seen frequently in mania [55].
The precuneus has been shown to be activated during imagination of one's own actions or movements and during tasks requiring introspection, self-evaluation and reflection upon one's own personality and mental state [57]. Moreover, it is involved in visuospatial imagery, episodic memory retrieval, self-processing and consciousness (behavioral correlates of the precuneus) [57]. Along with the PCC, the precuneus has been implicated in self-referential processing and first-person perspective [50,57], features consistent with the tendency of patients with BPD to become emotionally overinvolved in interpersonal situations. In the present study, we found larger GMV in the PCC/precuneus in patients with BPD than in HCs. Although no previous VBM study has demonstrated volumetric abnormality in the PCC in BPD, brain activity in this region is more pronounced in patients with BPD than in controls when anticipating negative pictures [58]. Scherpiet et al. [58] suggested that heightened PCC activation during the anticipation of unspecific, non-self-related emotional stimuli fits well with the prominent self-reference in everyday life situations that characterizes BPD. This aspect may explain the stronger emotional engagement of patients with BPD compared with healthy participants [58]. PCC activation has also been found to be a correlate of anger processing [50] and of the processing of threat-related words [59]. Thus, increased GMV in the PCC might play the same role as heightened PCC activation in patients with BPD. With regard to the precuneus, in a previous VBM study of patients with BPD, Soloff et al. [22] found increased GMC in a very large area of the right cerebrum extending from the right superior frontal gyrus posteriorly and across the parietal lobe to the precuneus, similar to our results. Moreover, almost all studies have shown the spreading of activation to the precuneus during mental imagery [57]. Stress-related dissociative symptoms are known to occur in about 75% of individuals with BPD [36], and depersonalization is a major dissociative symptom according to the SCID for dissociative disorders [60]. Irle et al. [18] reported that a larger right precuneus was correlated positively with stronger depersonalization in subjects with BPD. Simeon et al. [61] reported that dissociative symptom severity in individuals with depersonalization disorder was correlated with increased metabolic activity of the precuneus. Dissociative states may be considered pathological conscious states, potentially involving precuneus recruitment. In addition, the PCC and precuneus are involved in the conscious processing of information and self-reflection [62]. Broyd et al. [63] suggested that enhanced connectivity of the precuneus/PCC reflects a disturbance of self-referential and emotional processing of pain in BPD. In a previous study, Logothetis and Wandell [54] demonstrated that gray matter increases might relate to patterns of increased functional activation in a straightforward manner. Therefore, our finding of increased GMV in the PCC/precuneus might suggest heightened PCC/precuneus activation in patients with BPD compared with HCs, which requires verification in further studies.

Structural changes associated with insecure attachment style and childhood trauma

The results of the multiple regression models in the present study showed different patterns of GMV involvement associated with insecure attachment style and childhood trauma, which require further analysis.
On the one hand, we found no correlations between CTQ total scores and GMV in any region in either group. Using a VBM method, Labudda et al. [24] likewise found no relationship between childhood maltreatment and brain volumes in BPD patients, which is consistent with our results in the BPD group. However, using VBM-DARTEL, Dannlowski et al. [37] found that morphometric analysis yielded reduced gray matter volumes in the hippocampus, insula, orbitofrontal cortex, anterior cingulate gyrus and caudate in HC subjects with high CTQ scores, which we did not find in our HC group. On the other hand, we found negative correlations between insecure attachment scores and GMV in the precuneus/MCC and middle occipital gyrus only in the HC group, not in the BPD group, suggesting that HCs with more severe insecure attachment show smaller volumes in the precuneus/MCC and middle occipital gyrus. In fact, some neuroimaging studies have investigated the neural correlates of adult insecure attachment by means of affective and/or attachment-related stimuli, and have demonstrated relationships between the dorsal anterior cingulate cortex (MCC) and insecure attachment [64,65]. In a recent study, Schneider-Hassloff et al. [66] measured brain activation using fMRI in healthy subjects to test how adult insecure attachment styles modulate the neural correlates of mentalizing. They found strong activation of the mentalizing network, including the bilateral precuneus and (anterior, middle and posterior) cingulate cortices. They also found that insecure attachment styles correlated with task-associated neural activity in the midcingulate cortex. Our results in HCs provide further evidence for the neural correlates of insecure attachment in healthy subjects. Based on the results of our multiple regression analyses and the relevant research results above, we may suppose that the MCC plays a central role in insecure attachment style in HCs. However, we did not find significant negative correlations between insecure attachment and GMV in the precuneus/MCC in the BPD group, which was not consistent with our hypothesis and might be due to the GMV increases in this group. Negative correlations between insecure attachment style and GMV in the precuneus/MCC were found in HCs, suggesting that HCs with more severe insecure attachment show smaller volumes in the precuneus/MCC, while the enlarged GMV in the MCC/precuneus in young adults with BPD might weaken these negative correlations. The results of our study are subject to some limitations. First, the analysis was based on a massive voxel-by-voxel univariate analysis, which exhibits lower accuracy during segmentation of some subcortical structures, such as the thalamus. Second, we described findings in terms of mean differences in indices (GMV and GMC) between the BPD and HC groups. These mean differences can be important when describing and identifying regions that play a role in BPD, but they do not provide information about individual cases or clinical applications in individual subjects. Third, because the young patients with BPD in our sample had no comorbid axis I/II disorders and may not be representative of other BPD populations [4], the generalizability of our results remains to be determined.

Conclusions

In summary, this study is the first to report GMV enlargement in the MCC and PCC in patients with BPD using VBM. Our preliminary finding of a larger precuneus in BPD is consistent with the results of previous MRI and PET studies.
Further studies are needed to replicate these findings and to verify the relationships between measures of attachment style and childhood trauma and volumetric abnormalities in patients with BPD.

Supporting Information

S1 Dataset. Data of demographic and clinical characteristics of all subjects and data of structural MRI. (XLS)
A CANDIDATE GENE ASSOCIATION STUDY OF FKBP5 AND CRHR1 POLYMORPHISMS IN RELATION TO WAR-RELATED POSTTRAUMATIC STRESS DISORDER

Nenad Jaksic, Emina Šabić Džananović, Branka Aukst Margetić, Dusko Rudan, Ana Cima Franc, Nada Bozina, Elma Ferić Bojić, Sabina Kučukalić, Alma Džubur Kulenović, Damir Marjanović, Esmina Avdibegović, Dragan Babić, Ferid Agani, Abdulah Kučukalić, Alma Bravo Mehmedbašić, Nermina Kravić, Mirnesa Muminović Umihanić, Osman Sinanović, Romana Babić, Marko Pavlović, Shpend Haxhibeqiri, Aferdita Goci Uka, Blerina Hoxha, Valdete Haxhibeqiri, Christiane Ziegler, Christiane Wolf, Bodo Warrings, Katharina Domschke, Jürgen Deckert & Miro Jakovljevic

Department of Psychiatry, University Hospital Center Zagreb, Zagreb, Croatia; Department of Psychiatry, University Clinical Center, Sarajevo, Bosnia and Herzegovina; Department of Psychiatry, University Hospital Center Sestre Milosrdnice, Zagreb, Croatia; School of Medicine, University of Zagreb, Zagreb, Croatia; Department of Laboratory Diagnostics, University Hospital Center Zagreb, Zagreb, Croatia; Department of Genetics and Bioengineering, International Burch University, Sarajevo, Bosnia and Herzegovina; Department of Psychiatry, University Clinical Center of Tuzla, Tuzla, Bosnia and Herzegovina; Department of Psychiatry, University Clinical Center of Mostar, Mostar, Bosnia and Herzegovina; Faculty of Medicine, University Hasan Prishtina, Prishtina, Kosovo; Community Health Center Zivinice, Zivinice, Bosnia and Herzegovina; Department of Neurology, University Clinical Center of Tuzla, Tuzla, Bosnia and Herzegovina; Department of Psychiatry, University Clinical Center of Kosovo, Prishtina, Kosovo; Department of Psychiatry and Psychotherapy, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Germany; Department of Psychiatry, Psychosomatics and Psychotherapy, University Hospital Würzburg, Würzburg, Germany

INTRODUCTION

Posttraumatic stress disorder (PTSD) is a highly frequent and disabling psychiatric condition among war-affected populations. Anxiety disorders are significantly correlated with levels of stressful experiences. Prior research has shown that these disorders are characterised by dysfunction of the mineralocorticoid (MR) and glucocorticoid receptors (GR), which are the main effector receptors of the hypothalamic-pituitary-adrenal (HPA) axis. Corticotropin-releasing hormone receptor 1 (CRHR1) is one of the mediators of the HPA axis, as the binding of corticotropin-releasing hormone (CRH) to this receptor subsequently leads to the release of cortisol (Holsboer & Ising 2008, Refojo & Holsboer 2009). The CRHR1 gene has been subject to various studies implicating its potential role as a candidate gene for mood and anxiety disorders, and several of its polymorphisms have been investigated concerning their influence on PTSD risk (Amstadter et al. 2011, White et al. 2013). The minor allele of the CRHR1 SNP rs17689918 increases the risk of developing panic disorder (PD) in females and is suggested to result in reduced flight behaviour, predominant generalization of fear, and increased anxious apprehension and anxiety sensitivity via reduced gene expression of CRHR1 (Weber et al. 2016). On the other hand, the functions of the MR and GR are regulated by several chaperone proteins.
Genetic variation within one of these proteins, FK506 binding protein 5 (FKBP5), which regulates the sensitivity of the glucocorticoid receptor, is considered to be one of the main risk factors for the development of stress-related disorders, including PTSD (Zannas et al. 2016). By investigating the influence of the FKBP5 polymorphism rs1360780 on the corticosteroid receptor, which mediates the regulation of the HPA axis, researchers have concluded that individuals homozygous for the minor allele show incomplete normalisation of stress-induced cortisol secretion. Homozygous carriers of the rs1360780 minor allele manifested insufficient recovery of cortisol, associated with increased levels of anxiety, after being exposed to psycho-social stress. This suggests that individuals who carry the risk variants of these polymorphisms are at risk of chronically elevated cortisol levels when exposed to repeated stress. While the research mentioned above dealt with recovery from psycho-social stress by investigating FKBP5 polymorphisms in healthy controls, some of the more recent research has explored the relation between FKBP5 polymorphisms and lifetime PTSD symptoms. Dysfunction of the HPA axis is also a feature of PTSD (Sherin & Nemeroff 2011). It is precisely the genes involved in HPA axis functioning that are considered especially apt for investigating interactions between genes and environment, which figure prominently in the development of PTSD and its symptoms. Genes that regulate the activity of the GR, such as the gene encoding the co-chaperone FK506 binding protein 5 (FKBP5), have been described as influencing the impact of childhood trauma through alterations within the HPA axis (Watkins et al. 2016). Single nucleotide polymorphisms (SNPs) in the FKBP5 gene are linked to higher FKBP5 protein expression, which leads to GR resistance and dysfunction of the negative feedback loop (Binder 2009). Such SNPs might, therefore, be associated with a slower return of stress-induced cortisol levels to baseline, which can potentially increase the risk of PTSD development (Binder 2009). Specific SNPs (e.g., rs3800373 and rs1360780) in FKBP5 are thus conceptualized as risk factors for PTSD development (Watkins et al. 2016). In addition, specific FKBP5 polymorphisms were also found to be associated with peritraumatic dissociation (Koenen et al. 2005), a strong risk factor for subsequent PTSD development, as well as with the effectiveness of psychotherapy for PTSD (Wilker et al. 2014). So far, prior research has been performed in African-American (Binder et al. 2008), South Asian (Kohrt et al. 2015) and mixed Caucasian/African-American populations (Watkins et al. 2016, Young et al. 2018). Studies conducted in European populations, particularly among war-affected individuals, are very scarce. Our aim was to examine the association of the FKBP5 SNP rs1360780 and the CRHR1 SNP rs17689918 with PTSD in persons who were involved in the Balkan wars during the 1990s. These populations have suffered numerous stressful and traumatic experiences, and the prevalence of mental disorders, especially PTSD, is particularly high among them (Bogic et al. 2012). As specific genes can promote the occurrence of certain psychiatric disorders, including war-related PTSD, the findings of this study may further extend our understanding of the complex pathophysiology of PTSD within the framework of the stress-diathesis or vulnerability-resilience model (Jakovljevic et al. 2012a, 2017).
SUBJECTS AND METHODS

The general inclusion criteria were: DSM-IV current or lifetime PTSD, or no PTSD; participants had to be at least 16 years old at the time of traumatization and currently no older than 65 years. The exclusion criteria were: the presence of organic depression, epilepsy, psychotic symptoms, addiction (except smoking), intellectual disability, oncological disorders, valproic acid use, and first- or second-degree relation to an already recruited person. More detailed information regarding recruitment, diagnostic assessment, inclusion and exclusion criteria, as well as sample size and gender distribution, is given elsewhere (Dzubur-Kulenovic et al. 2016). Ethical approvals were obtained at the participating clinical centers between 2011 and 2013 on the basis of local translations of an information and consent form designed by the Sarajevo center. Participants were thus informed and gave written informed consent according to the principles of the Declaration of Helsinki (WMA 2013).

Psychometric Instruments

The presence or absence of PTSD symptoms at the screening stage was assessed using a structured clinical interview, the Mini International Neuropsychiatric Interview (M.I.N.I.). The Clinician-Administered PTSD Scale (CAPS) (Blake et al. 1996) was used to make a categorical PTSD diagnosis (current or remitted PTSD) and to assess the severity of PTSD symptoms. Interviews were performed by trained medical personnel (psychiatrists, clinical psychologists or psychiatric residents). For the assessment of general psychiatric symptoms we used a self-report measure, the Brief Symptom Inventory (BSI) (Derogatis & Melisaratos 1983). The internal reliability of the BSI for the entire sample was high (Cronbach's α=0.987).

Molecular Analyses

For genotyping, genomic DNA was isolated from frozen venous blood using the FlexiGene DNA Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions and stored at -80°C at the Laboratory of the Department of Psychiatry, Psychosomatics and Psychotherapy, University of Würzburg, Germany. FKBP5 rs1360780 genotypes were determined using a custom-designed KASP genotyping assay (LGC, Berlin, Germany). The PCR reaction, including an end-point fluorescent read-out, was performed according to the manufacturer's instructions in a CFX384 Touch Cycler (Biorad, Munich, Germany), and genotype analysis was performed using the CFX Manager software. CRHR1 rs17689918 genotypes were determined by restriction digestion of PCR products for 3 h at 37°C, which results in differentially sized fragments representing the respective genotypes. The fragments were separated on a 4% agarose gel by electrophoresis and visualized with ethidium bromide. Fragment lengths and resulting genotypes were determined by two independent investigators blinded for diagnosis.

Statistical Analysis

Statistics were performed using PLINK 1.9. Both variants were polymorphic, with a minor allele frequency (MAF) higher than 10%; both reached a call rate of 98%, and genotype distributions in controls did not deviate from Hardy-Weinberg equilibrium (P>0.01). Logistic regression was used for the case-control analyses, and linear regressions were carried out for the CAPS and BSI analyses, separately for the current and remitted PTSD patients. The following models were tested for all phenotypes: the additive allelic and dominant (based on the minor allele) models, as well as the genotypic model. The significance level was Bonferroni adjusted for the 23 variants analyzed in total in the entire project (α=0.002).
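The analyses above were run in PLINK 1.9. As a hedged illustration of what the dominant-model case-control test and the Hardy-Weinberg check amount to, the following is a minimal Python sketch; it is not the authors' code. Genotypes are assumed to be coded as minor-allele counts (0, 1, 2), and all file and column names are hypothetical.

```python
# Minimal sketch of a dominant-model case-control test and an HWE check.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def hardy_weinberg_p(genotypes):
    """Chi-square HWE test (1 df) from genotypes coded as minor-allele counts."""
    g = np.asarray(genotypes, dtype=int)
    n = len(g)
    counts = np.bincount(g, minlength=3)           # [hom major, het, hom minor]
    q = (counts[1] + 2 * counts[2]) / (2 * n)      # minor allele frequency
    p = 1 - q
    expected = np.array([p * p, 2 * p * q, q * q]) * n
    stat = ((counts - expected) ** 2 / expected).sum()
    return chi2.sf(stat, df=1)

df = pd.read_csv("genotypes.csv")  # hypothetical columns: rs1360780 (0/1/2), ptsd (0/1)

# Dominant model based on the minor allele: carriers (1 or 2 copies) vs. non-carriers.
carrier = (df["rs1360780"] >= 1).astype(int).rename("carrier")
X = sm.add_constant(carrier)
fit = sm.Logit(df["ptsd"], X).fit(disp=0)
p_dom = fit.pvalues["carrier"]

# Bonferroni threshold used in the project: 0.05 / 23 variants ≈ 0.002.
print(f"dominant-model P={p_dom:.4f}; significant after correction: {p_dom < 0.05 / 23}")
print(f"HWE P in controls: {hardy_weinberg_p(df.loc[df.ptsd == 0, 'rs1360780']):.3f}")
```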
RESULTS

In order to characterize the roles of FKBP5 and CRHR1 in PTSD, the two well-investigated SNPs rs1360780 and rs17689918 were subjected to a case-control analysis in altogether 719 participants. The influence of genotypes on the dimensional CAPS and BSI questionnaires, both linked to PTSD, was determined in all cases with current PTSD symptoms and additionally in those with diagnosed lifetime PTSD.

FK506-binding protein 5

Three case-control analyses for FKBP5 rs1360780 were carried out to test the allelic, dominant and genotypic models, respectively. The logistic regression analysis identified a nominally significant association for the dominant model (P=0.0423; Table 1), suggesting that individuals who are homozygous for the C allele are at a higher risk of developing PTSD. However, following the Bonferroni adjustment for multiple testing, this result became non-significant. The genotypic model was not even nominally significant (P=0.0819; Table 1), and the allelic model showed no significance at all (P>0.1). Detailed allele and genotype distributions for cases and controls are given in Table 1. Linear regression analyses were performed with genotypes predicting the total CAPS and BSI scores, in additive allelic, dominant and genotypic models, for current and remitted PTSD separately. The regression analyses for the CAPS total scores reached significance in none of the calculated models, neither in the current nor in the remitted PTSD group (all p>0.05; Table 1). Only the allelic model predicting the BSI total score was nominally significant, and only in the remitted PTSD group (P=0.0493), not the current group (p>0.1), suggesting an association between the major C allele and higher BSI scores (Table 1, Figure 1). However, it too became non-significant after Bonferroni adjustment. The regression analyses for the CAPS total scores were not even nominally significant in either group of subjects (p>0.09).

Corticotropin Releasing Hormone Receptor 1

No significant associations were identified within the additive allelic, genotypic and dominant models for rs17689918, for either dimensional or categorical phenotypes (all P>0.1).

DISCUSSION

This is the first study to assess the association of the FKBP5 polymorphism rs1360780 with PTSD in a European population of war-affected individuals. We recruited participants who were exposed to war activities in the states of the former Yugoslavia. In this sample, the FKBP5 polymorphism rs1360780 was not associated with PTSD or with general psychiatric symptoms. It seems the assumed FKBP5 risk alleles described in previous studies do not always confer higher risk (Zannas et al. 2016). For example, the G allele of rs4606 within the regulator of G-protein signaling 2 gene, although not examined in the current study, has been associated with posttraumatic growth following exposure to Hurricane Katrina (Dunn et al. 2014). FKBP5 rs1360780 has also been linked to less intense depressive symptoms in the context of early institutionalization but low current stress (VanZomeren-Dohm et al. 2015), and to less lifetime PTSD in the absence of child abuse in a highly traumatized cohort (Klengel et al. 2013). In addition, the C allele was previously described as protective, yet in the study of VanZomeren-Dohm et al. (2015), boys with the CC genotype had higher rates of depressive symptoms than girls with the CC genotype in the context of elevated victimization.
These results are actually in line with our nominally significant finding of an association of PTSD with higher BSI scores in remitted patients carrying the C allele. Additionally, our results were not able to support previous findings (Amstadter et al. 2011, White et al. 2013) suggesting CRHR1 as a mediator of PTSD risk. This may be due to the use of a study population with a different genetic background from those analysed in earlier research. This study has several limitations that need to be mentioned. The patient cohort is fairly small, in particular when it is separated into remitted/lifetime PTSD and current PTSD, which may have reduced statistical power. Studies in consortia such as the PGC are therefore necessary. Also, trauma type differed significantly among individuals in this study, as the characteristics of the war situation varied between countries and regions. This may have affected epigenetic processes, which possibly confounded the genetic analyses. Another issue is comorbidity with neuropsychiatric disorders, which does not allow conclusions regarding the specificity of the present findings to PTSD. We may thus presume that factors related to genetic and cultural background explain why FKBP5 alleles predicted PTSD symptoms in an urban African American cohort (Binder et al. 2008) and a mixed cohort (Watkins et al. 2016), but not in a rural South Asian cohort (Kohrt et al. 2015) or in individuals from the Balkan region states in the current study. In addition, numerous other factors, such as post-deployment social support, unemployment and stressful life events, are known to predict PTSD in war-affected populations (Jakovljevic et al. 2012a,b, Possemato et al. 2014). Findings of studies linking candidate vulnerability genes (or 'risk alleles') directly to specific psychopathological conditions have proven difficult to replicate. Candidate genes are usually chosen as research targets based on prior knowledge of the genes' biological and functional impact on the symptoms under study. As previously discussed, these heterogeneous outcomes raise the possibility that what such alleles confer is increased plasticity, with opposite outcomes then being possible depending on the presence of positive or negative environmental influences (Belsky & Hartman 2014, Belsky et al. 2009). Yet the exact role of the timing, type and duration of these genotype-dependent environmental challenges, and the molecular and cellular mechanisms underlying such differential outcomes, remain to be elucidated by future studies. Indeed, a recent systematic review documented a significant influence of the FKBP5 T allele (SNP rs1360780) on the development of PTSD in adulthood, but only for individuals who were exposed to early-life trauma (Wang et al. 2018).

CONCLUSION

In contrast to the findings of some previous studies, we found only a nominally significant association of FKBP5 rs1360780, and no association of CRHR1 rs17689918, with PTSD susceptibility among individuals affected by the Balkan wars during the 1990s. The nominally significant finding did not withstand correction for multiple testing. To elucidate these genes' real resilience/vulnerability potential, interactions with other environmental influences should be taken into account in future studies.

Acknowledgements: We thank all the participants and their families, without whose idealistic and enthusiastic support the study would not have been possible.
We would also like to thank the team at Sarajevo for technical support, Maja Brkić and Sandra Zornić for their assistance in data collection, and, at Würzburg, Carola Gagel for technical assistance with DNA extraction. Thanks are highly deserved by, and gratefully extended to, Peter Riederer as spiritus rector who brought the consortium together. The study was funded by the DAAD program Stability Pact for South Eastern Europe and supported by the DFG-funded RTG 1253 (speaker Pauli) as well as the DFG-funded CRC-TRR58 (projects C02 Domschke, Deckert, and Z02 Deckert, Domschke).
Care staff and family member perspectives on quality of life in people with very severe dementia in long-term care: a cross-sectional study Background Little is known about the quality of life of people with very severe dementia in long-term care settings, and more information is needed about the properties of quality of life measures aimed at this group. In this study we explored the profiles of quality of life generated through proxy ratings by care staff and family members using the Quality of Life in Late-stage Dementia (QUALID) scale, examined factors associated with these ratings, and further investigated the psychometric properties of the QUALID. Methods Proxy ratings of quality of life using the QUALID were obtained for 105 residents with very severe dementia, categorised as meeting criteria for Functional Assessment Staging (FAST) stages 6 or 7, from members of care staff (n = 105) and family members (n = 73). A range of resident and staff factors were also assessed. Results Care staff and family member ratings were similar but were associated with different factors. Care staff ratings were significantly predicted by resident mood and awareness/responsiveness. Family member ratings were significantly predicted by use of antipsychotic medication. Factor analysis of QUALID scores suggested a two-factor solution for both care staff ratings and family member ratings. Conclusions The findings offer novel evidence about predictors of care staff proxy ratings of quality of life and demonstrate that commonly-assessed resident variables explain little of the variability in family members' proxy ratings. The findings provide further information about the psychometric properties of the QUALID, and support the applicability of the QUALID as a means of examining quality of life in very severe dementia. Little is known about the quality of life of people with very severe dementia [1], and research evidence in this area is lacking [2]. An important goal of long-term care provision for this group should be to promote quality of life [2], and in order to achieve this goal it is essential to understand as much as possible about the factors that may affect quality of life. However, the majority of quality of life research has focused on people with less severe dementia, and especially those who can provide self-ratings. Comparison of self- and proxy ratings has shown that proxy ratings are typically more negative than self-ratings [3][4][5][6][7]. Estimations of quality of life are subjective, and self-rating is the gold standard, but where dementia has progressed to the extent that verbal communication is very limited or completely absent, and self-rating is not possible, quality of life can only be assessed through direct observation or proxy rating [8]. In this situation it is important to be able to employ suitable measures and to be aware of factors that may affect or bias the proxy ratings made using these measures. A recent systematic review identified 5 dementia-specific measures of quality of life developed for use with people who have severe dementia, but noted a lack of studies using these measures that can contribute information about their psychometric properties and relative merits [8]. One of the measures considered worthy of further investigation was the Quality of Life in Late-stage Dementia scale (QUALID) [9].
The QUALID is based on observable behaviours and contains 11 items asking respondents to rate the frequency with which they have observed these behaviours in the person with dementia over the previous week. In the original development study, the scale showed good inter-rater and test-retest reliability, and principal components analysis (PCA) identified a single factor. Studies have further examined the properties and applicability of the QUALID in Sweden [10], Spain [11], and Norway [12], supporting the reliability and validity of this measure, although with some variability in identified factor structure. Furthermore, the QUALID appears sensitive to change resulting from both pharmacological and non-pharmacological interventions, and hence has the potential to serve as an outcome measure [13][14][15]. Additional information about this measure and its utility would therefore be valuable. All studies involving the QUALID to date have been based on proxy ratings of resident quality of life made by members of care staff in residential and nursing home settings. However, the need to incorporate consideration of proxy ratings by family members when assessing quality of life in people with severe dementia has been noted [8]. A number of quality of life studies using other measures with people with dementia of varying degrees of severity in both community and residential settings have included both family and care staff ratings, but while many of these studies have examined which factors are associated with or predictive of care staff ratings, only a few have examined which factors are associated with, or predictive of, family member ratings. In most cases the focus has been on comparing proxy ratings with self-ratings by the person with dementia (e.g. [4,7]) rather than on differences between care staff and family member ratings. However, where proxy rating is the only means of assessing quality of life, the question of who provides the rating and what this implies may be important [16]. Ratings by family members and care staff are typically correlated (e.g. [4,6]), but they are far from identical [2], and can be differentially sensitive to effects of intervention [15]. It has been suggested that each different perspective on resident quality of life is 'relatively independent and somewhat unique' ([17] p. 27) and that resident quality of life should be explored from multiple perspectives. In order to extend the available evidence, it would be helpful to examine family members' proxy ratings alongside those of care staff. In this study we explore the profiles of quality of life generated through proxy ratings by care staff and family members of people with very severe dementia using the QUALID, and aim to identify what factors influence these ratings for each group. In so doing we examine further the properties of the QUALID and its applicability in a severely-impaired United Kingdom (UK) sample. The following specific questions will be addressed: 1. What is the profile of family and care staff ratings of resident quality of life using the QUALID, and which variables are associated with, or predictive of, each of these two sets of quality of life ratings? 2. What are the psychometric properties of the QUALID scale for two groups of respondents, care staff and family carers?
Method Design This paper reports a cross-sectional examination of factors associated with family member and care staff ratings of the quality of life of people with severe dementia, and examines the psychometric properties of the QUALID measure. The analysis uses data from the AwareCare study [18], including data from the initial, measure development phase [19] and data from the baseline assessments conducted for the randomised controlled trial of the awareness-based intervention [15]. The relevant National Health Service and University ethics committees gave approval for each phase of the AwareCare study. As participants were unable to provide informed consent on their own behalf, in each case a relative was approached by the research team to act as a personal consultee as outlined in the provisions of the UK Mental Capacity Act (2005) [20], advising on whether or not the resident should be included in the study. In one case where no personal consultee was available, a nominated (non-kin) consultee was identified instead. Participants The participants were residents with severe dementia drawn from 12 care homes in North Wales, family members of these residents, and members of the care staff in the 12 care homes (see Figure 1). The study included 105 residents with severe dementia participating in AwareCare, 40 in the measure development phase and 65 in the randomised controlled trial (RCT). Seventy-three family members of these people with dementia provided proxy quality of life ratings. Also included were 105 members of care staff, of whom 40 rated resident quality of life and behaviour in the measure development phase, 64 contributed proxy quality of life ratings, ratings of resident behaviour, and other personal data in the RCT phase, and 1 contributed quality of life ratings only in the RCT phase. Inclusion criteria for residents were that they should meet criteria for Functional Assessment Staging (FAST) [21] stages 6 or 7, and should have no, or only very limited, verbal communication, indicated by an inability to clearly verbally communicate needs and wishes, with speech either very circumscribed and limited to single words or phrases or completely absent. Potential participants were identified, and family members notified, by care home managers. Inclusion criteria for care staff in the RCT phase were that the staff member should be a permanent employee, working 15 hours or more per week, who had been in post for at least two months, and should have good knowledge of the resident for whom proxy ratings were provided. Family carers were eligible to provide proxy quality of life ratings if they visited their relatives at least weekly; 76 family members met this criterion and 73 of these agreed to provide ratings. The 12 care homes were all privately owned; 11 offered both residential and nursing care and 1 offered only residential care. Eight homes specialised in dementia care while 4 offered care to older people with and without dementia. Measures Background information was collected covering resident personal details, the relationship of the family carer, and the frequency of visits by the family carer. Quality of Life in Late-stage Dementia Scale (QUALID) The QUALID [9] is an 11-item scale completed by a proxy with reference to the person's quality of life in the preceding week. Perceived frequency of occurrence of the 11 behaviours or responses is rated on a 5-point scale.
For 5 positively-stated items (smiles, enjoys eating, enjoys touching/being touched, enjoys interacting with others, appears calm and comfortable) 1 indicates the highest frequency and 5 the lowest, and for the remaining 6 negatively-stated items (appears sad, cries, facial expression of discomfort, appears physically uncomfortable, verbalisation suggests discomfort, is irritable and aggressive) 1 indicates the lowest frequency and 5 the highest. Thus, lower scores on this measure are indicative of better perceived quality of life. Independent proxy ratings of resident quality of life were made by a family member (where available and willing) and by a member of care staff. Behavioural Assessment Scale of Later Life (BASOLL) The BASOLL [22] is a reliable and valid measure of self-care ability, functioning and behaviour. This measure, rated by care staff, incorporates sub-scales assessing mood (9 items), self-care (10 items), sensory abilities (2 items), memory and orientation (9 items), mobility (1 item) and challenging behaviour (5 items). Each item is rated on a 0–3 scale where a higher score indicates greater severity of problems. Therefore, lower total scores for each subscale indicate fewer problems in that domain. Guy's Advanced Dementia Schedule (GADS) The GADS [23], administered by a researcher, is a valid and reliable structured assessment of cognitive ability for people with severe dementia that involves measuring responses (reading, naming, using and taking) to familiar objects (such as a comb and a cup). Possible scores range from 0–40 with higher scores indicating greater cognitive ability. Up to three prompts may be given in relation to each item and in the present study three prompts were required in almost all cases. Positive Response Scale (PRS) The PRS [24] is an observational scale which focuses on the person's affective response to the environment using 10 behavioural categories. Observations were conducted by a researcher, using a time-sampling schedule of one minute in every five over two 30-minute sessions, giving a total of 12 minutes of observation. This yielded a score representing the sum of the number of behaviours (out of the 10 possible categories) that occurred during each minute of the 12-minute observation period. This was divided by the total number of possible behaviours that could have been recorded (i.e. 10 behaviours × 12 minutes = 120) and the result was multiplied by 100 to give a percentage score. Higher percentage scores indicate greater frequency of observed behaviours. AwareCare The AwareCare observational measure [19] lists 10 events that either occur spontaneously in the environment (7 types, e.g. resident is touched, loud noise) or are introduced by the observer (3 types, e.g. resident is addressed by name). It has 14 response categories grouped into the following sub-categories: eyes (e.g. makes eye contact), face (e.g. smiles), head (e.g. nods/shakes head), arm (e.g. reaches), body (e.g. moves towards) and sounds (e.g. shouts or moans). General Health Questionnaire (GHQ-12) The GHQ [25] is a brief, well-validated, 12-item measure of psychological distress. Items are rated on a 0–3 scale and higher scores indicate higher levels of distress. Care staff used this scale to rate their general level of psychological distress.
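To make the scoring arithmetic concrete, the calculations for the PRS percentage score and the QUALID total can be sketched as follows (shown in R; the numbers are invented for illustration and are not study data):

```r
# Minimal sketch (invented numbers, not study data) of the scoring
# arithmetic for the PRS and the QUALID as described in the text.

# PRS: number of behaviours (out of 10 categories) observed in each of the
# 12 sampled minutes across the two 30-minute sessions.
prs_minutes <- c(3, 2, 4, 1, 0, 2, 3, 5, 2, 1, 4, 3)
prs_percent <- sum(prs_minutes) / (10 * 12) * 100  # 120 possible behaviour-minutes
prs_percent  # higher = greater frequency of observed behaviours

# QUALID: 11 items rated 1-5; the reverse keying of the 5 positively stated
# items is built into the response anchors (1 = highest frequency), so the
# total is simply the sum of the item ratings (range 11-55), with lower
# totals indicating better perceived quality of life.
qualid_items <- c(2, 1, 3, 2, 1, 2, 2, 4, 1, 2, 2)  # hypothetical ratings
sum(qualid_items)
```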
Maslach Burnout Inventory (MBI) The MBI [26] is a 25-item self-report questionnaire comprising subscales for emotional exhaustion (9 items), depersonalisation (5 items) and personal accomplishment (8 items) and three additional optional items reflecting involvement. Items are rated for both frequency and intensity, with higher scores indicating a greater sense of emotional exhaustion, depersonalisation or personal accomplishment. Care staff used this scale to rate aspects of their own well-being in relation to their work. Scores for the three subscales are reported here. Approaches to Dementia Questionnaire (ADQ) The ADQ [27] is a 19-item reliable and valid scale with two sub-scales assessing person-centred (11 items) and hopeful (8 items) attitudes to people with dementia. Higher scores indicate more positive attitudes towards people with dementia. Care staff completed this scale to provide an indication of their attitudes towards people with dementia. Procedure For each resident, wherever possible, a family member was interviewed to provide background information about the resident and proxy ratings of quality of life (QUALID), and in each case a member of the care staff was interviewed separately to provide ratings of behaviour (BASOLL) and proxy ratings of quality of life (QUALID). Each member of the care staff gave ratings for one resident only, and in each case this was a resident who was well-known to the staff member. Each resident was then observed by a researcher for two 30-minute periods using the PRS to provide a profile of current well-being, and was assessed with the GADS to provide a profile of cognitive functioning. Care staff participating in the RCT phase also completed the GHQ-12, MBI and ADQ; staff personal information was not recorded for the 40 residents in the initial measure development phase. Following these initial assessments, participants were observed using the AwareCare measure. In the measure development phase, these observations were conducted by the researchers during five 30-minute sessions with each of the 40 residents. In the RCT phase, following appropriate training, care staff in the four homes randomised to the intervention condition conducted the observations. Each staff member was asked to observe several residents according to a pre-planned schedule involving six 10-minute observations each week for six weeks, and observations were available for 32 residents. The mean number of observations obtained for each resident was 22 (s.d. 12.12, range 1–54). The mean number of observations conducted by each staff member across all assigned residents was 23 (s.d. 10, range 3–36), and the mean number of observations conducted by each staff member for individual assigned residents was 4.17 (s.d. 1.57, range 1–9). The AwareCare measure score was not available for the 33 residents in the four homes allocated to the control condition in the RCT. Data analysis Quality of life ratings made by family members and care staff were compared using a paired-sample t-test. Kurtosis was within acceptable limits for the two sets of ratings, but skewness was slightly raised for family carers only (skewness = 1.203, se 0.281). Therefore a Wilcoxon test, the equivalent non-parametric test, was also applied; this gave an identical result to the paired-sample t-test, and hence the latter is reported below. Pearson's r was calculated to determine the extent of correlation between the two sets of ratings.
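As an illustration only (the study's analyses were run in IBM SPSS), the comparison of the two sets of proxy ratings described above can be sketched in base R; the simulated ratings below are hypothetical stand-ins for the study data:

```r
set.seed(1)
# Hypothetical paired proxy ratings for 73 residents (illustrative values only).
ratings <- data.frame(
  qualid_family = round(rnorm(73, mean = 21.7, sd = 6.7)),
  qualid_staff  = round(rnorm(73, mean = 22.6, sd = 6.6))
)

# Paired-samples t-test comparing the two sets of ratings.
t.test(ratings$qualid_staff, ratings$qualid_family, paired = TRUE)

# Non-parametric equivalent, used as a check when skewness is raised.
wilcox.test(ratings$qualid_staff, ratings$qualid_family, paired = TRUE)

# Pearson's r between family and care staff ratings.
cor.test(ratings$qualid_family, ratings$qualid_staff, method = "pearson")
```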
Analysis of variance was used to check for differences in ratings according to care home or resident gender. The relationships of resident and staff measures with family carer and staff member ratings of quality of life were examined using correlation and regression analysis. Pearson's r was calculated for continuous variables, and the point-biserial correlation for categorical variables. All regression analyses reported here used the stepwise method and were conducted with the default entry probability of p < 0.05 and the default removal probability of p > 0.10. Nominal and ordinal variables were dichotomised and coded 0/1 for inclusion in the correlation and regression analyses; the variable name given in the tables refers to the group coded '1'. For family carer ratings, the regression analysis was initially run including all the resident personal and questionnaire variables, and was then re-run excluding the AwareCare and GADS measures to increase the sample size. For care staff ratings, in view of the sample size, two separate regression analyses were conducted. The first used the resident personal information and questionnaire ratings as predictors, and the second used the care staff information and ratings as predictors. The significant predictor variables emerging from these two analyses were then combined in a further stepwise regression analysis to examine key predictors of care staff ratings of resident quality of life. Exploratory factor analysis to examine the psychometric properties of the QUALID when rated by family carers and care staff was undertaken using Principal Components Analysis (PCA). In each case, item-scale reliability was examined using corrected item-total correlations, and Cronbach's alpha was calculated prior to conducting the principal components analysis. Suitability of the data for PCA was determined with the Kaiser-Meyer-Olkin measure of sampling adequacy (0.702 for care staff and 0.686 for family carers), and Bartlett's Test of Sphericity (approx. χ²(55) = 251.916, p < .001 for care staff and approx. χ²(55) = 223.912, p < .001 for family carers). The Kaiser-Meyer-Olkin statistic indicates the proportion of variance that may be caused by underlying factors; values above 0.5 suggest that the data may be amenable to factor analysis. Bartlett's test of sphericity tests whether the correlation matrix is an identity matrix indicating that variables are unrelated and hence unsuitable for structure detection; significance levels < .05 indicate that a factor analysis may be appropriate. Therefore, although the family carer sample was relatively small, these indices gave no cause for concern regarding the suitability of the data for factor analysis. PCA was conducted using both the varimax with Kaiser normalization and the oblimin rotation methods. The two methods produced almost identical results, and the results from the varimax rotation method are reported here. Factors were initially selected based on an a priori criterion of eigenvalue > 1 and the final selection was determined following scrutiny of the scree plot and identification of inflection points. Structural validity of the identified factors was examined using Cronbach's alpha (α). All analyses were conducted in IBM SPSS Statistics v20. Results Descriptive details for the 105 residents and for the 64 members of care staff who contributed in the RCT phase are provided in Table 1. The majority of residents were female and most had FAST scores in stage 7.
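Again purely for illustration, the regression and factor-analytic steps described above have close analogues in R. In the sketch below, the psych package supplies the KMO statistic, Bartlett's test, varimax-rotated PCA and Cronbach's alpha, while MASS::stepAIC stands in for SPSS's p-value-based stepwise selection, which base R does not reproduce exactly; all data and column names are simulated and hypothetical:

```r
library(psych)  # KMO(), cortest.bartlett(), principal(), alpha()
library(MASS)   # stepAIC() as a rough stand-in for SPSS stepwise selection

set.seed(2)
n <- 105
# Hypothetical resident-level data (illustrative only, not the study data).
resident <- data.frame(
  basoll_mood    = rpois(n, 6),
  awarecare_ri   = rnorm(n, 50, 15),
  benzodiazepine = rbinom(n, 1, 0.3)
)
resident$qualid_staff <- 30 + 0.8 * resident$basoll_mood -
  0.15 * resident$awarecare_ri + 2 * resident$benzodiazepine + rnorm(n, 0, 4)

fit_full <- lm(qualid_staff ~ basoll_mood + awarecare_ri + benzodiazepine,
               data = resident)
# SPSS stepwise uses entry/removal p-values (0.05 / 0.10); an AIC-based
# stepwise is shown here only as an approximation of that procedure.
summary(stepAIC(fit_full, direction = "both", trace = FALSE))

# PCA suitability checks and a two-factor varimax solution for 11 items.
items <- as.data.frame(matrix(sample(1:5, n * 11, replace = TRUE), ncol = 11))
names(items) <- paste0("q", 1:11)
KMO(items)                           # sampling adequacy (values > 0.5 desired)
cortest.bartlett(cor(items), n = n)  # Bartlett's test of sphericity
pca2 <- principal(items, nfactors = 2, rotate = "varimax")
print(pca2$loadings, cutoff = 0.3)
# Internal consistency (Cronbach's alpha) of one derived item grouping.
alpha(items[, c("q2", "q3", "q4", "q5", "q6", "q7", "q11")])
```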
Prescription of antipsychotic medication was common. The family carers who completed the QUALID were more likely to be adult children than spouses; all visited the resident regularly, mostly once a week. Participating members of care staff were mostly female and from the UK, but 21 out of 64 (33%) were from outside the UK and 20 (31%) had a first language other than English or Welsh. The profile of quality of life ratings by family carers and care staff will first be described, and correlates and predictors of these ratings examined. The psychometric properties of the QUALID in the AwareCare study will then be outlined. Details of scores on all measures are shown in Table 2. Profile of quality of life scores Mean scores for the two sets of QUALID ratings were very similar (lower scores on the QUALID indicate better quality of life). For those residents where quality of life was rated by both a family carer and a member of the care staff (n = 73), mean ratings were slightly higher (less positive) for care staff than for family carers (22.59 ± 6.56 vs. 21.66 ± 6.71), but a paired-samples t-test indicated no significant difference in the means (t(72) = −1.106, p = .273), and the two sets of scores were moderately and significantly correlated (r = .412, p < .001). Neither family member nor care staff ratings differed by care home (F(11,61) = 0.938, p > .05 for family carer ratings; F(11,93) = 1.732, p > .05 for care staff ratings) or according to the gender of the resident (F(1,71) = 0.011, p > .05 for family carer ratings; F(1,103) = 0.444, p > .05 for care staff ratings). Factors associated with family carer ratings Family carer QUALID ratings were available for 73 residents. Table 3 shows the correlations between these ratings and resident personal details and questionnaire scores. The only significant association was with the resident variable of prescription of antipsychotic medication; residents who were prescribed antipsychotic medication were regarded as having poorer quality of life. The regression analysis was initially run including all 22 of the resident personal and questionnaire variables listed in Table 3, and antipsychotic medication emerged as the only predictor variable. However, the sample size was only 49 due to missing values for the AwareCare or GADS measures. As neither variable was included in the model, the regression analysis was re-run excluding these two variables to increase the sample size, but again only antipsychotic medication was identified as a significant predictor. More positive family carer QUALID ratings were significantly predicted by non-use of antipsychotic medication (coefficient = 3.744, SD = 1.623, beta = 0.268, t(70) = 2.306, p = .024, adjusted R² = .058). Factors associated with care staff ratings Table 4 shows the correlations of care staff QUALID ratings (n = 105) with resident personal information and resident questionnaire ratings. Care staff QUALID ratings were significantly associated with the resident variables of FAST stage, prescription of benzodiazepine medication, and number of types of psychotropic medication prescribed. Residents who were more impaired, were prescribed benzodiazepines or were prescribed more types of psychotropic medication were regarded as having poorer quality of life. With regard to the questionnaire measures, there were significant associations with the AwareCare RI score and with the BASOLL self-care, mood and challenging behaviour scores.
Greater responsiveness to stimuli as shown by the AwareCare RI, BASOLL scores indicative of fewer difficulties in self-care, mood and behaviour, and the non-use of benzodiazepines were all associated with better QUALID ratings. Table 4 also shows the correlations between care staff variables and QUALID scores for the 64 care staff participating in the AwareCare RCT phase. The only variables significantly associated with staff ratings of resident quality of life were the staff member's ethnicity and first language. This suggests that staff identifying as British and having English or Welsh as their first language tended to rate resident quality of life more positively than staff from overseas. Only 30 residents had full data for all 33 predictor variables shown in Table 4, so it was not advisable to run a regression analysis using all variables, and two separate analyses were undertaken. Firstly we used the resident personal information and questionnaire ratings as predictors (n = 67), and secondly we used the care staff information and ratings as predictors (n = 64). Using the 20 resident variables, the regression analysis for predicting care staff QUALID rating included AwareCare RI, prescription of benzodiazepine medication, and BASOLL mood score. Table 5 shows the coefficients and adjusted R² values. Using the 13 care staff variables the regression analysis included the care staff member's first language, the MBI emotional exhaustion sub-scale score, and the GHQ score. This suggests that better QUALID ratings were associated with more emotional exhaustion, greater individual psychological well-being, and also with having English or Welsh as the first language, in care staff. Table 6 shows the coefficients and adjusted R² values. The six significant predictor variables identified in Tables 5 and 6 were combined in a further stepwise regression analysis to examine key predictors of care staff ratings of resident quality of life. The sample size for this analysis was 32. The predictor variables chosen were BASOLL mood score (p = .003) and AwareCare RI (p = .003), with an adjusted R² value of .361. Therefore, greater responsiveness and more positive mood were predictive of more positive quality of life ratings. Psychometric properties of the QUALID The psychometric properties of the QUALID were examined separately for the two groups of respondents, care staff and family carers. Details of item-scale reliability and factor structure are shown in Table 7. For responses by family carers, the analysis supported a two-factor solution. The two factors derived were as follows: Factor 1 – discomfort and distress (9 items; α = 0.71) and Factor 2 – sociability (2 items; α = 0.749). Table 7 shows which items primarily loaded onto each of these factors. For responses by care staff, corrected item-scale correlations ranged from .073 for item 8, 'enjoys eating', to .538 for item 11, 'appears calm and comfortable'. In total, 7 items had correlations > .2 and 4 items > .4. Apart from 'enjoys eating', the remaining items correlating < .2 were item 1, 'smiles', item 9, 'enjoys touching/being touched', and item 10, 'enjoys interacting with others'. Cronbach's alpha for the whole scale was .67. Removing item 8, 'enjoys eating', increased this only slightly to .678. Four factors with eigenvalues > 1 were initially identified. However, factor 3 had only two items that primarily loaded onto it while factor 4 had only one, item 8, 'enjoys eating'.
Examination of the scree plot indicated that these two factors had eigenvalues only slightly > 1 and that the major inflection point occurred between factors 2 and 3. Therefore the analysis was repeated stipulating a three-factor solution. In this solution, factor 3 again contained only item 8, 'enjoys eating', and so the analysis was repeated stipulating a two-factor solution. This yielded two factors which were labelled as follows: Factor 1 – discomfort and distress (8 items; α = 0.746), and Factor 2 – sociability (3 items; α = 0.694). Table 7 shows which items primarily loaded onto each of these factors. This differed from the two-factor solution for family carers only in that for family carers item 1, 'smiles', loaded onto Factor 1 (discomfort and distress) rather than Factor 2 (sociability). Discussion This study is one of few to focus on people with very severe dementia who have no, or only very limited, verbal communication, with the aim of understanding more about care staff and family carer perceptions of quality of life in this group and what factors influence these perceptions. The study was the first to examine family carer proxy ratings using the QUALID and the first to examine the role of awareness/responsiveness and staff ethnicity in influencing proxy ratings. This study also provided an opportunity to examine the psychometric properties of the QUALID scale when rated by family carers and by care staff, and its applicability with a UK sample of severely impaired residents. The overall profile of quality of life was broadly consistent with that reported in other studies. Mean scores on the QUALID, rated by both family members and care staff, were slightly more positive in this sample than those reported in previous studies [9][10][11][12]. It is not clear why this was the case, as all studies sampled from long-term dementia care settings, although one study also sampled from psychiatric hospital wards [12]. However, the samples in these previous QUALID studies were less impaired than in the present study. Despite characterising samples as having severe dementia, MMSE score ranges where quoted were wide (0–25 [10] and 0–30 [11]). In the present study, our inclusion criteria were such that participants would not have been expected to complete any items on the MMSE, and hence this measure was not used; the majority of participants met criteria for FAST stage 7. There is no consensus on a definition of 'severe' dementia [28], and clearly there is considerable variability in sample characteristics among studies purporting to investigate aspects of severe dementia. This limits the extent to which meaningful comparisons can be made. In many cases it seems that no cut-off is applied to represent the upper limit of ability in 'severe' dementia, which complicates the picture even further. Even if a cut-off score on the MMSE were to be applied, the differences between someone who scores 10 and is fully mobile and able to comment on his/her own quality of life and someone who is in FAST stage 7, unable to score on the MMSE, immobile and no longer communicating verbally are considerable. There is a need for more precise characterisation of samples and for greater homogeneity of samples in research studies in this field. In our study, the only variable contributing to prediction of family member ratings was prescription of antipsychotic medication, accounting for 5.8% of variance.
Few studies have examined predictors of family member ratings; none of these used the QUALID, and again the samples have tended to be less impaired than in the present study. One study in residential settings identified cognition, health problems and behavioural symptoms of the person with dementia, and use of restraint, as predictive of family member ratings [4]. Another identified resident functional ability, carer contribution to nursing home costs and use of feeding tubes as relevant, accounting for 25.1% of variance, but found no effect of carer stress or emotional well-being [7]. In a sample recruited initially from general hospitals, there was no association between carer stress or psychological distress and proxy ratings of quality of life [5], although these factors were associated with the person's self-ratings of quality of life; factors influencing carer ratings were the person's functional ability and dementia severity. One study with a community sample identified predictors of proxy quality of life ratings by family members as cognition, functional ability, neuropsychiatric symptoms, depression and prescription of antipsychotic medication together with carer burden and relationship to the person with dementia, accounting for 59.8% of variance [29]. Family members of community-dwelling people with dementia might be expected to be more directly involved in care and hence more aware of functioning and symptoms, but family carers who provided proxy ratings in our study visited their relatives at least weekly, and therefore had regular opportunities to observe residents' functioning and well-being. A possible explanation for the limited proportion of variance accounted for in the present study could be the severity of impairment in the present sample, as different factors may come into play in very severe dementia. In the AwareCare trial [15], family member ratings of quality of life were sensitive to change whereas care staff ratings were not, which is consistent with the lack of overlap in predictive variables and supports the view that the two sets of ratings are independent and influenced by different factors [17]. The key predictors of care staff ratings in this study were resident mood, awareness/responsiveness (indicated by the AwareCare RI), and prescription of benzodiazepine medication, together accounting for 26.7% of variance. Our findings on resident factors are broadly consistent with those from other studies in long-term care settings, although the samples in many of these studies were less impaired than the present sample. Most studies using the QUALID provide only correlational analyses, indicating associations with mood [9,10], neuropsychiatric symptoms and behaviour [9][10][11], cognition [10,11] and pain [11]; one study reporting a regression analysis found that key predictors were mood and functional ability [12]. Factors identified in previous studies as predictors of staff ratings on other quality of life measures are resident mood, cognition, functional ability or dependency, neuropsychiatric symptoms and behavioural difficulties [1,4,30–33]. In some studies psychotropic drug use, physical health problems and falls [4] also contribute; in our study it was specifically use of benzodiazepines that was associated with care staff quality of life ratings. Inclusion of the AwareCare measure provides novel evidence indicating that resident responsiveness to stimuli and interactions is strongly linked to care staff evaluations of quality of life.
While the QUALID involves assessment of observable behaviours and responses, estimates of frequency of occurrence over the past week will necessarily be somewhat subjective, and in addition items require some judgements, for example regarding whether the resident shows 'enjoyment'. AwareCare is based on direct behavioural observation and on clearly-defined bodily responses, and the association with quality of life ratings could be considered to support the construct validity of the QUALID. It is understandable that staff might hold more positive views of residents who are more likely to respond to, and interact with, them. It has been suggested previously that where staff perceive residents as having the capacity for relationships and activities, they will also ascribe a good quality of life [32]. Nevertheless it is important to exercise caution in equating awareness/responsiveness with quality of life, as we should not necessarily assume that non-responsiveness indicates negative internal states. The effects of care staff variables were also examined. When considering care staff variables alone, extent of emotional exhaustion, lower psychological distress and language were individually significant predictors, and together these accounted for 36.1% of variance. Our tentative finding, albeit based on a smaller sample size (n = 32), that staff factors are weaker predictors than resident factors is consistent with the few previous studies that have included an examination of care staff factors [4,7,32]. Staff distress at neuropsychiatric symptoms [4] and nursing assistant characteristics including attitudes to dementia [32] accounted for only a small proportion of variance in overall regression models. In one study emotional exhaustion, job satisfaction, training and experience were not related to staff ratings of quality of life, although staff shift pattern (permanent vs. rotating) and type of home (private vs. public) contributed alongside resident functional ability, cognition and mood in a regression model accounting for 41.3% of variance [7]. The impact of staff ethnicity has not previously been examined and our data contribute a novel perspective indicating that where staff member and resident share the same ethnic background, staff rate quality of life more positively. Possible explanations for this might be that ratings are influenced by cultural beliefs and assumptions about care for older people and people with dementia, or that communication is more effective where staff are native speakers of the resident's language. With regard to the properties of the QUALID, Cronbach's alpha was slightly lower in our study than in the previous QUALID studies [9][10][11]. Item-scale reliability indicated that item 8, 'enjoys eating', had a very weak correlation with the overall total score in both groups of respondents. This item did not emerge as causing concern in other studies reporting item-scale reliability data [9][10][11]. Items causing concern in earlier studies were item 3, 'cries' [9,10] and item 7, 'appears irritable and aggressive' [11]. Items causing more concern in the present study were item 10, 'enjoys interacting with others', which was weakly correlated for both care staff and family members, and item 9, 'enjoys touching/being touched', which was weakly correlated in the care staff ratings.
Adopting a criterion of taking only correlations of 0.4 or above as supporting internal consistency [11], from available data one study had 4 items reaching this criterion [10] and one had 5 [11]. Our results were consistent with this. These findings indicate that there is a small degree of variability in the internal consistency of the measure across studies; one possible explanation is that this may be attributable to cross-national differences, but it is also important to note that the samples in previous studies tended to be less severely impaired than the present sample. Nevertheless, our data support the applicability of the QUALID. Factor analysis produced similar results for family members and care staff. Previous studies of factor structure in care staff responses have been inconsistent. The original development study [9] and the subsequent evaluation of the Swedish version [10] suggested a single factor. However, subsequent studies reported a two-factor [12] or a three-factor [11] solution. Reviewing the factor structures identified in these two studies and in the two groups of respondents in the present study, it appears that two main components can be distinguished. Seven items (2, 3, 4, 5, 6, 7, 11) seem to share common elements even though the factor labels may differ slightly, as these items load onto factors labelled 'discomfort', 'discomfort or distress', or 'negative mood', encapsulating the presence or absence of negative physical and emotional states. Three items (1, 9, 10) load onto factors labelled 'social interaction', 'sociability', or 'comfort', encapsulating the presence or absence of social communication, responsiveness and positive emotional states. Item 8, 'enjoys eating', shows most variability and does not fit reliably into either of these two groupings. Residents with severe dementia often lack appetite, may need help with eating, and are likely to be fed by care staff, and hence the question may have limited suitability for this group. In any future revision of the scale it may be useful to consider omitting or revising this item, as well as identifying ways of improving upon any other items found to have low internal consistency across studies using the QUALID. Several limitations resulting from study design must be taken into consideration. Family carers were asked to provide ratings of resident quality of life only, and we were not able to collect information about factors pertaining to family carers themselves that may have been associated with their proxy ratings of quality of life. The ability to respond accurately to items on the QUALID depends on frequent observation and hence may have been somewhat challenging for the majority of family members who visited once or twice a week. Sample size was restricted for family carer QUALID ratings; as the scale asks about observations during the past week, we were only able to obtain ratings from family members who visited regularly. Sample size was also restricted for care staff variables as the relevant measures were used only in the intervention phase of the study, and for the residents' AwareCare RI score as no observations were available for participants residing in care homes that took part in the intervention trial but were allocated to the control condition.
However, although our sample size was relatively small for regression analyses, post hoc power calculations suggested that, by limiting the number of predictor variables in each model, we retained sufficient power to detect medium to large effect sizes. The residents and care staff were drawn from 12 residential homes, but with an average of only 8.75 residents and carers from each home it was not feasible to examine whether facility-level factors had any effect on ratings, as has been suggested [7]. Nevertheless, despite these limitations, this study provides new evidence about factors affecting care staff perceptions of resident quality of life and indicates directions for further research, particularly with regard to factors related to family carer ratings. While the practicalities of obtaining ratings from residents' family members can be challenging, it is important to include this distinct perspective alongside that of care staff in studies examining the quality of life of severely-impaired residents with dementia who are unable to communicate verbally regarding their own perceptions. Conclusions The findings of this study offer novel evidence about predictors of care staff proxy ratings of quality of life, identifying resident mood and awareness/responsiveness as key predictors with a more limited contribution from staff emotional exhaustion and ethnicity. The findings further demonstrate that commonly-assessed resident variables explain little of the variability in family members' proxy ratings of quality of life and point to a need to think more broadly about what might be salient for family members in this situation. This study contributes to the growing body of evidence about the suitability of the QUALID as a proxy measure of quality of life in severe dementia, and supports the applicability of this measure, which captures both negative and positive elements of observed experience. This study is one of very few to focus on a clearly-described group of residents with very severe dementia, and demonstrates a need for studies evaluating quality of life to identify more homogeneous samples. It is particularly important for research to pay careful attention to this group of people with very severe dementia who cannot speak for themselves and express their views about their own quality of life, so that appropriate ways of promoting quality of life can be identified and implemented.
Evaluating the effects of benzoic acid on nursery and finishing pig growth performance Abstract Three studies were conducted evaluating the use of benzoic acid in swine diets. In experiment 1, 350 weanling barrows (DNA 200 × 400; initially 5.9 ± 0.04 kg) were allotted to one of the five dietary treatments with 14 pens per treatment. Diets were fed in three phases: phase 1 from weaning to day 10, phase 2 from days 10 to 18, and phase 3 from days 18 to 38. Treatment 1 contained no benzoic acid throughout all three phases (weaning to day 38). Treatment 2 included 0.50% benzoic acid throughout all three phases. Treatment 3 contained 0.50% benzoic acid in phases 1 and 2, and 0.25% benzoic acid in phase 3. Treatment 4 contained 0.50% benzoic acid in phases 1 and 2, and no benzoic acid in phase 3. Treatment 5 contained 0.50% benzoic acid in phase 1, 0.25% benzoic acid in phase 2, and no benzoic acid in phase 3. For the overall period, pigs fed 0.50% in the first two phases and 0.25% benzoic acid in the final phase had greater (P < 0.05) average daily gain (ADG) than pigs fed no benzoic acid through all three phases, or pigs fed 0.50% in the first two phases and no benzoic acid in the final phase, with pigs fed the other treatments intermediate. Pigs fed 0.50% in the first two phases and 0.25% benzoic acid in the final phase had improved (P < 0.05) gain-to-feed ratio (G:F) compared with pigs fed no benzoic acid throughout all three phases, pigs fed 0.50% in the first two phases and no benzoic acid in the third phase, or pigs fed 0.50%, 0.25%, and no benzoic acid, respectively. For experiment 2, a 101-d trial was conducted using two groups of 1,053 finishing pigs (2,106 total pigs; PIC 337 × 1,050; initially 33.3 ± 1.9 kg). Dietary treatments were corn–soybean meal–dried distillers grains with solubles-based with the addition of none, 0.25%, or 0.50% benzoic acid. Overall, pigs fed increasing benzoic acid had a tendency for increased average daily feed intake (linear, P = 0.083) but decreased G:F (linear, P < 0.05). In experiment 3, 2,162 finishing pigs (DNA 600 × PIC 1050; initially 31.4 ± 2.2 kg) were used in a 109-d trial. Dietary treatments were formulated with or without 0.25% benzoic acid. For the overall experimental period, pigs fed benzoic acid had increased (P < 0.05) G:F. In summary, feeding benzoic acid elicits improved growth performance when fed throughout the entire nursery period, while improved G:F in growing-finishing pigs was observed in one experiment, but not in the other.
Introduction As the swine industry continues to reduce the use of feed-grade antimicrobial growth promoters and pharmacological levels of Zn, finding new alternatives that elicit similar responses has become a focal point (Kongsted and McLoughlin, 2023). A specific class of feed additives that has shown positive effects on growth performance and gut health is organic acids (Suiryanrayna and Ramana, 2015). Acidifiers, such as benzoic acid, are suggested to lower the pH of the gastrointestinal tract, leading to potential improvements in nutrient digestion, growth performance, and gut microbiota (Torrallardona et al., 2007; Rao et al., 2023). Furthermore, benzoic acid has been shown to exhibit antimicrobial properties within the gastrointestinal tract (Knarreborg et al., 2002; Outlaw et al., 2023). This mechanism is largely driven by the acidic environment inhibiting the growth of pathogens through the accumulation of hydrogen ions. The acids themselves can also disrupt bacterial cell walls (Nguyen et al., 2020). Due to these benefits, the effects of benzoic acid on growth performance and intestinal morphology have been widely studied (Torrallardona et al., 2007; Zhai et al., 2017; Warner et al., 2023). Most studies have reported positive responses in growth performance, but only evaluated one level of the acidifier, often in combination with other feeding strategies (Torrallardona et al., 2007; Zhai et al., 2020; Warner et al., 2023). The few studies that have evaluated multiple benzoic acid feeding levels in nursery diets observed positive responses up to the highest inclusion level (Zhai et al., 2017; Silveria et al., 2018). Despite these findings, previous studies have failed to evaluate the feeding duration throughout the nursery period, thus disregarding an important economic component. Therefore, additional research is needed to further validate benzoic acid feeding strategies throughout the entire nursery period. Despite the information on benzoic acid inclusion in nursery diets, there has been limited research focusing on the addition of benzoic acid in finishing diets. Recently, a review by Rao et al. (2023) focused on the effects of different feed additives on finishing pig growth performance, concluding that acidifiers had the potential to positively impact feed efficiency when compared to other feed additives. However, the challenge with the current research utilizing benzoic acid in finishing diets is the consistency of response. In the feed additive review, implementing acidifiers in finishing diets had an impact on average daily gain (ADG) ranging from −14.9% to 11.4% and an impact on feed efficiency ranging from −9.7% to 11.3% (Rao et al., 2023). More research is needed to understand the consistency in response from feeding finishing pigs diets containing acidifiers. Consequently, the objective of these studies was to evaluate different benzoic acid feeding strategies on nursery pig performance and investigate the effects of benzoic acid supplementation in finishing pig diets.
Materials and Methods The Kansas State University Institutional Animal Care and Use Committee approved the protocols used in these experiments (IACUC #4506, #4375, and #4564). Experiment 1 was conducted at the Kansas State University Segregated Early Weaning Facility located in Manhattan, KS. The pigs were housed in two identical barns. Each barn was enclosed and environmentally controlled using mechanical ventilation. Pens had a metal tri-bar floor, contained a cup waterer, and had a four-hole, dry self-feeder. Pens (1.2 × 1.2 m) housed 5 pigs, which allowed approximately 0.30 m²/pig. Experiment 2 was conducted in two barns at a commercial research grow-finish site located in southwest Minnesota (New Horizon Farms, Pipestone, MN). Each barn had slatted concrete floors and deep-pit manure storage, and was naturally ventilated and double-curtain-sided. Pens (3.0 × 5.5 m) contained a cup waterer and a five-hole stainless steel dry self-feeder (Thorp Equipment, Thorp, WI). Pigs were allowed approximately 0.6 m²/pig. Experiment 3 was conducted at a commercial research facility located in southwest Minnesota (Pipestone Nutrition, Edgerton, MN). Pigs were housed in a temperature-controlled wean-to-finish facility. Each pen (6.8 × 2.5 m) contained one nipple waterer and a four-hole dry self-feeder. Pigs were allowed approximately 0.6 m²/pig. Daily feed additions were recorded for experiments 2 and 3 with a computerized feeding system (FeedPro; Feedlogic Corp., Willmar, MN). Pigs were provided ad libitum access to the treatment diets and water throughout all three experiments. Animals and Diets For experiment 1, a total of 350 weanling barrows (DNA 200 × 400; initially 5.9 ± 0.04 kg) were randomly assigned to pens and pens were allotted to one of the five dietary treatments with 14 replications per treatment. Diets were fed in three phases: phase 1 from weaning to day 10, phase 2 from days 10 to 18, and phase 3 from days 18 to 38. Dietary treatments were formulated to provide 0%, 0.25%, or 0.50% benzoic acid (minimum 99.5% pure benzoic acid; VevoVitall, DSM Nutritional Products, Parsippany, NJ) added at the expense of corn (Table 1). These inclusions were selected as they represent common industry levels fed throughout the nursery period. Treatment 1 served as the control and contained no benzoic acid throughout all three phases. Treatment 2 included 0.50% benzoic acid throughout all three phases. Treatment 3 contained 0.50% benzoic acid for phases 1 and 2, then 0.25% benzoic acid in phase 3. Treatment 4 contained 0.50% benzoic acid in phases 1 and 2, and no benzoic acid in phase 3. Treatment 5 contained 0.50% benzoic acid in phase 1, 0.25% benzoic acid in phase 2, and no benzoic acid in phase 3. For both the phases 1 and 2 diets, a single batch of a base diet was manufactured (Hubbard Feeds, Beloit, KS), then benzoic acid additions were mixed in at the Kansas State University O.H. Kruse Feed Technology Innovation Center, Manhattan, KS. For phase 3, complete diets were manufactured (Hubbard Feeds, Beloit, KS). The diets were fed in meal form and pig weights and feed disappearance were measured on days 0, 10, 18, 24, and 38 to determine ADG, average daily feed intake (ADFI), and gain-to-feed ratio (G:F). Feces were collected on days 10 and 24 from 3 pigs per pen to determine fecal dry matter (DM). Samples were dried at 55 °C for 48 h and loss of weight was used to determine the percentage of fecal DM.
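To illustrate how the response criteria are derived, the pen-level arithmetic can be sketched as follows (in R, with invented numbers; the paper does not show its own data handling):

```r
# Minimal sketch of the pen-level performance arithmetic (hypothetical numbers).
pigs_per_pen <- 5
days         <- 10        # e.g. phase 1, weaning to day 10
pen_wt_start <- 29.5      # kg, total pen weight at day 0
pen_wt_end   <- 47.0      # kg, total pen weight at day 10
feed_disappearance <- 35.0  # kg of feed used by the pen over the phase

adg  <- (pen_wt_end - pen_wt_start) / (pigs_per_pen * days)  # kg/pig/day
adfi <- feed_disappearance / (pigs_per_pen * days)           # kg/pig/day
gf   <- adg / adfi                                           # gain-to-feed ratio

# Fecal dry matter from weight loss during drying (55 C for 48 h).
wet_wt <- 20.0                           # g, fecal sample before drying
dry_wt <- 5.1                            # g, fecal sample after drying
fecal_dm_pct <- dry_wt / wet_wt * 100    # percentage dry matter

c(ADG = adg, ADFI = adfi, G_F = gf, fecalDM = fecal_dm_pct)
```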
In experiment 2, a 101-d growth trial was conducted using two groups of 1,053 finishing pigs (2,106 total pigs; PIC 337 × 1,050; initially 33.3 ± 1.91 kg). Pens of pigs (27 pigs per pen), with a similar number of barrows and gilts per pen, were randomly assigned to one of the three dietary treatments in a completely randomized design with 13 replications per treatment in each barn for a total of 26 replications. Dietary treatments were corn–soybean meal–dried distillers grains with solubles-based with the addition of none, 0.25%, or 0.50% benzoic acid. Diets were fed in four phases from 34 to 50, 50 to 75, 75 to 100, and 100 to 132 kg body weight (BW). All treatments were formulated to meet or exceed NRC (2012) requirements for finishing pigs for their appropriate weight ranges (Table 2). All diets were manufactured at New Horizon Farms Feed Mill (Pipestone, MN). Every 2 wk, pens of pigs were weighed and feed disappearance was measured to determine ADG, ADFI, and G:F. The three heaviest pigs per pen were visually determined, weighed, and marketed 2 wk prior to the end of the experiment. These pigs were included in growth performance data but were not evaluated for carcass characteristics. At the completion of the experiment, the remaining pens of pigs were weighed and marketed. Pigs were transported to a U.S. Department of Agriculture-inspected packing plant (JBS Swift, Worthington, MN). Carcass data were collected including hot carcass weight (HCW), loin depth, and backfat. Percentage lean was calculated using a proprietary equation from the plant. Carcass yield was calculated using the pen average HCW divided by the pen average final live weight. In experiment 3, a total of 2,162 finishing pigs (DNA 600 × PIC 1,050; initially 31.4 ± 0.47 kg) were used in a 109-d trial. Dietary treatments were formulated in a 2 × 2 factorial arrangement with main effects of soybean meal source and benzoic acid. The main effect of soybean meal source was included to accomplish a separate objective and will not be discussed further as there were no interactions with benzoic acid inclusion. Dietary treatments for the main effect of benzoic acid were formulated with or without 0.25% benzoic acid. On day 0, pens were blocked by location in the barn and randomly allotted to one of the two benzoic acid treatment levels. There were 27 or 28 pigs per pen and 40 pens per benzoic acid treatment. A similar number of barrows and gilts were placed in each pen. Experimental diets were fed in six different phases. Pigs were fed by a feed budget with phases 1, 2, 3, 4, and 5 provided at 19, 42, 49, 50, and 45 kg per pig, respectively. Phase 6 was provided for the remainder of the study until pigs were marketed. Pens of pigs were weighed and feed disappearance was measured approximately every 14 d to determine ADG, ADFI, and G:F. All nutrients were formulated to meet or exceed NRC (2012) requirement estimates (Table 3). All diets were corn–soybean meal-based and were fed in meal form. Diets were manufactured at the Spronk Brothers Feed Mill (Edgerton, MN).
On days 88 and 96, eight of the heaviest pigs per pen were visually selected, weighed individually, and transported to a commercial packing plant (WholeStone Farms, Fremont, NE) for processing and carcass data collection. The remaining pigs were marketed at the conclusion of the trial on day 109 and transported to WholeStone Farms for carcass data collection. A fat sample was taken from the belly of one barrow per pen per marketing event and included all three layers of fat. Analysis of iodine value was conducted using near-infrared spectroscopy at WholeStone Farms. (Table 3 footnotes identify ingredient sources as Lester Feed and Grain (Lester, IA) and Thr Pro (CJ America Bio, Downers Grove, IL), and note that Quantum Blue 5P (AB Vista, Marlborough, UK) was included at 2,000 FTU/kg to provide an estimated release of 0.11% STTD P for all diets.) Statistical Analysis For experiment 1, data were analyzed as a completely randomized design with pen serving as the experimental unit, treatment as a fixed effect, and barn as a random effect. Data were analyzed using R Studio (version 3.5.2, R Core Team, Vienna, Austria). Contrasts were used to test for the main effects of the different benzoic acid feeding levels (0%, 0.25%, and 0.50%) within the three phases. Overall performance data were analyzed as a one-way ANOVA using the lmer function from the lme4 package. Differences for treatments demonstrating a significant source of variation were determined through pairwise comparisons using the Tukey-Kramer multiplicity adjustment to control for type I error. Similarly, contrasts were used to test for the main effects of treatment, day, and the interaction between treatment and day of different benzoic acid feeding levels on fecal DM. For experiment 2, data were analyzed as a completely randomized design with pen as the experimental unit and treatment as the fixed effect. Data were analyzed as a one-way ANOVA using the lmer function from the lme4 package in R (version 4.1.1 [August 10, 2021], R Foundation for Statistical Computing, Vienna, Austria). Contrasts were used to test the main effect of benzoic acid levels (0%, 0.25%, and 0.50%). Similarly, contrasts were used to analyze carcass characteristics including backfat, loin depth, and percentage lean with HCW serving as a covariate. For experiment 3, data were analyzed using the GLIMMIX procedure of SAS OnDemand for Academics (SAS Institute, Inc., Cary, NC) in a randomized complete block design with pen as the experimental unit and location as the blocking factor. Treatments were considered a fixed effect and block as a random effect. The main effect of benzoic acid (0% vs. 0.25%) was analyzed. For all three experiments, results were considered significant at P ≤ 0.05 and marginally significant at 0.05 < P ≤ 0.10.
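A minimal sketch of the experiment 1 analysis follows, using lme4 as named in the text; the emmeans calls for Tukey-adjusted pairwise comparisons and a linear contrast are one plausible implementation, since the paper does not show its contrast code, and the data frame is simulated rather than the study data:

```r
library(lme4)     # lmer(), as named in the text
library(emmeans)  # one common way to obtain contrasts and Tukey-adjusted pairs

set.seed(3)
# Hypothetical pen-level data mimicking experiment 1's layout:
# 5 treatments x 14 pens, with pens split across 2 barns (illustrative only).
d <- data.frame(
  treatment = factor(rep(paste0("T", 1:5), each = 14)),
  barn      = factor(rep(1:2, times = 35)),
  adg       = rnorm(70, mean = 0.45, sd = 0.05)
)

# Treatment as a fixed effect and barn as a random effect, per the text
# (with only 2 barn levels the random-effect variance may be estimated near 0).
m <- lmer(adg ~ treatment + (1 | barn), data = d)
anova(m)

# Tukey-Kramer-adjusted pairwise comparisons among treatment means.
emm <- emmeans(m, ~ treatment)
pairs(emm, adjust = "tukey")

# Example linear contrast, e.g. control (T1) vs. the mean of the benzoic acid
# programs (illustrative; the paper's contrasts tested 0%, 0.25%, and 0.50%
# within phase).
contrast(emm, list(ctrl_vs_acid = c(1, -0.25, -0.25, -0.25, -0.25)))
```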
Experiment 1

From days 0 to 10 (phase 1), pigs fed 0.50% benzoic acid had increased (P ≤ 0.05) ADG and G:F and heavier day 10 BW than those fed the control diet (Table 4). From days 10 to 18 (phase 2), pigs fed 0.50% benzoic acid had increased (P < 0.01) ADG compared to pigs fed either none or 0.25% benzoic acid, while pigs fed 0.25% benzoic acid had decreased (P < 0.001) G:F compared to pigs fed none or 0.50% benzoic acid. There was a significant increase in ADFI for pigs fed 0.25% (P = 0.033) benzoic acid and a marginally significant increase in ADFI for pigs fed 0.50% (P = 0.069) benzoic acid in phase 2 compared to pigs fed no benzoic acid; however, they did not differ from each other. Pigs fed 0.50% benzoic acid had increased (P = 0.012) day 18 BW compared to pigs fed no benzoic acid, while pigs fed 0.25% benzoic acid were intermediate. From days 0 to 18 (phases 1 and 2), pigs fed 0.50% benzoic acid had increased (P = 0.013) ADG compared to pigs fed no benzoic acid, but there was no evidence of differences (P > 0.10) in ADFI or G:F. Among the individual feeding programs for the combined phase 1 and 2 period, there were no differences (P > 0.10) in ADG or ADFI. However, pigs fed 0.50% benzoic acid in phase 1 and 0.25% benzoic acid in phase 2 had lower (P < 0.05) G:F than pigs fed 0.50% benzoic acid throughout both phases, with pigs fed no benzoic acid being intermediate. From days 18 to 38 (phase 3), pigs fed 0.50% or 0.25% benzoic acid had increased (P < 0.01) ADG and ADFI compared with pigs fed no benzoic acid. Additionally, pigs fed 0.25% benzoic acid in phase 3 had improved (P < 0.05) G:F compared to pigs fed none or 0.50% benzoic acid.

For the overall experimental period (days 0 to 38), pigs fed 0.50% benzoic acid in the first two phases and 0.25% benzoic acid in the final phase had greater (P < 0.05) ADG than pigs fed no benzoic acid through all three phases and pigs fed 0.50% benzoic acid in the first two phases with no benzoic acid in the final phase, while pigs fed the other treatments were intermediate. Pigs fed 0.50% in phases 1 and 2 and 0.25% benzoic acid in the final phase had increased (P < 0.05) G:F compared with pigs fed no benzoic acid throughout all three phases, pigs fed 0.50% in the first two phases and no benzoic acid in the third phase, and pigs fed 0.50%, 0.25%, and no benzoic acid, respectively. There was also evidence for differences (P < 0.01) in day 38 BW, with pigs fed 0.50% benzoic acid in the first two phases and 0.25% benzoic acid in the third phase having increased (P < 0.01) BW compared with pigs fed no benzoic acid throughout all three phases and pigs fed 0.50% benzoic acid in the first two phases with no benzoic acid in the final phase. There was no evidence (P > 0.10) of an interaction between treatment and day for fecal DM. Furthermore, there was no evidence of a main effect of treatment (P > 0.10) for fecal DM. However, there was evidence for a main effect of day (P < 0.001), with greater fecal DM on day 10 vs. day 18.
Experiment 2

In the grower period (days 0 to 44), there was no evidence of differences (P > 0.10) for any growth response criteria (Table 5). For the finisher period (days 44 to 101), increasing benzoic acid tended to increase ADFI (linear, P = 0.053) and decreased G:F (linear, P = 0.002). There was no evidence for differences (P > 0.10) in ADG. Similarly, for the overall experimental period (days 0 to 101), pigs fed increasing benzoic acid had a tendency for increased ADFI (linear, P = 0.083) and had decreased G:F (linear, P = 0.011). There was no evidence of difference in ADG (P > 0.10) for the overall experimental period. Furthermore, there was no evidence of differences (P > 0.10) in grower BW (day 44) or final BW (day 101).

For carcass characteristics, no evidence of difference (P > 0.10) was observed for any criteria, including HCW, carcass yield, backfat, loin depth, or percentage lean, due to increasing benzoic acid.

Experiment 3

From days 0 to 51, pigs fed diets without benzoic acid had greater (P ≤ 0.01) ADG and ADFI compared to pigs fed diets containing benzoic acid (Table 6). Pigs fed diets containing benzoic acid had increased (P < 0.001) G:F compared to pigs fed diets without benzoic acid. From days 51 to 109, there was a tendency for an increase (P = 0.06) in ADG in pigs fed diets containing benzoic acid compared to those fed diets without benzoic acid. There were no effects (P > 0.10) of benzoic acid on ADFI or G:F during this period. Overall (days 0 to 109), pigs fed benzoic acid had decreased (P = 0.02) ADFI but similar (P > 0.10) ADG. As a result, pigs fed benzoic acid had improved (P = 0.01) G:F compared to pigs fed diets without benzoic acid. Carcass characteristics and iodine value were not impacted (P > 0.10) by benzoic acid treatment.

Discussion

As the swine industry decreases the use of antimicrobial growth promoters and pharmacological levels of Zn, researchers have begun to investigate multiple feed additives, including acidifiers, prebiotics, probiotics, phytogenics, nucleotides, and direct-fed microbials, in order to maintain pig health and growth performance (Liu et al., 2018). The growing interest in acidifiers is largely driven by their proposed multi-mechanistic effects stemming from lowered gastric pH.
Benzoic acid is an aromatic carboxylic acid and is classified as a weak organic acid. When added at 0.50% in a complete swine diet, benzoic acid can reduce the calculated acid-binding capacity-4 by approximately 30 mEq/kg (Warner et al., 2023). Although benzoic acid tends to be less acidic than other organic acids, such as formic acid, it has been shown to successfully decrease the pH of stomach contents when fed at levels ranging from 0.20% to 0.75% (Chen et al., 2017; Silveira et al., 2018). In addition to acidification of the gastrointestinal tract, benzoic acid has been shown to reduce urine pH through excretion of its metabolite, hippuric acid. Weaning is a stressful event in a pig's life and typically occurs during a critical window of immune and intestinal development (Moeser et al., 2017). During this time, weaned pigs exhibit reduced stomach acid secretions, which do not typically peak until 56 d of age (Yen, 2001; Pluske, 2016). Thus, a pig is challenged with a multitude of factors, including changes in environment, diet, and physiological development, at the time of weaning. Therefore, the addition of an acidifier to nursery diets offers the potential for a reduction in pH in the digestive tract. This reduction in pH has the potential to assist a young pig through antimicrobial effects, increased nutrient digestion, and improved intestinal health, ultimately yielding improved growth performance (Liu et al., 2018; Tugnoli et al., 2020). Many studies utilizing benzoic acid in nursery diets evaluate only one level of the acidifier, often in combination with other feeding strategies (Guggenbuhl et al., 2007; Torrallardona et al., 2007; Zhai et al., 2020; Warner et al., 2023). These studies all found significant improvements in growth performance, including increased ADG and G:F, with 0.50% benzoic acid added to the diet. Furthermore, the few studies that have evaluated multiple benzoic acid feeding levels in nursery diets observed positive responses up to the highest levels of inclusion, 0.50% and 0.75% (Zhai et al., 2017; Silveira et al., 2018). Despite previous literature evaluating the growth performance of nursery pigs supplemented with benzoic acid, there is currently no research investigating feeding duration and level throughout the nursery period.

A review by Kil et al. (2011) outlined the results of multiple previous studies utilizing benzoic acid in the nursery period and found a positive response during the first 2 wk postweaning. However, according to experiment 1, it appears that feeding 0.50% benzoic acid in all three phases, or 0.50% benzoic acid for the first two phases and 0.25% in the third phase, will result in the best growth performance for nursery pigs. The reduction in performance reported when benzoic acid was removed from the diet, both when transitioning from phase 1 to 2 and when transitioning from phase 2 to 3, is a unique finding of this study. One potential explanation for this decrease in performance is a change in feed palatability when benzoic acid is removed from the diet. A study by Partanen et al. (2002) reported that pigs preferred diets containing sodium benzoate compared to other organic acids such as formic and lactic acid. However, in experiment 1, no decrease in feed intake was observed when benzoic acid was reduced in the diet in phase 2.
Therefore, the reduction in ADG may be attributed to shifts in gastrointestinal pH or digestibility when benzoic acid levels change. Furthermore, since pigs were 18 d postweaning and approximately 10.5 kg BW when this change from phase 2 to phase 3 occurred, this might indicate their gastrointestinal system had not matured enough in physiological development to handle a less complex diet. Ultimately, further research is needed to truly understand the mechanism behind this decrease in growth performance.

Similar to the nursery period, implementing the optimal level and duration of benzoic acid in growing-finishing diets presents unique challenges. A review by Rao et al. (2023) focusing on the effects of feed additives on finishing pig growth performance showed acidifiers had a potential positive impact on feed efficiency when compared to other feed additives. However, the challenge with the current research utilizing benzoic acid in finishing diets is the consistency of response. In the feed additive review, implementing acidifiers in finishing diets had a wide impact on growth performance, with effects on ADG ranging from −14.9% to 11.4% and feed efficiency ranging from −9.7% to 11.3% (Rao et al., 2023). The data in experiment 2 investigating the effects of feeding increasing levels of benzoic acid suggest that feeding benzoic acid in the grow-finish period had no effect on ADG but tended to increase ADFI and decreased G:F by 1.7%. These data contradict past studies showing a positive response (Zhai et al., 2017) or no response (Cho et al., 2014; O'Meara et al., 2020) to benzoic acid supplementation in the growing-finishing period. The consistency of response to benzoic acid supplementation has proven to be more variable during the grow-finish production phase compared to the nursery period across past literature. This inconsistency in response was demonstrated further in experiment 3, where G:F improved by 1.1% when 0.25% benzoic acid was added to the diet. In summary, these data suggest that feeding benzoic acid for the first 38 d postweaning improves ADG and G:F. However, when benzoic acid was removed from the nursery diet, pigs experienced a reduction in performance and ultimately had similar performance to pigs fed no benzoic acid throughout the entire experimental period. In this study, feeding 0.50% benzoic acid in all three phases, or 0.50% benzoic acid for the first two phases and 0.25% in the third phase, resulted in the best growth performance throughout the nursery period. Furthermore, the two finisher trials conducted demonstrate the inconsistency of response to benzoic acid supplementation in finishing diets. One trial suggests that feeding benzoic acid in the grow-finish period had no impact on ADG but tended to increase ADFI and worsen G:F, while the other study found that additions of benzoic acid improved feed efficiency. Overall, further research is warranted to better understand under what conditions a positive response might repeatedly be observed in the growing-finishing period, as well as to understand the carry-over effect of benzoic acid supplementation from the nursery into the finishing period.
Table 1. Diet composition (as-fed basis), experiment 1.
2 MEPro, Prairie Aquatech, Brookings, SD.
3 Provided per kg of diet: 4,134 IU vitamin A; 1,653 IU vitamin D; 44 IU vitamin E; 3 mg vitamin K; 0.03 mg vitamin B12; 50 mg niacin; 28 mg pantothenic acid; 8 mg riboflavin. Ronozyme HiPhos GT (DSM Nutritional Products, Parsippany, NJ) was included at 1,250 FTU/kg to provide an estimated release of 0.14% STTD P for all diets.
4 Provided per kg of diet: 110 mg Zn from zinc sulfate; 110 mg Fe from iron sulfate; 33 mg Mn from manganese oxide; 17 mg Cu from copper sulfate; 0.30 mg I from calcium iodate; 0.30 mg Se from sodium selenite.

Table 4. Effects of benzoic acid feeding strategy on nursery pig performance, experiment 1.

Table 5. Effects of increasing benzoic acid on grow-finish pig growth performance and carcass characteristics, experiment 2.
1 A total of 2,106 pigs (PIC 337 × 1,050; initially 33.3 ± 1.91 kg) were used in two groups with 27 pigs per pen and 26 replicates per treatment.
2 VevoVitall, DSM Nutritional Products, Parsippany, NJ.
3 Adjusted using hot carcass weight as a covariate.

Table 6. Main effect of benzoic acid on growth performance, carcass characteristics, and carcass iodine value, experiment 3.
1 A total of 2,162 pigs (DNA 600 × PIC 1,050; initially 31.4 ± 0.47 kg) were used, with 27 to 28 pigs per pen and 40 replications per benzoic acid treatment, for a 109-d trial.
Quantifying multiple breeding vital rates in two declining grassland songbirds

Many studies of reproductive success in North American songbirds have focused on nesting success, while relatively few have evaluated breeding-season adult survival and post-fledging survival. Grassland songbirds are among North America's most rapidly declining avian groups, and knowledge of factors that influence vital rates is needed to address declines, develop management strategies, and accurately model population limitation. We concurrently monitored nesting success, breeding-season adult survival, and post-fledging survival of two grassland obligates, Baird's Sparrow and Grasshopper Sparrow, breeding in western North Dakota and northeastern Montana. Nesting success was monitored by locating and visiting nests at regular intervals, while adult and post-fledging survival were assessed by daily telemetry tracking of radio-tagged birds. We analyzed the three variables using logistic exposure and modeled climate, temporal, and vegetative covariates to explain variation in rates. Cumulative nesting success, breeding-season adult survival, and post-fledging survival were 37%, 78%, and 25%, respectively, for Baird's Sparrow and 16%, 74%, and 55% for Grasshopper Sparrow. Both nesting success and post-fledging survival in Baird's Sparrow were responsive to environmental covariates, including temporal effects and vertical vegetation structure. Conversely, vital rates of Grasshopper Sparrow were largely unresponsive to covariates we modeled, perhaps because of the species' broader habitat niche relative to Baird's Sparrow. Breeding-season adult survival in both species showed little annual variation and was high relative to overwintering survival estimates for the same species, while post-fledging survival in Baird's Sparrow was low and may be a management concern. We suggest as a next step the formal comparison of vital rates across life-stages in an integrated population model capable of identifying sources of population limitation throughout the full annual cycle of the species.
INTRODUCTION

Accurate demographic rates are fundamental to our understanding of species ecology (Pulliam 1988, Murdoch 1994), particularly when reversing population decline is a management goal (Anders and Marshall 2005). Avian population dynamics are influenced by multiple vital rates and life-history traits (Clark and Martin 2007). These include not only those that are directly tied to fecundity, such as nesting success and clutch size, but also rates that affect recruitment-to-death ratios and lifetime reproductive success, such as adult and juvenile survival (Stahl and Oli 2006, Clark and Martin 2007). Vital rates may display both spatial and temporal variation, changing across environmental conditions and throughout the annual cycle of species (Rushing et al. 2017). Therefore, addressing species declines often requires a complete understanding of vital rates that influence population dynamics (Fletcher et al. 2006), and in some cases, failure to assess multiple rates can result in the overlooking of life-history phases that are critical for management (Crouse et al. 1987). Although species abundance data can be effectively used to track population changes over time (e.g., Rosenberg et al. 2019), field-collected data describing vital rate performance under different habitat conditions, and at different life-stages, may be needed to understand population changes mechanistically (Donovan et al. 1995, McCoy et al. 1999, Eng et al. 2011).

Among North American songbirds, breeding-season reproductive studies often focus on nesting success, ignoring adult and post-fledging survival (Streby and Andersen 2011). This oversight is likely due in part to the difficulty and cost associated with tracking individual birds within and across seasons (Kershner et al. 2004, Suedkamp Wells et al. 2007, Rush and Stutchbury 2008, Cox et al. 2014). Yet, in populations of avian species, both adult and post-fledging survival can influence population dynamics (Pulliam et al. 1992, Anders and Marshall 2005, Fletcher et al. 2006, Bonnot et al. 2011, Cox et al. 2014), and the importance of these vital rates often varies among species (Stahl and Oli 2006). Further, habitat requirements can differ among life-history stages; thus managing for a single vital rate may be inadequate. For example, good foraging habitat for adults may differ from high-quality nest sites (Steele 1993), while fledglings may select habitat cover that differs from adult preferences or changes with age (Jones and Bock 2005, Small et al. 2015). If habitat preference is linked to fitness in any respect (Chalfoun and Schmidt 2012), it is reasonable to expect that response to environmental conditions among vital rates may differ accordingly. Therefore, there is a need to examine the effect of environmental covariates on multiple vital rates within populations, both to guide management (Perlut et al. 2008a, Young et al. 2019) and to provide baseline data for the construction of accurate population models (Streby and Andersen 2011). North America's grassland obligate songbirds provide a relevant model system to explore the importance of monitoring multiple vital rates, as many species within this group are both steeply declining and have been poorly studied with respect to adult and juvenile survival.
Despite increased attention in recent decades, grassland songbirds and their habitats remain in crisis (Askins et al. 2007, North American Bird Conservation Initiative 2009, Green et al. 2018). This assemblage of species has experienced steep declines documented from the advent of the Breeding Bird Survey (Rosenberg et al. 2019) and faces a wide variety of threats on both the breeding and wintering grounds. These threats include habitat loss through conversion to agriculture, climate change, fragmentation and disturbance associated with energy development, shrub encroachment, non-native plant species, and disruption of historic fire and grazing regimes (see Brennan and Kuvlesky 2005, Askins et al. 2007, North American Bird Conservation Initiative 2009). Although declines among grassland birds are broadly understood to be driven by agricultural land conversion (Murphy 2003, Pool et al. 2014, Hill et al. 2014), more information on vital rates is required to develop effective, species-level management strategies for implementation in remaining grassland habitats (e.g., Davis 2003, Fletcher et al. 2006, Perlut et al. 2008a).

The mixed-grass prairie of the Northern Great Plains (NGP) is one of the largest and most intact areas of grassland in North America, though today only 50% of its historic 600,000 km² extent remains (Comer et al. 2018). Still, this eco-region is critical breeding habitat for an assemblage of 26 grassland obligate bird species (Askins et al. 2007). We focused our demographic research on two species belonging to this group: Baird's Sparrow (Centronyx bairdii) and Grasshopper Sparrow (Ammodramus savannarum). Both species have suffered population losses of more than 60% since 1970, and both are species of management concern for multiple states and provinces in the Great Plains region (Green et al. 2018). The two species share similar habitat preferences and reproductive ecology in the NGP (Jones et al. 2010, Lipsey and Naugle 2017), and both are short-distance migrants that winter in the desert grasslands of the southwestern United States and northern Mexico (Vickery 1996, Green et al. 2002). However, despite apparent ecological similarities, the two species also present an interesting contrast with respect to degree of habitat specialism (Correll et al. 2019); Baird's Sparrow is narrowly range-restricted to the NGP (however, see Youngberg et al. 2020) and highly specialized to mixed-grass prairie ecosystems during the breeding season (see Green et al. 2002), while Grasshopper Sparrow is continentally distributed and found in a greater variety of surrogate grasslands and open-country habitats (Vickery 1996).

Although the nesting ecologies of these species have been well-studied (e.g., Davis and Sealy 1998, Davis 2003, Jones et al. 2010, Davis et al. 2016), estimates of nesting success in grassland species can vary with climate conditions (George et al. 1992, Skagen and Adams 2012, Conrey et al. 2016, Zuckerberg et al. 2018) as well as habitat and management treatments (e.g., Davis 2005, Lloyd and Martin 2005, Hovick et al. 2012, C. A. Davis et al. 2016, Pipher et al. 2016). Replicate studies, and those that examine both annual and within-season variation, therefore remain valuable. Moreover, estimates of both adult and post-fledging survival are few, or lacking entirely, for many grassland birds. There are currently no published estimates of within breeding-season adult survival for Baird's Sparrow or Grasshopper Sparrow.
Several studies of annual adult survival in Grasshopper Sparrow have been conducted in eastern populations (Perkins and Vickery 2001, Balent and Norment 2003), but these rely on mark-recapture techniques, and thus survival estimates are considered apparent because mortality cannot be distinguished from emigration (Lebreton et al. 1992). Research on post-fledging survival is also limited, with no estimates existing for Baird's Sparrow and only a single study of post-fledging survival in Grasshopper Sparrow (Hovick et al. 2011). Only one study to our knowledge has attempted to address the nesting, adult, and post-fledging life-stages simultaneously in a grassland bird species (van Vliet et al. 2020).

To address these knowledge gaps, we conducted a multi-year study to jointly examine the rates and drivers of nesting success (4 years), breeding-season adult survival (3 years), and post-fledging survival (3 years) in single populations of Baird's and Grasshopper Sparrows. We defined adult breeding-season survival as the probability of surviving for 90 days on the breeding grounds and post-fledging survival as the probability of surviving for 20 days, the approximate age of independence. We conducted nest monitoring and radio-telemetry tracking of adult and fledgling birds at two sites in the NGP, one in western North Dakota (2015-2018) and the other in northeastern Montana (2016-2018). Broadly, we hypothesized that there would be variation in rates and drivers among life-history phases, as there are potential differences in predation pressure, vegetation structural requirements, and sensitivity to climate exposure across these stages (Jones and Bock 2005, Low et al. 2010, McCauley et al. 2017, Zuckerberg et al. 2018). We predicted that, overall, greater within-season precipitation would result in more productive range conditions (Barnett and Facey 2016), and thus higher survival across all stages (Conrey et al. 2016), despite some negative effects of extreme precipitation events (Carver et al. 2017). We also predicted that nests of both species would benefit from taller and more dense vegetation cover, given the species' habitat preferences in the region (Lipsey and Naugle 2017) and the potential benefits of concealment. Similarly, we predicted that fledglings of both species would benefit from taller and denser vegetation (e.g., Davis 2011, Hovick et al. 2011), which can provide more cover from predators and weather exposure in the vulnerable early post-fledging phase (Jones and Bock 2005). Finally, we predicted that fledglings of both species would be sensitive to within-season temperature and precipitation, as fledgling grassland birds can be affected by climate conditions (e.g., Adams et al. 2006) and likely have a limited ability to thermoregulate relative to adults. Our study is the first to concurrently examine all three vital rates in breeding populations of these species and provides insight into the effects of habitat conditions across life-history stages.

Study area

We conducted our study at two mixed-grass prairie sites (Fig. 1). We conducted research activities on two pastures at each study site (4 total pastures); pastures ranged in size from 128-177 ha (x̅ = 150.5, SD = 17.6). Private lands support 85% of remaining grassland habitat in North America (North American Bird Conservation Initiative 2013), and much of this land is directly tied to ranching in mixed-grass prairie regions (Lipsey and Naugle 2017).
Therefore, in choosing sites where grazing occurs, we sought to use plots that were reflective of grasslands in the Northern Great Plains (NGP). We selected pastures based on abundance of singing male focal species and feasibility of land access. Both pastures at our North Dakota study site were located on the Little Missouri National Grassland, managed by the U.S. Forest Service, while pastures at our Montana site were located on both private and U.S. Bureau of Land Management properties. Pastures at both sites consisted of flat to moderately rolling hills, small seasonal wetlands, and sparse to patchy shrub cover (e.g., Symphoricarpos occidentalis and Artemisia sp.). Vegetation at sites was predominantly a mixture of non-native, cool-season grasses and native, warm-season, mixed-grass prairie species, as well as a diversity of primarily native forbs. Cool-season grasses included Kentucky Bluegrass (Poa pratensis), Crested Wheatgrass (Agropyron cristatum), and Smooth Brome (Bromus inermis); these three species comprised the majority of non-native plant cover at all sites. Native grasses included Blue Grama (Bouteloua gracilis), Western Wheatgrass (Agropyron smithii), Needle-and-thread Grass (Hesperostipa comata), Prairie June Grass (Koeleria macrantha), and Green Needlegrass (Nassella viridula). Our North Dakota and Montana sites differed substantially in average cover of non-native species (x̅ ND = 49%, SD = 26; x̅ MT = 10%, SD = 19). In addition to our avian focal species, other species of grassland songbird commonly breeding at the sites included Chestnut-collared Longspur (Calcarius ornatus), Sprague's Pipit (Anthus spragueii), Western Meadowlark (Sturnella neglecta), Savannah Sparrow (Passerculus sandwichensis), and Bobolink (Dolichonyx oryzivorus). Domestic cattle (Bos taurus) grazed intermittently at variable stocking rates on all pastures throughout the duration of the study, and although we did not record grazing intensity yearly, we systematically surveyed all pastures twice a season in 2016-2018 to characterize the structure and cover conditions at our sites (Table A1.1; see Data collection). In 2018, one of our Montana study pastures had to be partially shifted because of a severe unscheduled burn the previous fall.

Nesting success

We located and monitored nests (Table 1) of Baird's and Grasshopper Sparrows from May 24th to August 7th, 2015-2018. To maximize focal species sample size, we located nests using rope-dragging and systematic walking (Winter et al. 2003), behavioral observation (Martin and Geupel 1993), and opportunistic discovery while traversing plots during other research activities (e.g., telemetry, vegetation surveys). Because birds are most active early in the morning, and there is evidence that grassland birds are most likely to be on the nest from sunrise to 0900 (see Holmes 2012, Kirkham and Davis 2013), we conducted our nest searching efforts primarily during this period. We did not conduct rope-drag surveys when temperatures were less than 10° C, when grass was excessively wet (enough to soak boots and clothing), or during active precipitation. Upon locating nests, we marked the location with two pin-flags, one 5 m to the north and the other 5 m to the south; we did not place any markers immediately near nest openings, to avoid attracting predators directly to nests. We visited nests every three days, with occasionally longer intervals (e.g., five days) in cases of poor weather.
We used Global Positioning System (GPS; eTrex 10 and eTrex 20; Garmin Ltd., Olathe, KS, USA) units and hand-drawn microhabitat maps with compass bearings to relocate nest entrances during checks. We also took care to avoid creating trails by varying our approaches to the nest (Major 1990). To increase our ability to correctly assign nest fates, we visited nests more frequently (1-2 days) when they were near fledging age. Human visitation frequency does not appear to affect the nesting success of ground-nesting grassland birds (O'Grady et al. 1996, Pietz et al. 2012, Border et al. 2018); thus we felt this approach could improve the accuracy of our nesting success data with little risk. At each visit, we recorded and photographed nest contents, examined nests for evidence of predators or brood parasitism by Brown-headed Cowbirds (Molothrus ater), and aged nestlings based on physiological development (Jongsomjit et al. 2007, Ruth and Kitting 2018). To address nest fate uncertainty (Manolis et al. 2000), we implemented 15-minute fledgling searches during the final visit to every potentially successful nest (at the mean fledging age for the species; Davis 2003 and Jones et al. 2010). During these observation periods, we searched the nest area for fledglings and observed adults for feeding activity. These data helped us to confirm whether nests had fledged or been depredated during the late nestling stage, when exact age at termination could not be known (Manolis et al. 2000). We found that the movements of both Baird's and Grasshopper Sparrow fledglings at our sites were extremely limited during this period (Fig. A2.1-2), reducing the possibility of conflating fledglings from neighboring territories (Streby and Andersen 2013a). We considered nests successful if they fledged at least one host young, failed if they did not, and unknown in cases where nestlings were potentially old enough to fledge but we found no clear evidence of fledging or depredation (see Manolis et al. 2000).

Adult survival

We captured and tracked adult male Baird's and Grasshopper Sparrows from May 14th to August 10th, 2015-2017 (Table 1). We chose to focus our capture efforts on males to avoid disrupting nesting females, as pilot data indicated that capture of nesting females triggered nest abandonment. To capture birds, we identified territorial individuals, set up 1-2 mobile mist-nets where we observed males perching and singing, and played an audio lure at the net location to draw birds in for capture. We conducted target-netting activities from sunrise to 1000 MST, but we did not target-net in temperatures less than 10° C, during precipitation, or in winds greater than ~24 km/h. Following capture, we outfitted individuals with very high frequency (VHF) radio-transmitters (PicoPip Ag379; 0.42 g, ~30-40-day battery life; Lotek Wireless, Seattle, WA, USA), using an elastic leg-loop harness for attachment (Rappole and Tipton 1991). We weighed all birds prior to transmitter attachment (x̅ = 18.1 g, SD = 1.1) to ensure that the transmitter weighed no more than 4% of the individual's total body mass. We also fitted all captured birds with a United States Geological Survey (USGS) federal aluminum band and one or more plastic color bands, and collected standard morphometrics. We briefly observed birds for signs of adverse reaction to harness fit following release (e.g., hampered flight ability).
We tracked all tagged birds daily (sunrise to 1500 MST) using a receiver (Biotrack VHF Receiver; Lotek Wireless) and 3- or 5-element folding Yagi antennae (Advanced Telemetry Systems, Isanti, MN, USA). During tracking, we took care to vary the time of day we tracked a given individual bird, and to circle a suspected bird location before flushing the bird, to avoid chasing it through the grass and influencing the recorded location. We recorded a single location per individual each day. We did not track birds in extreme weather conditions (e.g., lightning, heavy rain, excessive winds). After locating birds, we recorded their status (alive or dead) and their location using hand-held GPS units. We tracked birds until transmitters failed (failure was usually characterized by a weak or intermittent signal prior to disappearance) or until the signal suddenly disappeared (before the expected life of the transmitter). In the latter case, we searched the perimeter of the plot using a 5-element antenna for extended range, but we were rarely successful in relocating birds once we had lost the signal. It is likely these individuals emigrated, given the nomadic life-history of grassland birds. For example, breeding Grasshopper Sparrows are known to disperse up to 9 km within a season (Williams and Boyle 2018). However, we considered the fates of missing individuals unknown, as emigration could not be confirmed. We considered birds dead only when we found physical evidence of mortality (e.g., dead bird, blood on harness, transmitter with chew marks, pile of feathers, transmitter found buried, etc.). In cases where we found an undamaged, clean transmitter and harness, we assumed it had fallen off due to loose fit, or that the bird had been able to remove it by pulling at the elastic with its bill. In some instances, individuals previously fitted with a transmitter were recaptured, confirming that they were alive and had shed their harnesses. We were not able to assess the presence of negative transmitter effects in our study (Barron et al. 2010), as we did not have a control group for comparison. However, we seldom observed unusual behavior in tagged birds during our study. In most cases, individuals displayed good flight ability and resumed territorial behavior after release (e.g., perching and singing, interacting with other males).

Post-fledging survival

We monitored post-fledging survival in Baird's and Grasshopper Sparrows from June 15th to August 7th, 2016-2018 (Table 1). We tagged 1-2 nestlings per nest with VHF transmitters (PicoPip Ag337; 0.29 g, ~20-30-day battery life; Lotek Wireless), depending on the brood size and individual weights, when nestlings were 7-8 days old. We chose this age to help minimize the risk of force-fledging (Berkeley et al. 2007), as our focal species typically fledge at 9-10 days old in the NGP (Davis 2003, Jones et al. 2010). We used the same leg-loop harness attachment as for adults, but with a slightly looser harness fit to allow for growth. We also weighed and applied federal metal bands to all nestlings prior to transmitter attachment (x̅ = 13.6 g, SD = 1.3). We ensured that transmitters weighed no more than 3.0% of the individual's total body mass. In cases where more than two individuals in a nest weighed enough to receive transmitters, we selected individuals to tag by placing all nestlings of adequate weight in a bird bag and drawing at random.
After the birds fledged, we tracked tagged birds daily using the same protocols described for adults, but with additional caution to avoid trampling young fledglings. We considered fledglings dead if they were less than 10 days old when the signal was lost, following Hovick et al. (2011), as our movement data (Fig. A2.1-2) suggested that fledglings at this age were not capable of traveling a distance beyond our antenna range (~300-400 m) from their previous location in a single day. This pattern of limited movement by young fledglings is widely reported in grassland songbirds (Davis and Fisher 2009, Hovick et al. 2011, Small et al. 2015, Young et al. 2019). We therefore assumed they had been carried off by a predator or that the transmitter was damaged during a depredation event. We were not able to compare survival between tagged and untagged birds because of the difficulty involved in tracking untagged fledglings in a grassland environment. However, research on Henslow's Sparrow (Centronyx henslowii) fledglings suggests the negative effects of transmitters are negligible (Young et al. 2019). We also acknowledge that in tracking only individuals of sufficient weight to allow tagging under our permits, our survival estimates may have been biased. Finally, we could not confidently partition causes of mortality in fledglings, as individuals that died of exposure or starvation could also show signs of predation from scavenging activities.

Vegetation surveys and weather data

We characterized the vegetation composition at each nest location using a Daubenmire frame (Daubenmire 1959) centered around the nest to estimate percent cover of live grass, dead grass, forbs, bare ground, and litter. We also estimated the mean height of vegetation contained within the frame. Finally, we estimated visual obstruction readings (VOR) at the nest site from four cardinal directions using a Robel pole at a distance of 4 m and a height of 1 m (Robel et al. 1970). We selected this suite of variables because they are the most commonly assessed and influential vegetation covariates in studies of grassland birds (Fisher and Davis 2010). We conducted vegetation surveys at nest sites within three days of nest termination to best represent conditions at the nest while it was active. We also collected vegetation data at fledgling bird telemetry locations in 2017-2018. Because estimated bird locations were sometimes less precise than nest locations, we estimated percent cover and mean vegetation height over a 5-m radius area around the bird location. The cover types we estimated at these locations included the same categories recorded at nest sites, in addition to an estimate of non-native species cover.

Nesting success

To analyze nesting success, we estimated daily survival rates (DSR) of nests using logistic exposure (Shaffer 2004); we performed the analysis in Program R (R Version 3.6.1; R Core Team 2020) using the lme4 package (Bates et al. 2015). Logistic exposure is a modified form of logistic regression that accounts for length of exposure in survival analyses while also providing a flexible modeling approach in which multiple continuous and categorical variables can be included (Shaffer 2004). We included nests with unknown fates in the analysis, but truncated intervals to the date of last known activity (Manolis et al. 2000). We combined failures from depredation, abandonment, cowbird parasitism, weather, and unexplained causes in the analysis.
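Logistic exposure replaces the standard logit link with one in which the probability of surviving an interval is daily survival raised to the interval length. A minimal sketch of the custom link in R, following the general approach of Shaffer (2004); the data frame nests and its columns are hypothetical:

```r
# "Logistic exposure" link: interval survival = DSR^exposure
logexp <- function(exposure = 1) {
  linkfun  <- function(mu)  qlogis(mu^(1 / exposure))
  linkinv  <- function(eta) plogis(eta)^exposure
  mu.eta   <- function(eta) exposure * plogis(eta)^(exposure - 1) *
                            binomial()$mu.eta(eta)
  valideta <- function(eta) TRUE
  structure(list(linkfun = linkfun, linkinv = linkinv, mu.eta = mu.eta,
                 valideta = valideta, name = "logexp"),
            class = "link-glm")
}

# One row per observation interval: survive coded 1 (nest survived the
# interval) or 0, exposure = interval length in days
fit <- glm(survive ~ date + vor,
           family = binomial(link = logexp(nests$exposure)),
           data   = nests)
```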
In cases of nest abandonment, we assigned failure either to the interval following the last known date of nest activity, or to the interval in which an event suspected of causing the abandonment occurred (e.g., partial predation, extreme weather event). We excluded from analysis nests we thought had been abandoned because of research activity, such as capture of an adult (n = 34); this occurred primarily in 2015 and 2016, when we experimented with tracking adult females. We extrapolated DSR estimates produced with logistic exposure over a period approximating the length of a complete nesting cycle in each species (21 days for Baird's Sparrow and Grasshopper Sparrow; Davis 2003, Jones et al. 2010) to calculate the cumulative probability of nesting success.

We modeled the influence of environmental variables on nesting success with covariates for year (2015-2018), date (days from May 1st), daily precipitation, average precipitation over the week prior to the observation date (hereafter weekly precipitation), daily minimum temperature, daily maximum temperature, and daily average temperature. We intended that weekly precipitation would capture effects of drought when present, while daily precipitation could also reflect severe weather events. We averaged continuous, temporally explicit variables over intervals to best represent conditions during the exposure period being evaluated (see Shaffer 2004). We did not include lagged bio-year climate effects (i.e., effects of the previous year's climate conditions) because we expected that any strong annual trends would be subsumed by year effects. Finally, we modeled a full suite of nest-site vegetation covariates including cover of live grass, dead grass, forbs, litter, bare ground, total grass, total vegetation, mean vegetation height, and VOR. We centered and standardized all continuous variables by subtracting the mean and dividing by the standard deviation to make parameter estimates directly comparable (Schielzeth 2010). We tested for correlation among all continuous variables using Pearson's correlation coefficient and did not include variables in the same model if r > 0.4. We selected this threshold because it has been demonstrated that even low levels of collinearity can bias parameter estimates in multiple regression (Graham 2003).

We constructed our models in a two-step process. First, to select among correlated variables to include in the full analysis, and to compare linear and polynomial terms for continuous variables, we compared global models (consisting of non-correlated variables) with interchanged terms of interest (e.g., one global model with only linear date, one with linear and quadratic date). We tested all continuous predictors for both linear and quadratic effects. We used an information theoretic approach to model selection, selecting the model with the lowest AICc value (Burnham and Anderson 2002). Because site and year were conflated in our study (e.g., in 2015, data were only collected at the North Dakota site, and only in one pasture), we could not include both variables in global models; thus we chose to include year because it was of more interest to our study questions. However, we compared a univariate model with site to a constant survival model and found that, in both species, site models were indistinguishable from the null model by absolute distance (Baird's Sparrow: ΔAICc = 2.00; Grasshopper Sparrow: ΔAICc = 1.02).
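The standardization and collinearity screen described above can be expressed in a few lines of R; the column names here are hypothetical:

```r
vars <- c("date", "precip_daily", "precip_weekly", "temp_min",
          "temp_max", "temp_mean", "veg_height", "vor")  # hypothetical names
nests[vars] <- scale(nests[vars])  # center and standardize (mean 0, SD 1)

# Flag variable pairs exceeding the r > 0.4 threshold
r <- cor(nests[vars], use = "pairwise.complete.obs")
which(abs(r) > 0.4 & upper.tri(r), arr.ind = TRUE)
```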
Following variable selection, we compared all model subsets (including a null constant survival model), evaluating the strength of each model using ΔAICc and AICc weight (wi). We used a conservative threshold of ΔAICc < 2 to determine top models in our comparisons (Burnham et al. 2011). To address model-selection uncertainty, we used the MuMIn package (Barton 2020) to average predictions from a full suite of models (with 95% confidence intervals). However, because the validity of model-averaging as a means of parameter estimation has been criticized (Cade 2015), we report parameter estimates (β) with standard errors (SE) and 85% confidence intervals (see Arnold 2010) from only the top model in which each variable appeared. To avoid the use of variable weights and model-averaged confidence intervals in detecting uninformative parameters (Cade 2015), we considered all variables appearing in the top model set to be informative, but only if 85% confidence intervals did not overlap zero in any top model. This procedure allowed us to filter out uninformative parameters in competitive models (Arnold 2010). We only discuss results and report estimates for variables that fit these criteria. Additionally, if the constant survival model appeared in the top model set, we took this as a lack of evidence for any effect of the variables modeled for that species. Because the unit of analysis in logistic exposure is the number of intervals, not the number of nests (Shaffer 2004), we report interval sample size in all modeling tables.

Adult and post-fledging survival

We used the same logistic exposure and information theoretic modeling approach described for nesting success to evaluate adult and post-fledging survival. Logistic exposure can be used to estimate survival similarly to nesting success (see Streby and Andersen 2013b), and in our case, it had several advantages. First, because the unit of analysis is the exposure interval and not the individual, we were able to include data from individuals with unknown fates by truncating encounter histories. Second, for the same reason, inability to disentangle permanent emigration from mortality in individuals did not directly influence our estimates (Lebreton et al. 1992). Third, we were not restricted to fixed re-sighting intervals and therefore could accommodate cases where individuals were tagged or re-sighted asynchronously. For adults, we extrapolated DSR estimates produced with logistic exposure over a 90-day period to estimate the cumulative probability of an individual surviving the breeding season under given conditions. For fledglings, we used a 20-day period; the exact age of independence is not known for either of our focal species, but a study of a similar species, Henslow's Sparrow, reported independence between 19-21 days post-fledge (Young et al. 2019). Additionally, a 20-day monitoring period was consistent both with the battery life of fledgling transmitters and with our observations of individuals in the field by this age (e.g., capable of strong flight, no longer observed being fed by parents). We modeled the same temporal and climate variables described for nesting success for both adults and fledglings. For fledglings, we also modeled the effect of days post-fledge and vegetation cover. Because vegetation data were only collected at fledgling bird locations in 2017-2018, we conducted two separate analyses of post-fledging survival to make use of the full sample while also examining effects of vegetation.
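A sketch of the model-comparison and extrapolation steps, assuming a previously fitted logistic exposure global model (global_fit, hypothetical) and using the MuMIn package; the extrapolation windows (21, 90, and 20 days) come from the text:

```r
library(MuMIn)

options(na.action = "na.fail")             # required by dredge()
cand <- dredge(global_fit, rank = "AICc")  # all subsets of the global model
subset(cand, delta < 2)                    # top model set (ΔAICc < 2)

top <- get.models(cand, 1)[[1]]            # best-supported model
confint(top, level = 0.85)                 # 85% CIs (Arnold 2010)

# Because covariates were centered, the intercept corresponds to average
# conditions; with exposure = 1 the inverse link is the ordinary logistic
dsr <- plogis(coef(top)["(Intercept)"])
dsr^21  # cumulative nesting success over a 21-d nesting cycle
dsr^90  # cumulative breeding-season adult survival
dsr^20  # cumulative post-fledging survival to ~independence
```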
The first analysis used the full dataset and included all variables described above. The second analysis included only the 2017-2018 data and modeled all of these variables in addition to a suite of vegetation variables, including the previously described cover types (see Vegetation surveys and weather data), extent of non-native vegetation, and vegetation height. Because we did not collect vegetation data at adult bird locations in all years, we were not able to include these variables in the adult analysis. As with nesting success, we could not include variables for year and site in the same global models. However, we evaluated the effect of site in preliminary analysis and found that the site model was not supported.

Nesting success

We located and monitored nests of 152 Baird's Sparrows and 209 Grasshopper Sparrows. Combining data across sites and years, peak nest initiation (back-calculated from hatch date and nestling ages) for Baird's Sparrow is shown in Fig. A3.1A.

Baird's Sparrow

Modeled DSR and cumulative survival were 95% (CI: 89 to 98) and 37% (CI: 14 to 61), respectively, under average conditions (covariates set to mean values). Success did not vary statistically among years (Fig. 2A). Covariates appearing in the top model set included year, nest stage, date, daily precipitation, visual obstruction reading (VOR), live grass cover, forb cover, bare ground cover, and litter cover (Table 2). However, only date (β = -0.43, SE = 0.15, CI: -0.66 to -0.20) and VOR (β = 0.28, SE = 0.16, CI: 0.05 to 0.52) had confidence intervals that never overlapped zero across top models. DSR for Baird's Sparrow declined as the season progressed (Fig. 3A), and date had the strongest influence on nesting success in the species, appearing in all top models (Table 2) and having the largest effect size. Additional variation was explained by VOR, which had a moderate positive effect on DSR (Fig. 3B).

Grasshopper Sparrow

Modeled DSR and cumulative nesting success were 91% (CI: 87 to 95) and 16% (CI: 5 to 33), respectively, and showed little annual variation (Fig. 2A). Nesting success in Grasshopper Sparrow was not influenced by any of the variables we modeled, as the constant survival model (ΔAICc = 1.27) was in the top model set (Table 3) and 85% confidence intervals overlapped zero for all variables modeled.

Adult survival

Over the course of the study, we tagged and monitored 167 adult male Baird's Sparrows and 149 adult male Grasshopper Sparrows. Of the individuals tagged, 49% of Baird's Sparrows and 27% of Grasshopper Sparrows had known fates, while the remaining individuals either left the study area, had a transmitter fail, or appear to have shed their transmitter. Although we did not formally quantify mortality sources, we observed individuals that appeared to have died from weather exposure or illness with no visible injuries (n = 6), individuals that had lacerations or had been plucked and torn apart by predators (n = 12), and in several instances we noted that transmitters were found near or inside ground squirrel (Spermophilina sp.) burrows (n = 3). Additionally, in two cases, transmitters were found in and near what were likely Short-eared Owl (Asio flammeus) pellets.

Modelled survival estimates

Logistic exposure analysis of survival produced DSR estimates of 99% (CI: 99 to 99) and 99% (CI: 98 to 99) for male Baird's and Grasshopper Sparrows under average conditions. Cumulative breeding-season survival estimates for the two species (Fig. 2B) were 78% (CI: 51 to 91) and 74% (CI: 36 to 92), respectively.
Survival was invariant among years, and none of the climate and time-of-season covariates we modeled explained variation in survival; the constant survival model was the top model for both species (Table 4).

Post-fledging survival

We tagged and monitored the survival of 94 fledgling Baird's Sparrows and 62 fledgling Grasshopper Sparrows over the duration of our study. Of these individuals, 95% of Baird's Sparrows and 71% of Grasshopper Sparrows had known fates. The remaining birds were greater than 10 days old when their signals were lost and potentially had the flight capability to leave study pastures, but because they could also have been carried off by predators, their fates were unknown. As with adults, we did not formally attempt to determine the cause of mortality for fledglings. However, we observed fledglings of both species that had likely been depredated or scavenged (n = 40). In these cases, bodies were found injured and dismembered, or transmitters were recovered with blood, feathers, or chew marks. We also recorded instances where transmitters were found buried or in ground squirrel burrows (n = 8), and in two cases, transmitters were tracked after being consumed by Plains Garter Snakes (Thamnophis radix). We also noted many cases where no sign of predation could be found and mortality was likely associated with starvation or exposure (n = 23).

Grasshopper Sparrow

DSR and cumulative survival for Grasshopper Sparrow fledglings under average conditions were 97% (CI: 94 to 99) and 55% (CI: 27 to 76), respectively, and survival was consistent among years (Fig. 2C). As in Baird's Sparrow, variables for year, precipitation, and temperature appeared in top models (Table 5), but only age was influential (Fig. 4B), and survival increased with days post-fledge (β = 2.74, SE = 0.65, CI: 1.89 to 3.77). Results of the analysis of vegetation cover and survival are presented in Table 6.

DISCUSSION

Examining multiple vital rates in populations of management interest can provide a broader view of species demographics and more accurately inform management activities. Our analysis of nesting success, breeding-season adult survival, and post-fledging survival in Baird's and Grasshopper Sparrows in the Northern Great Plains (NGP) revealed differences in response to environmental conditions among vital rates, as well as between species, highlighting the value of multi-stage monitoring in understanding species-habitat relationships. We found that the mixed-grass prairie specialist Baird's Sparrow was generally more responsive to habitat conditions than the more generalist Grasshopper Sparrow. Baird's Sparrow responded to several environmental covariates in nesting success (time of season, vertical vegetation structure) and post-fledging survival (vegetation height, non-native vegetation cover), while Grasshopper Sparrow was responsive to only one environmental covariate (dead grass cover) across all rates estimated. Adult survival of both species was consistent and high relative to existing overwintering survival estimates for the two species, suggesting that adult survival during other parts of the annual cycle may be of greater importance. Finally, cumulative post-fledging survival of Baird's Sparrow was low relative to theoretical demographic thresholds (Cox et al. 2014), indicating that this vulnerable life-history stage is potentially important for the species.

Drivers of vital rates

We found that relationships between survival and environmental conditions varied by vital rate and species.
Time of season was the strongest predictor of nesting success for Baird's Sparrow, which declined as the season progressed, an effect previously reported in the species (Davis 2005, Lusk and Koper 2013, Davis et al. 2016b) and in grassland birds generally (Zimmerman 1984, Grant et al. 2005, Davis et al. 2006, Grant and Shaffer 2012). This pattern may correspond to increases in predator abundance following early-season reproduction (Grant et al. 2005) or increased activity of exothermic predators coupled with warming conditions and longer days through the progression of the breeding season (Burhans et al. 2002). Nests of grassland songbirds in the NGP are subject to predation pressure from a diverse array of nest predators, including small mammals, large mammals, snakes, and avian predators (Pietz et al. 2012, Bernath-Plaisted and Koper 2016). Thus, earlier nest initiation on the breeding grounds may be adaptive to increase nesting success by avoiding predation pressure later in the season (Grant et al. 2005). Alternatively, this effect could have been driven by higher quality of early-arriving individuals (Wheelwright and Schultz 1994, Verhulst and Nilsson 2008). Nesting success of Baird's Sparrow was also partially driven by a positive effect of VOR (visual obstruction reading), matching the species' preference for vertical structure in both habitat and nest-site selection (Dieni and Jones 2003, Davis 2005, Lipsey and Naugle 2017) and further supporting the hypothesis that increased cover can provide concealment from nest predators (Winter 1999, Fondell and Ball 2004, Davis 2005, Klug et al. 2010).

Survival in fledglings of both species was strongly driven by age, with most mortalities concentrated within the first five days post-fledge. This pattern is common in fledgling songbirds (Cox et al. 2014) and specifically in grassland birds (Kershner et al. 2004, Fisher and Davis 2011, Hovick et al. 2011, Young et al. 2019). A likely explanation for this pattern is increased mobility with age; fledgling birds may be better able to evade predators as they develop sustained flight. Age may also act as a proxy for body condition, which can also be an important predictor of survival in fledgling grassland birds (Adams et al. 2006, Suedkamp Wells et al. 2007, Jones et al. 2017). Thus, because ethical and permitting limitations prevented us from placing transmitters on individuals below a certain weight threshold, it is possible that our rates overestimate post-fledging survival to some extent. Conversely, it is also possible that transmitters themselves reduced fledgling survival, creating a negative bias (Barron et al. 2010), though there is some evidence that transmitters have little effect on fledgling grassland birds (Young et al. 2019).

Post-fledging survival in our study was also influenced by vegetation structure and composition. In Baird's Sparrow, survival increased with greater vegetation height. This finding is consistent with other studies of post-fledging survival in grassland bird species and supports the hypothesis that vegetation cover may provide a refuge from predators during a critical life-history period of high mortality (Jones and Bock 2005, Berkeley et al. 2007, Small et al. 2015, Jones et al. 2017).
Predation is typically identified as the largest source of mortality in studies of post-fledging survival in grassland songbirds (Adams et al. 2001, Kershner et al. 2004, Suedkamp Wells et al. 2007, Hovick et al. 2011). Though shading and shelter from severe weather are also plausible benefits of taller vegetation, these mechanisms seem less likely in our study as we found no effect of climate conditions on the post-fledging survival of either species. Post-fledging survival in Grasshopper Sparrow responded negatively to increased cover of dead grass, though the mechanism underlying this effect is less clear. It is possible that stiff dead grass impeded movement of fledglings, or that it provided fewer food resources relative to live vegetation. Finally, we also found that survival of fledgling Baird's Sparrow was negatively affected by cover of non-native plant species. Non-native plants have been linked to reduced food availability in grassland systems (Flanders et al. 2006, Hickman et al. 2006), and although fledglings were still being fed by parents during the monitoring period in our study, a reduction in local food resources could have hampered provisioning efficiency of adults. Negative effects of non-native vegetation on vital rates have been reported for several grassland-obligate songbirds in the Great Plains (Lloyd and Martin 2005, Fisher and Davis 2011, Ludlow et al. 2015, Davis et al. 2016b), but more mechanistic studies are needed to understand and mitigate these apparent deleterious effects.

Species differences

Generally, we found many more relationships between vital rates and the environmental covariates tested for Baird's Sparrow than for Grasshopper Sparrow. The only influential covariates we found for Grasshopper Sparrow affected fledgling survival, where survival declined with dead grass cover and increased with fledgling age. There was no effect of time of season on the nesting success of Grasshopper Sparrow (as there was for Baird's), a surprising finding as nests of both Baird's and Grasshopper Sparrows are likely preyed upon by a similar predator community. Given the similarity in nest structure and habitat preference between the two species, it is possible that heightened predation at Grasshopper Sparrow nests was the result of behavioral differences between the two species. We observed that Grasshopper Sparrows were often more conspicuous at the nest site, perching and chipping when observers approached the nest area, and thus perhaps more likely to attract predators. Conversely, Baird's Sparrows were often quiet or not visible during nest checks, particularly early in the nesting cycle. Once again, unlike Baird's Sparrow, success of Grasshopper Sparrow nests was also unaffected by vegetation height. One explanation for this discrepancy could be that, although general habitat preferences of Baird's Sparrow and Grasshopper Sparrow in the NGP are similar (Lipsey and Naugle 2017), Baird's Sparrow has been shown to select more strongly for increased vegetation height at the nest site than Grasshopper Sparrow (Dieni and Jones 2003). We observed this difference in nest-site selection in our study as well, though the difference was marginal (Guido 2020). Interestingly, the lack of response to vegetation characteristics in Grasshopper Sparrow relative to Baird's Sparrow was also apparent in the post-fledging phase. We did not observe a negative effect of non-native cover on survival of fledgling Grasshopper Sparrows, as we did in Baird's Sparrow.
It is possible that feeding niches of the two species are subtly different, affecting their ability to exploit non-native food resources. For example, a study of the overwintering diet of Baird's Sparrow and Grasshopper Sparrow found that the two species had different seed preferences and food handling times as a result of differing bill morphologies (Titulaer et al. 2018). More broadly, inconsistent response of vital rates to environmental conditions between the two species could be explained in part by divergent life-history strategies. Baird's Sparrow exhibits more of the common specialist characteristics of the two species, occupying a smaller geographic range (Fig. 1; BirdLife International 2016), showing less varied habitat use (Green et al. 2002, Vickery et al. 1996), and maintaining lower population numbers (Partners in Flight 2020). It may be that Baird's Sparrows are simply more sensitive to environmental conditions than Grasshopper Sparrows as a consequence of inherent life-history differences. Specialist species are highly successful when operating within ideal habitat conditions, but they have a limited ability to adapt to sub-optimal or marginal environments (Correll et al. 2019). For example, it is possible that Baird's Sparrow was less successful than Grasshopper Sparrow at foraging in habitat partially degraded by non-native plant species, resulting in higher fledgling mortality for Baird's Sparrow. Conversely, Baird's Sparrow could be better adapted to select highly concealed nest sites in mixed-grass prairie than Grasshopper Sparrow, which uses shorter vegetation structure in many parts of its range (Vickery 1996), perhaps explaining the lack of positive response to vertical structure in Grasshopper Sparrow. Regardless of mechanism, inconsistent effects of vegetation structure on vital rates are frequently reported in mixed-grass prairie songbird communities (Davis 2005, Koper and Schmiegelow 2007, Kerns et al. 2010, Lusk and Koper 2013). This variation in response to structure among species reflects the unique microhabitat needs of grassland songbirds and serves as a reminder that grassland songbird species can be poor management surrogates for one another (Davis 2005, Derner et al. 2009, Lipsey and Naugle 2017).

Vital rates across life-stages

Nesting success, the most familiar and commonly measured vital rate during the breeding season, fell within established ranges in the Great Plains for Baird's Sparrow (17-43%; Davis and Sealy 1998, Davis 2003, Jones et al. 2010, Lusk and Koper 2013, Davis et al. 2016b) and Grasshopper Sparrow (14-53%; Berthelsen and Smith 1995, Jones et al. 2010, Hovick et al. 2012, Davis et al. 2016b). It is worth noting, however, that our estimate of Grasshopper Sparrow nesting success (16%) fell on the extreme low end of reported estimates for the species, and yet success in that species was unresponsive to all environmental covariates we analyzed. Thus, additional exploration into the causes of low nesting success for this species in the NGP may be of value. In contrast to nesting success, few studies have isolated adult survival on the breeding grounds in North American songbirds, making our results unique but also difficult to contextualize. Several mark-recapture studies examining annual survival in grassland songbird species such as Bobolink, Savannah Sparrow, and Florida Grasshopper Sparrow report annual apparent survival typically between 40-60% (Perkins and Vickery 2001, Fletcher et al. 2006, Perlut et al. 2008a, 2008b).
However, our results are not directly comparable to these estimates, as our rates are confined to the breeding-season period. Additionally, telemetry studies are better able to disentangle mortality from detectability than mark-recapture studies, which often suffer low recapture rates driven by poor site fidelity in grassland birds (Balent and Norment 2003, Fletcher et al. 2006). It is also important to note we were only able to track male birds, which may have biased our estimates. Male survivorship is sometimes higher than female survivorship in grassland birds, and in songbirds generally (Perlut et al. 2008a, Low et al. 2010). This difference is likely a consequence of mortality incurred by females while on the nest. Nonetheless, our adult survival estimates for the breeding season were notably higher and more consistent than adult survival rates estimated for these species during other parts of their annual cycle (range 4-32%; Macías-Duarte et al. 2017), where winter survival for both species was influenced by environmental factors including precipitation and vegetative cover. Increasingly, there is evidence that substantial adult mortality may occur on the wintering grounds or during migration for some species (Hostetler et al. 2015). Studies of other migratory songbirds breeding in North America have shown that adult mortality is often highest during the migration period (Sillett and Holmes 2002, Rushing et al. 2017). Adult survival during the non-breeding periods of the annual cycle may therefore be of equal or greater importance to overall population growth than breeding-season survival for our focal species. Post-fledging survival has also been little studied in our focal species, but a growing number of studies examining survival of grassland songbird species during this critical life-history phase have been conducted over the past two decades. Our estimate of post-fledging survival in Baird's Sparrow (25%) was similar to rates reported for closely related species such as Grasshopper Sparrow (21%; Hovick et al. 2011), Henslow's Sparrow (25%; Young et al. 2019), and Savannah Sparrow (21-35%; van Vliet et al. 2020), though our own estimate of post-fledging survival in Grasshopper Sparrow was much higher (55%). However, survival of fledgling Baird's Sparrow in our study was also lower than existing estimates for many other grassland obligates such as Dickcissel (Spiza americana; 56%; Suedkamp Wells et al. 2007), Sprague's Pipit (29%; Fisher and Davis 2011), Lark Bunting (27-37%; Adams et al. 2006), and Eastern Meadowlark (Sturnella magna; 63-69%; Kershner et al. 2004, Suedkamp Wells et al. 2007). Further, survival estimates for Baird's Sparrow fell well below the 40% threshold theoretically necessary to maintain populations without unrealistically high survival during other demographic stages (Cox et al. 2014). Therefore, we suggest that post-fledging survival may be a management consideration for Baird's Sparrow and should be examined more frequently in conjunction with nesting success. Monitoring multiple vital rates in avian populations of interest is critical not only for management purposes (Fletcher et al. 2006, Perlut et al. 2008b, van Vliet et al. 2020) but also for the creation of accurate population models (Streby and Andersen 2011).
While informally comparing vital rates across life-history stages may be the first step in assessing overall population limitation for a species, evaluating the relative impact of these seasonal vital rates on population trajectory through full-annual-cycle population models is necessary to fully understand limitation (Hostetler et al. 2015). Increasingly, avian researchers have shown the importance of assessing demographics across the full spatial and temporal life cycles of species, and the potential for carryover and interaction between these stages (Latta et al. 2016, Rushing et al. 2017). Integrated population models (IPMs) are a powerful tool for identifying limiting life-stages and geographies, and for making accurate population predictions with respect to broad changes in land use and climate (Ahrestani et al. 2017, Zhao et al. 2019). Importantly, for habitat-limited species like grassland birds, such models can also be used to identify critical habitat areas (Grand et al. 2019). Although one appealing aspect of IPMs is the ability to estimate latent parameters for which data are lacking, such as survival during the migration period (Ahrestani et al. 2017), such models require large amounts of data from various life-stages. Therefore, accurate data describing fundamental vital rates across multiple life-history phases of species are a critical component of both management-oriented and analytical conservation efforts.

CONCLUSION

While demographic monitoring of songbirds has traditionally focused on single vital rates (e.g., nesting success), more comprehensive approaches may yield greater insight into population dynamics (Rushing et al. 2017, Wilson et al. 2018). Our study is the first to simultaneously monitor nesting success, breeding-season adult survival, and post-fledging survival for two declining grassland songbird species in the NGP. We found that Baird's Sparrow demographics were more responsive to environmental conditions than those of Grasshopper Sparrow, for which habitat covariates had little impact on vital rates. Both nesting success and post-fledging survival in Baird's Sparrow increased with vertical vegetation structure, and post-fledging survival in Baird's Sparrow was also negatively influenced by non-native plant cover. Adult survival on the breeding grounds for both species was high and invariant relative to overwintering survival in the same species, indicating that breeding-season adult survival in these populations may be of less conservation significance. By contrast, post-fledging survival in Baird's Sparrow was low, suggesting that management of juveniles may be a priority for further research and monitoring efforts. We suggest combining these data with other datasets from the non-breeding grounds in full-annual-cycle models to formally compare seasonal vital rates, with the ultimate aim of developing conservation goals inclusive of the entire life cycle of these declining species.
Design of a Hybrid Railway Power Conditioner with Co-phase Power Supply System

With the widespread use and rapid development of electric locomotives, the negative-sequence and phase-separation problems of traction networks have become increasingly prominent. This paper proposes a hybrid railway power conditioner (H-RPC) for co-phase power supply and compares it with the traditional RPC system. The topology and compensation principle of the H-RPC are analyzed, and the design of its key parameters is given. The analysis shows that the H-RPC achieves a better compensation effect than the RPC; finally, the compensation performance of the H-RPC is verified by simulation.

1 Introduction

With the maturity of China's high-speed rail technology, DC electric locomotives are gradually being replaced by AC electric locomotives. Because the line-side converter stage of an AC locomotive generally adopts PWM modulation, the power factor on the network side is generally high. However, as the traction capacity and running speed of locomotives increase significantly, increasingly serious negative-sequence and phase-separation problems in the traction network have emerged. To address these problems comprehensively, phase-sequence rotation in the wiring scheme can balance the single-phase asymmetry of the load, but the remaining electric phase-separation links limit the further development of railways toward high speed and heavy load [1]. Among active compensation devices, hybrid active filters are an effective measure for harmonic suppression [2-5], but they cannot effectively compensate negative-sequence currents. A static var compensator (SVC) can provide both reactive power compensation and negative-sequence compensation, but the required capacity is large [6-8]. The railway power conditioner (RPC) proposed by Japanese scholars was the first to link the two power-supply arms into one whole [9-11], allowing active power to be transferred between the two arms. Although it offers excellent compensation performance and stability, its investment cost is high. Against this background, this paper proposes a new hybrid RPC (H-RPC) for co-phase power supply based on an impedance-balanced transformer. The paper first briefly discusses the topology and port wiring of the H-RPC, then theoretically analyzes its negative-sequence compensation principle. On this basis, an optimal design method for the LC/L parameters under fluctuating load is given, and finally the effectiveness of the system's compensation is verified by simulation.

2 Topology

The topology of the H-RPC is shown in Fig. 1. The 110 kV (or 220 kV) grid voltage is stepped down by the traction main transformer to 27.5 kV to supply the locomotive. Compared with the traditional RPC, the H-RPC has the following significant features: 1) The voltage supplied by the H-RPC to the locomotive is the difference between the two side-port voltages Uα and Uβ of the impedance-balanced transformer (i.e., the ED port voltage in Fig. 1 is 27.5 kV, with Uα(EF) = Uβ(DF) = 27.5/√2 kV); the neutral point F is not connected to the two converters, so the load currents of the α and β phases are equal in magnitude and opposite in direction. 2) The β phase of the H-RPC is connected to the power-supply arm through the LC coupling branch. The parameters of the H-RPC must be designed with the load conditions in mind. The RPC topology and compensation principle have been widely reported in the literature and are not described here [9-11].
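As a quick numeric check of the port-voltage relation just stated, the short Python sketch below assumes, as the 90° current phase shift reported in the simulation section suggests, that the two arm voltages are equal in magnitude and 90° apart; their difference across the ED port then comes out to 27.5 kV.

import cmath, math

# Phasor check: two equal-magnitude arm voltages 90 degrees apart give a
# port voltage sqrt(2) times larger, i.e., 27.5/sqrt(2) kV per arm.
U = 27.5e3 / math.sqrt(2)             # arm port voltage magnitude, V
U_alpha = cmath.rect(U, 0.0)          # alpha-arm phasor (reference)
U_beta = cmath.rect(U, -math.pi / 2)  # beta arm assumed to lag by 90 degrees
print(abs(U_alpha - U_beta))          # ~27500 V across the ED port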
3 Compensation principle

The compensation principle of the H-RPC is shown in Fig. 2. The system equalizes the active components of the two arms by transferring the active component of the load current I L between the α and β phases, while compensating the inductive reactive power of the α phase and the capacitive reactive power of the β phase.
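A minimal numeric sketch of this balancing idea follows; it assumes the phasor geometry implied by the text (Uα leading Uβ by 90°, the load across the ED port, and the load current lagging the port voltage by the power-factor angle θ), and the numeric values are illustrative assumptions rather than data from the paper.

import math

I_L = 400.0              # load current magnitude, A (assumed)
theta = math.acos(0.95)  # power-factor angle for lambda = 0.95

# Active (in-phase) components of the load current seen by each arm:
# the ED port voltage sits 45 degrees from each arm voltage, so the load
# current makes angles of (45deg - theta) and (45deg + theta) with them.
I_alpha_p = I_L * math.cos(math.radians(45) - theta)
I_beta_p = I_L * math.cos(math.radians(45) + theta)

# Active current the conditioner must shift so the arms' active
# components become equal (the |dIp/2| term discussed below).
dIp_half = abs(I_alpha_p - I_beta_p) / 2
print(f"transfer per arm: {dIp_half:.1f} A")  # zero only when theta = 0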
Returning to Fig. 2, the port feeder current is finally compensated into a balanced form. Two points should be noted. First, because of the power-factor angle θ, the active current components I αp (OA) and I βp (OB) of the load current I L in the α and β phases are not equal; |ΔIp/2| = ||I αp| − |I βp||/2 is non-zero most of the time, so the H-RPC must transfer active power most of the time. Second, the feeder port voltage is neither U α nor U β but their difference U αβ.

Figure 2. Compensation principle.

As can be seen from Fig. 2, the relationship between the angle φ α (the acute angle between the α-phase port voltage U α and the compensation current I αc) and the load power factor λ = cos θ is given by equation (1), with the auxiliary quantities defined in equation (2).

4 Parameter design

Since the designs of the α and β phases are similar, only the α-phase parameter design is discussed here.

4.1 Design principles

From Fig. 1 and Fig. 2, the converter port-voltage phasor diagram shown in Fig. 3 is readily obtained. According to Fig. 3, when U αL ⊥ U αc, U αc reaches a minimum; then U αL = U α sin φ α, and the L coupling reactance X αL must satisfy

X αL = U α sin φ α / I αc. (4)

From equations (1) to (4), X αL is related to the load power factor λ and the load current I L. The parameter design principle of this paper is to minimize the converter port voltage when the locomotive is heavily loaded (i.e., I L = I Lmax). On the one hand, this reduces the apparent power the converter must output, lowering cost and losses and improving economic efficiency; on the other hand, it improves the safety and reliability of traction system operation. Fig. 4 shows five representative compensation currents, denoted I αc1, I αc2, I αc3, I αc4, and I αc5. Among them, I αc1, I αc3, and I αc4 are compensation currents when φ α = φ αmin, representing the boundary conditions of the compensation current; I αc2 and I αc5 are compensation currents when φ α > φ αmin, representing the general case. Analyzing these five compensation currents is therefore of general significance. In addition, OG, OH, OF, OB, and OC in the figure correspond to the voltages U αL1, U αL2, U αL3, U αL4, and U αL5 on the L coupling reactance, respectively, and AG, AH, AF, AB, and AC correspond to the α-phase converter port voltages U αc1, U αc2, U αc3, U αc4, and U αc5, respectively. Since φ α increases with λ, the load power factors corresponding to the compensation currents I αc1, I αc3, and I αc4 are the lowest in Fig. 4.

4.2 Parameter design

According to the design principle above, X αL should minimize the converter port voltage when the locomotive is heavily loaded (i.e., I L = I Lmax). The load current I Lmax has two corresponding compensation currents in Fig. 4, I αc4 and I αc5; from Fig. 3, AB and AC are the minimum converter port voltages corresponding to I αc4 and I αc5, respectively. Two cases are discussed: 1) If X αL is determined by BO in the ΔABO corresponding to AB, then when the compensation current becomes I αc5, the converter port voltage becomes AD, and AD < AB. Note that when φ α increases (i.e., when λ increases), if X αL is determined using BO (ΔABO), the converter port voltage will not exceed AB.
2) If X αL is determined by CO (ΔACO) corresponding to AC, then when the compensation current becomes I αc4, the converter port voltage becomes AM, and clearly AM > AB. In this case the converter output voltage is too high and the system is over-compensated, which is unfavorable for converter operation. Summing up the two cases, X αL should be determined according to BO in the ΔABO corresponding to AB. Equation (4) can then be rewritten as

X αL = U α sin φ αmin / I αcM, (5)

where I αcM is the maximum effective value of the compensation current, satisfying I αcM = ε αmax I Lmax, and φ αmin is the minimum value of the acute angle between the compensation current I αc and U α.

5 Simulation study

To verify the correctness of the proposed H-RPC, we built a simulation model and selected three different loads to represent the characteristics of the locomotive load: light load (S = 4 MV·A, λ = 0.98), medium load (S = 8 MV·A, λ = 1), and heavy load (S = 15 MV·A, λ = 0.95). The impedance of the coupled branch of the RPC is 19 Ω, powered by the α-phase power-supply arm; other system parameters are shown in Tab. 1. With a load power of 15 MV·A and a power factor of 0.95, Fig. 5 and Fig. 6 show the waveforms of the system's three-phase network currents, the output currents of the secondary-side ports of the main transformer, and the voltage and current unbalance on the DC side before and after (0.1 s) the H-RPC and RPC are put into operation, respectively. After the H-RPC or RPC is put into operation, the three-phase network currents tend to be symmetrical sine waves, the power factor is close to 1, and the secondary-side port output currents i α and i β are equal in amplitude with a phase difference of 90°; power quality is significantly improved, so both the H-RPC and the RPC achieve satisfactory compensation under the same load conditions. However, comparing Fig. 5 and Fig. 6 (c) and (d), after the RPC is put into operation the three-phase current unbalance on the network side drops to 2.1% and the DC-side voltage stabilizes at the given value of 45 kV, whereas after the H-RPC is put into operation the three-phase current unbalance on the network side drops to 0.6% and the DC-side voltage stabilizes at 23 kV. This shows that, for the same compensation task, the H-RPC compensates better under heavy-load conditions, verifying the correctness of the parameter design method proposed above.

6 Conclusion

This paper proposes a new system (H-RPC) based on an impedance-balanced transformer. The topology of the H-RPC and its compensation principle are analyzed in detail, and a design method for the key parameters of the coupling branch is given. The system fully exploits the potential of the traction main transformer and the hybrid compensation branch, giving the H-RPC a better compensation effect than the RPC. Finally, the correctness of the system's compensation effect is verified by simulation.
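As a closing worked illustration of the design rule in equation (5), the Python sketch below computes a coupling reactance under the reconstructed form X αL = U α sin φ αmin / I αcM. The values of ε αmax and φ αmin are illustrative assumptions (they are not given in this excerpt), while the arm voltage and heavy-load current follow from the 27.5/√2 kV port voltage and the 15 MV·A heavy-load case.

import math

U_alpha = 27.5e3 / math.sqrt(2)  # alpha-arm port voltage, V
I_Lmax = 15e6 / 27.5e3           # heavy-load current from S = 15 MV*A, A
eps_max = 0.6                    # assumed maximum ratio I_acM / I_Lmax
phi_min = math.radians(30)       # assumed minimum angle between I_ac and U_a

I_acM = eps_max * I_Lmax         # maximum rms compensation current, A
X_aL = U_alpha * math.sin(phi_min) / I_acM
print(f"X_aL = {X_aL:.1f} ohm")  # same order as the 19-ohm RPC branch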
Endothelin-1 promotes epithelial–mesenchymal transition in human chondrosarcoma cells by repressing miR-300

Chondrosarcoma is a malignant tumor of mesenchymal origin predominantly composed of cartilage-producing cells. This type of bone cancer is extremely resistant to radiotherapy and chemotherapy. Surgical resection is the primary treatment, but it is often difficult and not always practical for metastatic disease, so more effective treatments are needed. In particular, it would be helpful to identify molecular markers as targets for therapeutic intervention. Endothelin-1 (ET-1), a potent vasoconstrictor, has been shown to enhance chondrosarcoma angiogenesis and metastasis. We report that ET-1 promotes epithelial–mesenchymal transition (EMT) in human chondrosarcoma cells. EMT is a key pathological event in cancer progression, during which epithelial cells lose their junctions and apical-basal polarity and adopt an invasive phenotype. Our study verifies that ET-1 induces the EMT phenotype in chondrosarcoma cells via the AMP-activated protein kinase (AMPK) pathway. In addition, we show that ET-1 increases EMT by repressing miR-300, which plays an important role in EMT-enhanced tumor metastasis. We also show that miR-300 directly targets Twist, which in turn results in a negative regulation of EMT. We found a highly positive correlation between ET-1 and Twist expression levels as well as tumor stage in chondrosarcoma patient specimens. Therefore, ET-1 may represent a potential novel molecular therapeutic target in chondrosarcoma metastasis.

INTRODUCTION

Chondrosarcoma, the second most common type of bone cancer, is a heterogeneous group of malignancies characterized by the production of cartilage matrix. Chondrosarcomas can be classified into three histologic grades: grade I (low-grade), grade II (intermediate-grade), or grade III (high-grade). The higher the grade, the more likely the tumor is to metastasize to other areas of the body. Although high-grade tumors develop in only approximately 5-10% of chondrosarcoma patients, these aggressive tumors remain the major cause of death [1,2]. Thus, metastasis is a major obstacle that must be overcome for the successful treatment of chondrosarcoma. Exploring the molecular basis of metastasis may help to improve the early detection, prevention, intervention, and prognostic evaluation of chondrosarcoma. Secreted proteins are responsible for crosstalk among cancer cells and may facilitate the progression of metastasis, particularly within the steps of epithelial–mesenchymal transition (EMT), migration, and invasion [3-6]. Endothelin-1 (ET-1) is a potent vasoconstrictor and the most abundantly and widely expressed member of the endothelin family of proteins (ET-1, ET-2, and ET-3). Aberrant ET-1 is implicated in the pathobiology of a wide range of human tumors [7]. ET-1 acts as a survival factor protecting against apoptosis via the endothelin A receptor (ET A R) [8] or ET B R [9] in an autocrine/paracrine manner in several different types of tumor cells. An association between ET-1 and various secreted factors or matrix proteins has been reported to play an important role in tumor progression and metastasis [10], while other research has demonstrated that the ET-1/ET A R autocrine pathway drives EMT in ovarian tumor cells by inducing an invasive phenotype [11]. These findings suggest that ET-1 induces the EMT process and may represent a novel target for therapeutic intervention in tumor angiogenesis and metastasis.
The metastatic process consists of distinct steps, including tumor growth, angiogenesis, tumor cell detachment, EMT, survival within blood and lymphatic vessels and embolization, extravasation, mesenchymal–epithelial transition (MET), formation of micrometastases and, finally, growth of macrometastases. EMT increases the metastatic and invasive potential of tumor cells. Downregulation of epithelial markers such as cytokeratin and E-cadherin and upregulation of mesenchymal markers such as vimentin and N-cadherin characterize the EMT process. Usually, inhibition of E-cadherin expression leads to induction of N-cadherin expression, which has been associated with tumor invasiveness [6]. Twist, together with other factors such as TGF-β, Snail, Slug, and Sip1, has been shown to play a regulatory role in EMT [6]. MicroRNAs (miRNAs) are small, endogenous, evolutionarily conserved non-coding RNAs. It is estimated that up to 3% of the human genome codes for miRNA sequences. MiRNAs are involved in numerous biological processes, including cell growth, development, differentiation, proliferation, and death. They bind to complementary sequences in the 3′ untranslated regions (3′UTRs) of their target mRNAs, resulting in degradation or blocking of gene translation. Studies have demonstrated that miRNAs modulate the metastatic process in many tumors [12]. Recently, miRNA microarray analysis has highlighted differential expression of miRNAs between mesenchymal-like cancer cells and epithelial-like cancer cells [13,14]. Remarkably, miR-300 was down-regulated in cancer cells that underwent EMT compared with miR-300 expression in carcinoma cells of typical epithelial phenotype, indicating that miR-300 may affect EMT. This study found that ET-1 promotes EMT in chondrosarcomas by inhibiting miR-300 via the AMP-activated protein kinase (AMPK) signaling pathway. This work provides a novel insight into the mechanism of ET-1 in metastasis of human chondrosarcoma cells.

RESULTS

ET-1 promotes EMT in human chondrosarcoma cells

ET-1 has been implicated in the angiogenesis and metastasis of human chondrosarcoma cells [15,16]. To investigate the effects of ET-1 on chondrosarcoma cell migration, JJ012 and SW1353 cells were treated with different concentrations of ET-1. As shown in Figure 1A-1C, ET-1 induced wound healing (Figure 1A), migration (Figure 1B), and invasion (Figure 1C) of chondrosarcoma cells in a dose-dependent manner. The essential features of EMT in the context of tumor progression are enhanced cell migration and invasion [17,18]. To examine whether ET-1 is required for EMT in chondrosarcoma, the chondrosarcoma cell lines were treated with ET-1. Induction of EMT after ET-1 treatment was demonstrated by a shift from expression of an epithelial marker (E-cadherin) to mesenchymal markers (N-cadherin and vimentin) (Figure 1D-1F). To further clarify whether ET-1 is associated with migration activity and EMT in chondrosarcoma, highly migratory JJ012(S10) cells were selected by Transwell assay. JJ012(S10) cells showed higher migration (Figure 2A).

ETRs are involved in ET-1-induced EMT in chondrosarcoma

ET-1 acts through two distinct subtypes of G-protein-coupled receptors (i.e., ET A and ET B ) [19,20]. Therefore, we hypothesized that the ET receptors may be involved in ET-1-induced EMT and cell migration in chondrosarcoma.
Pretreatment of chondrosarcoma cells with the ET A R antagonist BQ123 and the ET B R antagonist BQ788 abolished ET-1-induced wound healing (Figure 3A), migration (Figure 3B), and invasion (Figure 3C). We further examined whether ET-1 has the ability to trigger activation of EMT-related markers via the ETRs. Our results show that ETR inhibitors reverse ET-1-induced changes in expression of EMT markers (Figure 3D-3F). These data suggest that ET-1 promotes EMT in chondrosarcoma via the ETRs.

Twist is required for ET-1-increased EMT and cell migration in human chondrosarcoma cells

Previous studies have indicated that Twist promotes the initiation of EMT [21,22]. We therefore hypothesized that Twist mediates the effects of ET-1. ET-1 treatment induced a significant change in cell migration (Figure 4C) and invasion (Figure 4D) as well as EMT (Figure 4E & 4F), all of which were drastically attenuated in the presence of Twist siRNA. Twist therefore plays a critical role in ET-1-induced EMT and cell migration.

The AMPK signaling pathway is involved in ET-1-induced EMT and cell migration

AMPK has been shown to regulate human chondrosarcoma metastasis [23,24]. We therefore investigated whether AMPK mediates ET-1-induced EMT and migration of chondrosarcoma cells. Transfection of chondrosarcoma cells with AMPK-specific siRNA (AMPKα1 or AMPKα2 siRNA) abolished ET-1-induced cell migration (Figure 5A) and invasion (Figure 5B). Moreover, AMPK-specific siRNA reversed ET-1-induced EMT (Figure 5C-5E). Subsequently, we directly measured AMPK phosphorylation in response to ET-1 and found that stimulation of cells with ET-1 increased phosphorylation of AMPK in a time-dependent manner (Figure 5F). These data suggest that AMPK activation is involved in ET-1-induced cell migration and EMT in human chondrosarcomas.

ET-1 induces Twist expression by inhibiting miR-300 in chondrosarcomas

Recent evidence has highlighted the role played by miRNAs in modulating the metastatic process in solid tumors [25]. Many studies have subsequently been conducted, and a large number of miRNAs have been found to correlate with the EMT process [26]. We next used three online computational algorithms (TargetScan, miRanda, and miRWalk) to explore candidate miRNAs that target Twist. The results indicate that miR-300 targets the 3′-untranslated region (UTR) of Twist mRNA. We found that miR-300 expression decreased in a dose-dependent manner after ET-1 treatment (Figure 6A). When we transfected chondrosarcoma cells with a miR-300 mimic and then treated them with ET-1, the miR-300 mimic but not the control miRNA abolished ET-1-induced Twist expression and EMT (Figure 6D & 6E). We also confirmed the role of miR-300 in cell migration by targeting Twist: the miR-300 mimic inhibited ET-1-induced migration (Figure 6B) and invasion (Figure 6C). To elucidate whether miR-300 specifically targets the Twist 3′UTR, we constructed luciferase reporter vectors harboring the wild-type 3′UTR of the Twist mRNA (WT-Twist-3′UTR) and mismatches in the predicted miR-300 binding site (MT-Twist-3′UTR; Figure 7A). These vectors were then transfected into JJ012 cells treated with various concentrations of ET-1. As shown in Figure 7B, ET-1 decreased luciferase activity in the WT-Twist-3′UTR plasmid but not in the MT-Twist-3′UTR, indicating that miR-300 directly represses Twist protein expression via binding to the 3′UTR of human Twist.
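The WT/MT reporter comparison rests on seed-site complementarity: a miRNA represses a transcript only if the reverse complement of its seed region occurs in the 3′UTR. The toy Python sketch below illustrates that logic; the miRNA and UTR sequences are hypothetical stand-ins, not the real miR-300 or Twist sequences, which are not given in this excerpt.

# Toy illustration of the WT vs. mutant 3'UTR reporter logic.
def seed_site(mirna: str) -> str:
    """Reverse complement of the seed region (bases 2-8) of a miRNA."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]
    return "".join(comp[b] for b in reversed(seed))

mirna = "UAUACAAGGGCAGACUCUCU"                        # hypothetical sequence
wt_utr = "AAGC" + seed_site(mirna) + "UUAG"           # WT UTR carries the match
mt_utr = wt_utr.replace(seed_site(mirna), "GGGGGGG")  # mutated binding site

print(seed_site(mirna) in wt_utr)  # True  -> luciferase repressed
print(seed_site(mirna) in mt_utr)  # False -> repression lost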
In addition, BQ123, BQ788, and the AMPK inhibitors Ara A and compound C reversed ET-1-inhibited miR-300 expression (Figure 7C) and WT-Twist-3′UTR luciferase activity (Figure 7D). Furthermore, AMPKα1 or α2 siRNA also reversed ET-1-inhibited WT-Twist-3′UTR luciferase activity (Figure 7E). These data indicate that miR-300 directly represses Twist expression via binding to the 3′UTR of human Twist through ETR and AMPK signaling.

ET-1 expression is positively correlated with Twist expression in resected chondrosarcoma specimens

To determine the clinical significance of ET-1 and Twist in patients with chondrosarcoma, we performed an immunohistochemical (IHC) assay using a tissue microarray to compare the expression of ET-1 and Twist in normal cartilage and different grades of chondrosarcoma. Representative examples of IHC staining for ET-1 and Twist in normal cartilage and histopathologically different grades of chondrosarcoma tissues are shown in Figure 8A. The expression of ET-1 and Twist increased significantly with tumor progression (Figure 8B & 8C). In addition, Pearson's correlation revealed a significantly positive correlation between ET-1 expression and Twist (r² = 0.6238, P < 0.0001) (Figure 8D). Higher levels of ET-1 expression were found in tumor specimens and were positively correlated with Twist expression in chondrosarcomas.

DISCUSSION

While the advent of effective systemic chemotherapy has dramatically improved long-term survival in other primary malignant bone tumors, such as osteosarcoma and Ewing's sarcoma, chondrosarcoma continues to have a poor prognosis due to the limited effectiveness of adjuvant therapy [1]. Chondrosarcoma shows a predilection for metastasis to the lungs. Much has been learned about the well-studied process of EMT during the malignant progression of chondrosarcoma. It is therefore important to explore potential targets for preventing the occurrence of EMT in chondrosarcoma. This study describes how ET-1 enhances the expression of Twist in human chondrosarcoma cells and subsequently increases EMT and metastasis. In addition, ET-1-induced EMT and tumor metastasis are mediated by downregulation of miR-300 through the ETRs and the AMPK pathway. Considerable evidence suggests that tumor cells expressing aberrant levels of ET-1 facilitate tumor development and progression. Our previous study indicated that ET-1 facilitates oncogenesis in human chondrosarcoma by increasing cell migration via the matrix metalloproteinase (MMP) family and cyclooxygenase (COX)-2 overexpression [15,16]. We have also previously demonstrated that ET-1 facilitates tumor metastasis and tumorigenesis by mediating angiogenesis in human chondrosarcomas [27]. However, the molecular mechanisms of ET-1-regulated EMT in human chondrosarcomas are not well characterized. The present study reveals that ET-1 signaling has a distinct function in chondrosarcoma, namely, regulation of EMT and cell migration. Higher AMPK expression has been found to correlate with lower tumor stage and/or grade in various cancers, including ovarian, hepatocellular, pancreatic, breast, and gallbladder cancers [28,29], and evidence indicates that the tumors in Peutz-Jeghers syndrome may result from deficient activation of AMPK due to inactivation of serine/threonine kinase 11, the major upstream kinase required for AMPK activation [30].
Recently, several studies have shown that AMPK plays an important role in metastasis through its effects on cell migration, and that AMPK stimulates cell motility via microtubule polymerization [31]. Silencing AMPK expression disrupts front-rear polarity and results in directional migration defects [32-34]. Our results indicate that ET-1 induces cell migration by activation of AMPK in human chondrosarcoma cells. AMPK may therefore represent a potential target for the development of new anticancer drugs, in particular those targeting metastasis. miRNAs control gene expression by binding to complementary sequences in the 3′UTRs of target mRNAs [35,36]. Deregulated miRNA expression has been cited in human cancers and may affect multiple steps during metastasis [37]. In particular, the following miRNAs can regulate metastatic ability in osteosarcoma: miR-507 [38], miR-497 [39], miR-519d [23], miR-185 [40], miR-218 [40], and miR-200b [41]. Our study indicates that miR-300 is downregulated in response to ET-1; miR-300 reportedly suppresses tumor formation in human glioblastoma, making it an attractive candidate biomarker for the prediction of response to cancer treatment [42,43]. In this study, transfection of cells with a miR-300 mimic reduced ET-1-induced cell migration, indicating that miR-300 can function as a tumor suppressor. Previous research has demonstrated that Twist functions in many stages of cancer metastasis [44,45], and ET-1 has been reported to regulate Twist expression [46]. Our study found increased Twist mRNA and protein expression in JJ012(S10) cells. Knockdown of Twist expression via transfection with Twist siRNA abolished ET-1-induced migration activity of chondrosarcoma cells, demonstrating that Twist is involved in ET-1-mediated cell migration.

[Figure 7. A. Representation of the human Twist 3′UTR containing the miR-300 binding site. B. Cells were transfected with a wt or mutant Twist 3′UTR luciferase plasmid for 24 h followed by stimulation with ET-1 (10~100 nM) for 24 h, and the relative luciferase activity was measured. C-E. Cells were pretreated with BQ123 (10 μM), BQ788 (10 μM), Ara A (1 mM), and compound C (10 μM) for 30 min or pre-transfected with specific siRNAs for 24 h followed by stimulation with ET-1 for 24 h. miR-300 expression or wild-type Twist 3′UTR luciferase activity was examined (n = 4-5). Results are expressed as the mean ± S.E.M. *p < 0.05 compared with control. #p < 0.05 compared with the ET-1-treated group.]

miRNA target prediction analysis indicated that Twist is a target of miR-300, and transfection of cells with the miR-300 mimic strongly inhibited ET-1-induced Twist expression. Our findings indicate that miR-300 directly represses Twist protein expression through binding to the 3′UTR of the human Twist gene, and thus negatively regulates Twist-mediated metastasis. In conclusion, metastasis plays a critical role in the progression of tumors and is the main cause of death from cancer. Chemotherapy and radiation play limited roles in primary treatment of chondrosarcoma, and no specific standardized therapy has yet proven effective [47]. Our study elucidates the mechanism of ET-1-induced EMT in chondrosarcoma; miR-300 may play a pivotal role in this process. Our findings provide a novel insight into the role of ET-1 in cancer metastasis and indicate that ET-1 may be a novel therapeutic target in chondrosarcoma metastasis.
MATERIALS AND METHODS

Materials

Protein A/G beads, anti-mouse and anti-rabbit IgG-conjugated horseradish peroxidase, and rabbit polyclonal antibodies specific for vimentin, N-cadherin, E-cadherin, Twist, p-AMPK, AMPK, and β-actin were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). ON-TARGETplus siRNAs against Twist, AMPKα1, AMPKα2, and control siRNA were purchased from Dharmacon Research (Lafayette, CO, USA). Recombinant human ET-1 was purchased from PeproTech (Rocky Hill, NJ, USA). miRNA control and miR-300 mimic were purchased from Invitrogen (Carlsbad, CA, USA). All other chemicals were obtained from Sigma-Aldrich (St Louis, MO, USA).

Cell culture

The human chondrosarcoma cell line JJ012 was kindly provided by the laboratory of Dr. Sean P. Scully (University of Miami School of Medicine, Miami, FL). The human chondrosarcoma cell line SW1353 was obtained from the American Type Culture Collection. Cells were cultured in Dulbecco's modified Eagle's medium (DMEM)/α-MEM supplemented with 10% fetal bovine serum and 100 units/mL penicillin/streptomycin at 37°C in a humidified chamber with 5% CO2. The basal levels of ET-1 in JJ012 and SW1353 cells are 2.16 pg/ml and 1.33 pg/ml, respectively.

Western blot analysis

Cellular lysates were prepared, and proteins were resolved by SDS-PAGE and transferred to Immobilon polyvinylidene difluoride (PVDF) membranes. The blots were blocked with 4% BSA for 1 h at room temperature and then probed with rabbit anti-human antibodies against AMPK, p-AMPK, E-cadherin, N-cadherin, vimentin, or Twist (1:1000) for 1 h at room temperature. After three washes, the blots were incubated with a donkey anti-rabbit peroxidase-conjugated secondary antibody (1:1000) for 1 h at room temperature. The protein bands were visualized by enhanced chemiluminescence using an ImageQuant LAS 4000 (GE Healthcare Life Sciences, Little Chalfont, UK). Quantitative data were obtained using a computing densitometer and ImageQuant software (Molecular Dynamics, Sunnyvale, CA).

Quantitative real-time PCR

Quantitative real-time PCR (qPCR) analysis was carried out using the Taqman® One-Step PCR Master Mix (Applied Biosystems, Foster City, CA). 100 ng of total cDNA was added per 25 μl reaction with sequence-specific primers and Taqman® probes. Sequences for all target gene primers and probes were purchased commercially (β-actin was used as the internal control) (Applied Biosystems, CA). Quantitative RT-PCR assays were carried out in triplicate on a StepOnePlus sequence detection system. The cycling conditions were 10 min of polymerase activation at 95°C followed by 40 cycles at 95°C for 15 sec and 60°C for 60 sec. The threshold was set above the non-template control background and within the linear phase of target gene amplification to calculate the cycle number at which the transcript was detected (denoted as CT).

miRNA qPCR analysis

Total RNA was extracted and cDNA was synthesized using the Mir-X™ miRNA First-Strand Synthesis Kit (Clontech, CA, USA). Quantitative RT-PCR assays were carried out in triplicate on a StepOnePlus sequence detection system. The cycling conditions were 10 min of polymerase activation at 95°C followed by 40 cycles at 95°C for 15 sec and 60°C for 60 sec. Relative gene expression was quantified using an endogenous control gene (U6). The threshold cycle (CT) was defined as the fractional cycle number at which fluorescence passed a fixed threshold, and relative expression was calculated using the comparative CT method.
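For readers unfamiliar with the comparative CT method mentioned above, the following minimal Python sketch shows the standard 2^(-ΔΔCT) calculation; all CT values are hypothetical and for illustration only.

# Minimal sketch of the comparative CT (2^-ddCT) calculation used for the
# qPCR analyses; CT values below are hypothetical.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. control, normalized to a
    reference gene (beta-actin for mRNA, U6 for miRNA)."""
    d_ct_treated = ct_target - ct_ref            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: miR-300 CT rises from 24.0 to 26.0 after ET-1 while U6 is stable,
# i.e., miR-300 expression falls to ~25% of the control level.
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 0.25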
Transwell migration and invasion assays

All cell migration assays were performed using Transwell inserts (8-μm pore size; Costar, NY) in 24-well dishes. Chondrosarcoma cells were pretreated for 30 min with the indicated concentrations of inhibitors or vehicle (0.1% DMSO). Cells (1 × 10⁴ in 200 μl of serum-free medium) were seeded in the upper chamber of the Transwell, and 300 μl of the same medium containing varying concentrations of ET-1 was placed in the lower chamber. Each experiment was performed with triplicate wells and repeated at least 3 times. For the cell invasion assay, each well was pre-coated with Matrigel (25 mg/50 mL; BD Biosciences, Bedford, MA) to form a continuous thin layer; otherwise, the migration assay protocol was followed.

Establishment of migration-prone sublines

Subpopulations of JJ012 cells were selected according to their differential migration abilities using the cell culture insert system described above. After overnight migration, cells that penetrated the pores and migrated to the undersides of the filters were trypsinized and harvested for a second round of selection. After 10 rounds of selection, the migration-prone subline was designated JJ012(S10); the original cells were designated JJ012(S0) [48].

Wound healing assay

For wound-healing migration assays, cells were seeded on 12-well plates at a density of 1 × 10⁵ cells/well in culture medium. At 24 h after seeding, the confluent monolayer was scratched with a fine pipette tip, and migration was visualized by microscopy. The rate of wound closure was observed at the indicated times.

Plasmid construction and luciferase reporter assay

The wild-type (wt) Twist 3′UTR was cloned into the pGL2-Control vector. Mutation of the Twist 3′UTR was performed by the QuickChange™ site-directed mutagenesis protocol (Stratagene; La Jolla, CA, USA), according to the manufacturer's instructions.

Immunohistochemistry analysis

The human chondrosarcoma tissue array was purchased from Biomax (Rockville, MD, USA; 6 cases of normal cartilage, 24 cases of grade I chondrosarcoma, 9 cases of grade II chondrosarcoma, and 15 cases of grade III chondrosarcoma). Fixed and paraffin-embedded tissues were deparaffinized with xylene and rehydrated through a graded series of alcohols to water. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide. Heat-induced antigen retrieval was carried out for all sections in 0.01 M sodium citrate buffer, pH 6, at 95°C for 20 min. Human ET-1 or Twist antibodies were applied at a dilution of 1:200 and incubated at 4°C overnight. Bound antibodies were detected with the NovoLink Polymer Detection System (Leica Microsystems, Newcastle, UK) and visualized with the diaminobenzidine reaction. The sections were counterstained with hematoxylin. Staining intensity was evaluated as 0, 1+, 2+, 3+, 4+, or 5+ for no staining, very weak staining, weak staining, moderate staining, strong staining, and very strong staining, respectively, by two independent and blinded observers. The IHC score was determined as the sum of the intensity scores.

Statistics

All data are presented as the mean ± SEM. Statistical comparison of two groups was performed using Student's t-test. Statistical comparisons of more than two groups were performed using one-way analysis of variance with Bonferroni's post hoc test. In all cases, P < 0.05 was considered significant.
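A minimal sketch of these statistical comparisons follows, using hypothetical measurement groups (e.g., migrated-cell counts); it is an illustration of the stated tests, not the authors' analysis script.

from itertools import combinations
from scipy import stats

groups = {
    "control": [12, 15, 14, 13],
    "ET-1": [28, 31, 26, 30],
    "ET-1+BQ123": [16, 18, 15, 17],
}

# Two groups: Student's t-test.
t, p = stats.ttest_ind(groups["control"], groups["ET-1"])
print(f"t-test p = {p:.4f}")

# More than two groups: one-way ANOVA, then Bonferroni-corrected
# pairwise t-tests as the post hoc step.
f, p_anova = stats.f_oneway(*groups.values())
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(p_raw * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: corrected p = {p_bonf:.4f}")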